Updates from: 07/02/2021 03:05:29
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Password Reset Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-password-reset-policy.md
Previously updated : 05/24/2021 Last updated : 07/01/2021
zone_pivot_groups: b2c-policy-type
[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]
-## Password reset flow
+## Overview
-The [sign-up and sign-in journey](add-sign-up-and-sign-in-policy.md) allows users to reset their own password using the **Forgot your password?** link. The password reset flow involves the following steps:
+Within a [sign-up and sign-in journey](add-sign-up-and-sign-in-policy.md), users can reset their own passwords using the **Forgot your password?** link. This self-service password reset flow applies to local accounts in Azure AD B2C that use an [email address](sign-in-options.md#email-sign-in) or [username](sign-in-options.md#username-sign-in) with a password for sign-in.
-1. From the sign-up and sign-in page, the user clicks the **Forgot your password?** link. Azure AD B2C initiates the password reset flow.
-2. The user provides their email address and selects **Send verification code**. Azure AD B2C will then send the user a verification code.
+The password reset flow involves the following steps:
-* The user needs to open the mail box and copy the verification code. The user then enters the verification code in Azure AD B2C password reset page, and selects **Verify code**.
+![Password reset flow](./media/add-password-reset-policy/password-reset-flow.png)
-> [!NOTE]
-> After the email is verified, the user can still select **Change e-mail**, type the other email, and repeat the email verification from the beginning.
-3. The user can then enter a new password.
+**1.** From the sign-up and sign-in page, the user clicks the **Forgot your password?** link. Azure AD B2C initiates the password reset flow.
-![Password reset flow](./media/add-password-reset-policy/password-reset-flow.png)
+**2.** The user provides their email address and selects **Send verification code**. Azure AD B2C sends the verification code to the user's inbox. The user copies the verification code from the email, enters the code in the Azure AD B2C password reset page, and selects **Verify code**.
-The password reset flow applies to local accounts in Azure AD B2C that use an [email address](sign-in-options.md#email-sign-in) or [username](sign-in-options.md#username-sign-in) with a password for sign-in.
+**3.** The user can then enter a new password. (After the email is verified, the user can still select the **Change email** button; see [Hiding the change email button](#hiding-the-change-email-button) below.)
> [!TIP]
-> The self-service password reset flow allows users to change their password when the user forgets their password and wants to reset it. Consider configuring a [password change flow](add-password-change-policy.md) to support cases where a user knows their password and wants to change it.
+> The self-service password reset flow allows users to reset their own password when they've forgotten it.
+> - For cases where a user knows their password and wants to change it, use a [password change flow](add-password-change-policy.md).
+> - For cases where you want to force users to reset their passwords (for example, when they sign in for the first time, when their passwords have been reset by an admin, or after they've been migrated to Azure AD B2C with random passwords), use a [force password reset](force-password-reset.md) flow.
+
+### Hiding the change email button
+
+After the email is verified, the user can still select **Change email**, type another email address, and repeat the email verification from the beginning. If you'd prefer to hide the **Change email** button, you can modify the CSS to hide the associated HTML elements on the page. For example, you can add the following CSS entry to selfasserted.html and [customize the user interface with HTML templates](customize-ui-with-html.md).
+
+```html
+<style type="text/css">
+ .changeClaims
+ {
+ visibility: hidden;
+ }
+</style>
+```
-A common practice after migrating users to Azure AD B2C with random passwords is to have the users verify their email addresses and reset their passwords during their first sign-in. It's also common to force the user to reset their password after an administrator changes their password; see [force password reset](force-password-reset.md) to enable this feature.
+Note that the default name of the **Change email** button in the selfasserted.html page is `changeclaims`. You can find the button name by inspecting the page source of the sign-up page using a browser tool (such as Inspect).
## Prerequisites
active-directory On Premises Sql Connector Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/on-premises-sql-connector-configure.md
For more installation and configuration information see:
Depending on the options you select, some of the wizard screens may or may not be available and the information may be slightly different. For purposes of this configuration, the user object type is used. Use the information below to guide you in your configuration. -
+**Supported systems**
+* Microsoft SQL Server & SQL Azure
+* IBM DB2 10.x
+* IBM DB2 9.x
+* Oracle 10 & 11g
+* Oracle 12c and 18c
+* MySQL 5.x
## Create a generic SQL connector

To create a generic SQL connector, use the following steps:
active-directory Concept Authentication Methods https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-authentication-methods.md
Previously updated : 06/17/2021 Last updated : 07/01/2021
To learn more about how each authentication method works, see the following sepa
> [!NOTE]
> In Azure AD, a password is often one of the primary authentication methods. You can't disable the password authentication method. If you use a password as the primary authentication factor, increase the security of sign-in events using Azure AD Multi-Factor Authentication.
-> [!IMPORTANT]
-> While FIDO2 meets the requirements necessary to serve as a form of MFA, FIDO2 can only be used as a passwordless form of authentication.
- The following additional verification methods can be used in certain scenarios: * [App passwords](howto-mfa-app-passwords.md) - used for old applications that don't support modern authentication and can be configured for per-user Azure AD Multi-Factor Authentication.
active-directory Concept Authentication Phone Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-authentication-phone-options.md
Previously updated : 06/08/2021 Last updated : 06/30/2021
For direct authentication using text message, you can [Configure and enable users for SMS-based authentication](howto-authentication-sms-signin.md). SMS-based sign-in is great for Frontline workers. With SMS-based sign-in, users don't need to know a username and password to access applications and services. The user instead enters their registered mobile phone number, receives a text message with a verification code, and enters that in the sign-in interface.
-Users can also verify themselves using a mobile phone or office phone as secondary form of authentication used during Azure AD Multi-Factor Authentication or self-service password reset (SSPR).
+Users can also verify themselves using a mobile phone or office phone as a secondary form of authentication during Azure AD Multi-Factor Authentication or self-service password reset (SSPR). Phone call verification is not available for Azure AD tenants with trial subscriptions.
To work properly, phone numbers must be in the format *+CountryCode PhoneNumber*, for example, *+1 4251234567*.
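If you manage registrations programmatically, the same format applies. Below is a minimal sketch using the Microsoft Graph PowerShell SDK; the user principal name is a hypothetical placeholder, and the caller is assumed to have the `UserAuthenticationMethod.ReadWrite.All` permission.

```PowerShell
# Minimal sketch, assuming the Microsoft Graph PowerShell SDK is installed
# (Install-Module Microsoft.Graph). The UPN below is a hypothetical placeholder.
Connect-MgGraph -Scopes "UserAuthenticationMethod.ReadWrite.All"

# Register a mobile phone method in the required +CountryCode PhoneNumber format.
New-MgUserAuthenticationPhoneMethod -UserId "balas@contoso.com" `
    -PhoneNumber "+1 4251234567" `
    -PhoneType "mobile"
```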
## Office phone verification
-With phone call verification during SSPR or Azure AD Multi-Factor Authentication, an automated voice call is made to the phone number registered by the user. To complete the sign-in process, the user is prompted to press # on their keypad.
+With phone call verification during SSPR or Azure AD Multi-Factor Authentication, an automated voice call is made to the phone number registered by the user. To complete the sign-in process, the user is prompted to press # on their keypad.
## Troubleshooting phone options
active-directory Reference Aadsts Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reference-aadsts-error-codes.md
Previously updated : 03/17/2021 Last updated : 07/01/2021
For example, if you received the error code "AADSTS50058" then do a search in [h
| AADSTS50048 | SubjectMismatchesIssuer - Subject mismatches Issuer claim in the client assertion. Contact the tenant admin. |
| AADSTS50049 | NoSuchInstanceForDiscovery - Unknown or invalid instance. |
| AADSTS50050 | MalformedDiscoveryRequest - The request is malformed. |
-| AADSTS50053 | IdsLocked - The account is locked because the user tried to sign in too many times with an incorrect user ID or password. |
-| AADSTS50055 | InvalidPasswordExpiredPassword - The password is expired. |
-| AADSTS50056 | Invalid or null password -Password does not exist in store for this user. |
-| AADSTS50057 | UserDisabled - The user account is disabled. The account has been disabled by an administrator. |
-| AADSTS50058 | UserInformationNotProvided - This means that a user is not signed in. This is a common error that's expected when a user is unauthenticated and has not yet signed in.</br>If this error is encountered in an SSO context where the user has previously signed in, this means that the SSO session was either not found or invalid.</br>This error may be returned to the application if prompt=none is specified. |
+| AADSTS50053 | IdsLocked - The account is locked because the user tried to sign in too many times with an incorrect user ID or password. The user is blocked due to repeated sign-in attempts. See [Remediate risks and unblock users](/azure/active-directory/identity-protection/howto-unblock-user). |
+| AADSTS50055 | InvalidPasswordExpiredPassword - The password is expired. The user's password is expired, and therefore their login or session was ended. They will be offered the opportunity to reset it, or may ask an admin to reset it via [Reset a user's password using Azure Active Directory](/azure/active-directory/fundamentals/active-directory-users-reset-password-azure-portal). |
+| AADSTS50056 | Invalid or null password: password does not exist in the directory for this user. The user should be asked to enter their password again. |
+| AADSTS50057 | UserDisabled - The user account is disabled. The user object in Active Directory backing this account has been disabled. An admin can re-enable this account [through PowerShell](/powershell/module/activedirectory/enable-adaccount). |
+| AADSTS50058 | UserInformationNotProvided - Session information is not sufficient for single-sign-on. This means that a user is not signed in. This is a common error that's expected when a user is unauthenticated and has not yet signed in.</br>If this error is encountered in an SSO context where the user has previously signed in, this means that the SSO session was either not found or invalid.</br>This error may be returned to the application if prompt=none is specified. |
| AADSTS50059 | MissingTenantRealmAndNoUserInformationProvided - Tenant-identifying information was not found in either the request or implied by any provided credentials. The user can contact the tenant admin to help resolve the issue. |
-| AADSTS50061 | SignoutInvalidRequest - The sign-out request is invalid. |
+| AADSTS50061 | SignoutInvalidRequest - Unable to complete signout. The request was invalid. |
| AADSTS50064 | CredentialAuthenticationError - Credential validation on username or password has failed. |
| AADSTS50068 | SignoutInitiatorNotParticipant - Sign out has failed. The app that initiated sign out is not a participant in the current session. |
| AADSTS50070 | SignoutUnknownSessionIdentifier - Sign out has failed. The sign out request specified a name identifier that didn't match the existing session(s). |
| AADSTS50079 | UserStrongAuthEnrollmentRequired - Due to a configuration change made by the administrator, or because the user moved to a new location, the user is required to use multi-factor authentication. |
| AADSTS50085 | Refresh token needs social IDP login. Have user try signing-in again with username -password |
| AADSTS50086 | SasNonRetryableError |
-| AADSTS50087 | SasRetryableError - The service is temporarily unavailable. Try again. |
-| AADSTS50089 | Flow token expired - Authentication Failed. Have the user try signing-in again with username -password. |
+| AADSTS50087 | SasRetryableError - A transient error has occurred during strong authentication. Please try again. |
+| AADSTS50088 | Limit on telecom MFA calls reached. Please try again in a few minutes. |
+| AADSTS50089 | Authentication failed due to flow token expired. Expected - auth codes, refresh tokens, and sessions expire over time or are revoked by the user or an admin. The app will request a new login from the user. |
| AADSTS50097 | DeviceAuthenticationRequired - Device authentication is required. |
| AADSTS50099 | PKeyAuthInvalidJwtUnauthorized - The JWT signature is invalid. |
| AADSTS50105 | EntitlementGrantsNotFound - The signed in user is not assigned to a role for the signed in app. Assign the user to the app. To learn more, see the troubleshooting article for error [AADSTS50105](/troubleshoot/azure/active-directory/error-code-aadsts50105-user-not-assigned-role). |
| AADSTS50107 | InvalidRealmUri - The requested federation realm object does not exist. Contact the tenant admin. |
| AADSTS50120 | ThresholdJwtInvalidJwtFormat - Issue with JWT header. Contact the tenant admin. |
| AADSTS50124 | ClaimsTransformationInvalidInputParameter - Claims Transformation contains invalid input parameter. Contact the tenant admin to update the policy. |
+| AADSTS501241 | Mandatory Input '{paramName}' missing from transformation id '{transformId}'. This error is returned while Azure AD is trying to build a SAML response to the application. The NameID (NameIdentifier) claim is mandatory in the SAML response, so if Azure AD fails to get the source attribute for the NameID claim, it returns this error. As a resolution, ensure that you add claim rules in Azure portal > Azure Active Directory > Enterprise Applications > Select your application > Single Sign-On > User Attributes & Claims > Unique User Identifier (Name ID). |
| AADSTS50125 | PasswordResetRegistrationRequiredInterrupt - Sign-in was interrupted because of a password reset or password registration entry. |
| AADSTS50126 | InvalidUserNameOrPassword - Error validating credentials due to invalid username or password. |
| AADSTS50127 | BrokerAppNotInstalled - User needs to install a broker app to gain access to this content. |
| AADSTS50135 | PasswordChangeCompromisedPassword - Password change is required due to account risk. |
| AADSTS50136 | RedirectMsaSessionToApp - Single MSA session detected. |
| AADSTS50139 | SessionMissingMsaOAuth2RefreshToken - The session is invalid due to a missing external refresh token. |
-| AADSTS50140 | KmsiInterrupt - This error occurred due to "Keep me signed in" interrupt when the user was signing-in. [Open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) with Correlation ID, Request ID, and Error code to get more details. |
+| AADSTS50140 | KmsiInterrupt - This error occurred due to "Keep me signed in" interrupt when the user was signing-in. This is an expected part of the login flow, where a user is asked if they want to remain signed into their current browser to make further logins easier. For more information, see [The new Azure AD sign-in and "Keep me signed in" experiences rolling out now!](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/the-new-azure-ad-sign-in-and-keep-me-signed-in-experiences/m-p/128267). You can [open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) with Correlation ID, Request ID, and Error code to get more details. |
| AADSTS50143 | Session mismatch - Session is invalid because user tenant does not match the domain hint due to different resource. [Open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) with Correlation ID, Request ID, and Error code to get more details. |
| AADSTS50144 | InvalidPasswordExpiredOnPremPassword - User's Active Directory password has expired. Generate a new password for the user or have the user use the self-service reset tool to reset their password. |
| AADSTS50146 | MissingCustomSigningKey - This app is required to be configured with an app-specific signing key. It is either not configured with one, or the key has expired or is not yet valid. |
| AADSTS65004 | UserDeclinedConsent - User declined to consent to access the app. Have the user retry the sign-in and consent to the app. |
| AADSTS65005 | MisconfiguredApplication - The app required resource access list does not contain apps discoverable by the resource or The client app has requested access to resource, which was not specified in its required resource access list or Graph service returned bad request or resource not found. If the app supports SAML, you may have configured the app with the wrong Identifier (Entity). To learn more, see the troubleshooting article for error [AADSTS650056](/troubleshoot/azure/active-directory/error-code-aadsts650056-misconfigured-app). |
| AADSTS650052 | The app needs access to a service `(\"{name}\")` that your organization `\"{organization}\"` has not subscribed to or enabled. Contact your IT Admin to review the configuration of your service subscriptions. |
+| AADSTS650054 | The application asked for permissions to access a resource that has been removed or is no longer available. Make sure that all resources the app is calling are present in the tenant you are operating in. |
| AADSTS67003 | ActorNotValidServiceIdentity |
| AADSTS70000 | InvalidGrant - Authentication failed. The refresh token is not valid. Error may be due to the following reasons:<ul><li>Token binding header is empty</li><li>Token binding hash does not match</li></ul> |
| AADSTS70001 | UnauthorizedClient - The application is disabled. To learn more, see the troubleshooting article for error [AADSTS70001](/troubleshoot/azure/active-directory/error-code-aadsts70001-app-not-found-in-directory). |
| AADSTS70003 | UnsupportedGrantType - The app returned an unsupported grant type. |
| AADSTS70004 | InvalidRedirectUri - The app returned an invalid redirect URI. The redirect address specified by the client does not match any configured addresses or any addresses on the OIDC approve list. |
| AADSTS70005 | UnsupportedResponseType - The app returned an unsupported response type due to the following reasons:<ul><li>response type 'token' is not enabled for the app</li><li>response type 'id_token' requires the 'OpenID' scope -contains an unsupported OAuth parameter value in the encoded wctx</li></ul> |
+| AADSTS700054 | Response_type 'id_token' is not enabled for the application. The application requested an ID token from the authorization endpoint, but did not have ID token implicit grant enabled. Go to Azure Portal > Azure Active Directory > App registrations > Select your application > Authentication > Under 'Implicit grant and hybrid flows', make sure 'ID tokens' is selected.|
| AADSTS70007 | UnsupportedResponseMode - The app returned an unsupported value of `response_mode` when requesting a token. |
| AADSTS70008 | ExpiredOrRevokedGrant - The refresh token has expired due to inactivity. The token was issued on XXX and was inactive for a certain amount of time. |
| AADSTS70011 | InvalidScope - The scope requested by the app is invalid. |
active-directory Scenario Web App Sign User App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-web-app-sign-user-app-configuration.md
Here's the Python configuration file in [app_config.py](https://github.com/Azure
```Python
CLIENT_SECRET = "Enter_the_Client_Secret_Here"
-AUTHORITY = "https://login.microsoftonline.com/common""
+AUTHORITY = "https://login.microsoftonline.com/common"
CLIENT_ID = "Enter_the_Application_Id_here"
ENDPOINT = 'https://graph.microsoft.com/v1.0/users'
SCOPE = ["User.ReadBasic.All"]
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/whats-new-docs.md
Previously updated : 04/30/2021 Last updated : 07/01/2021
Welcome to what's new in the Microsoft identity platform documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. +
+## June 2021
+
+### New articles
+
+- [Best practices for least privileged access for applications](secure-least-privileged-access.md)
+- [Differences between ADAL.NET and MSAL.NET apps](msal-net-differences-adal-net.md)
+- [How to: Get a complete list of apps using ADAL in your tenant](howto-get-list-of-all-active-directory-auth-library-apps.md)
+- [How to migrate confidential client applications from ADAL.NET to MSAL.NET](msal-net-migration-confidential-client.md)
+
+### Updated articles
+
+- [Application and service principal objects in Azure Active Directory](app-objects-and-service-principals.md)
+- [A web app that calls web APIs: Code configuration](scenario-web-app-call-api-app-configuration.md)
+- [Customize claims emitted in tokens for a specific app in a tenant](active-directory-claims-mapping.md)
+- [Microsoft identity platform code samples](sample-v2-code.md)
+- [Migrating applications to MSAL.NET or Microsoft.Identity.Web](msal-net-migration.md)
+- [Token cache serialization in MSAL.NET](msal-net-token-cache-serialization.md)
+- [What's new for authentication?](reference-breaking-changes.md)
+
## May 2021

### New articles
- [Single-page application: Sign-in and Sign-out](scenario-spa-sign-in.md)
- [Use MSAL in a national cloud environment](msal-national-cloud.md)
- [Understanding Azure AD application consent experiences](application-consent-experience.md)
-## March 2021
-
-### New articles
-
-- [Restore or remove a recently deleted application with the Microsoft identity platform](./howto-restore-app.md)
-
-### Updated articles
-
-- [Admin consent on the Microsoft identity platform](v2-admin-consent.md)
-- [Configuration requirements and troubleshooting tips for Xamarin Android with MSAL.NET](msal-net-xamarin-android-considerations.md)
-- [Daemon app that calls web APIs - acquire a token](scenario-daemon-acquire-token.md)
-- [Daemon app that calls web APIs - code configuration](scenario-daemon-app-configuration.md)
-- [Daemon app that calls web APIs - call a web API from the app](scenario-daemon-call-api.md)
-- [Daemon app that calls web APIs - move to production](scenario-daemon-production.md)
-- [Desktop app that calls web APIs: Acquire a token](scenario-desktop-acquire-token.md)
-- [Desktop app that calls web APIs: Code configuration](scenario-desktop-app-configuration.md)
-- [Desktop app that calls web APIs: Call a web API](scenario-desktop-call-api.md)
-- [How to: Customize claims emitted in tokens for a specific app in a tenant (Preview)](active-directory-claims-mapping.md)
-- [Logging in MSAL for Python](msal-logging-python.md)
-- [Microsoft Enterprise SSO plug-in for Apple devices (preview)](apple-sso-plugin.md)
-- [Quickstart: Add Microsoft identity platform sign-in to an ASP.NET web app](quickstart-v2-aspnet-webapp.md)
-- [Quickstart: Add sign-in with Microsoft to an ASP.NET Core web app](quickstart-v2-aspnet-core-webapp.md)
-- [Quickstart: Get a token and call the Microsoft Graph API by using a console app's identity](quickstart-v2-netcore-daemon.md)
-- [Quickstart: Protect an ASP.NET Core web API with the Microsoft identity platform](quickstart-v2-aspnet-core-web-api.md)
-- [Quickstart: Sign in users and get an access token in an Angular single-page application](quickstart-v2-angular.md)
-- [Support and help options for developers](developer-support-help-options.md)
-- [Web app that signs in users: Code configuration](scenario-web-app-sign-user-app-configuration.md)
-- [Web app that signs in users: Sign-in and sign-out](scenario-web-app-sign-user-sign-in.md)
active-directory Howto Vm Sign In Azure Ad Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/howto-vm-sign-in-azure-ad-linux.md
Previously updated : 05/20/2021 Last updated : 06/30/2021

# Preview: Login to a Linux virtual machine in Azure with Azure Active Directory using SSH certificate-based authentication
There are multiple ways you can configure role assignments for VM, as an example
To configure role assignments for your Azure AD enabled Linux VMs:
-1. Navigate to the virtual machine to be configured.
-1. Select **Access control (IAM)** from the menu options.
-1. Select **Add**, **Add role assignment** to open the Add role assignment pane.
-1. In the **Role** drop-down list, select the role **Virtual Machine Administrator Login** or **Virtual Machine User Login**.
-1. In the **Select** field, select a user, group, service principal, or managed identity. If you do not see the security principal in the list, you can type in the **Select** box to search the directory for display names, email addresses, and object identifiers.
-1. Select **Save**, to assign the role.
+1. Select **Access control (IAM)**.
+
+1. Select **Add** > **Add role assignment** to open the Add role assignment page.
+
+1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+
+ | Setting | Value |
+ | | |
+ | Role | **Virtual Machine Administrator Login** or **Virtual Machine User Login** |
+ | Assign access to | User, group, service principal, or managed identity |
+
+ ![Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
After a few moments, the security principal is assigned the role at the selected scope.
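If you'd rather script the assignment than use the portal, a minimal sketch with the Az PowerShell module is shown below; the sign-in name is a hypothetical placeholder, while the VM and resource group names match the example used later in this article.

```PowerShell
# Minimal sketch, assuming the Az PowerShell module and an existing session
# (Connect-AzAccount). The sign-in name is a hypothetical placeholder.
New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Virtual Machine User Login" `
    -ResourceGroupName "AzureADLinuxVMPreview" `
    -ResourceName "myVM" `
    -ResourceType "Microsoft.Compute/virtualMachines"
```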
az login
This command will launch a browser window and a user can sign in using their Azure AD account.
-The following [az ssh](/cli/azure/ssh?view=azure-cli-latest) example automatically resolves the appropriate IP address for the VM.
+The following example automatically resolves the appropriate IP address for the VM.
```azurecli
az ssh vm -n myVM -g AzureADLinuxVMPreview
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
Previously updated : 06/04/2021 Last updated : 06/30/2021

# Login to Windows virtual machine in Azure using Azure Active Directory authentication
There are multiple ways you can configure role assignments for VM:
To configure role assignments for your Azure AD enabled Windows Server 2019 Datacenter VMs:
-1. Navigate to the specific virtual machine overview page
-1. Select **Access control (IAM)** from the menu options
-1. Select **Add**, **Add role assignment** to open the Add role assignment pane.
-1. In the **Role** drop-down list, select a role such as **Virtual Machine Administrator Login** or **Virtual Machine User Login**.
-1. In the **Select** field, select a user, group, service principal, or managed identity. If you don't see the security principal in the list, you can type in the **Select** box to search the directory for display names, email addresses, and object identifiers.
-1. Select **Save**, to assign the role.
+1. Select **Access control (IAM)**.
-After a few moments, the security principal is assigned the role at the selected scope.
+1. Select **Add** > **Add role assignment** to open the Add role assignment page.
-![Assign roles to users who will access the VM](./media/howto-vm-sign-in-azure-ad-windows/azure-portal-access-control-assign-role.png)
+1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+
+ | Setting | Value |
+ | | |
+ | Role | **Virtual Machine Administrator Login** or **Virtual Machine User Login** |
+ | Assign access to | User, group, service principal, or managed identity |
+
+ ![Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
### Using the Azure Cloud Shell experience
The AADLoginForWindows extension must install successfully in order for the VM t
- `curl https://login.microsoftonline.com/ -D -`
- `curl https://login.microsoftonline.com/<TenantID>/ -D -`
- > [!NOTE]
- > Replace `<TenantID>` with the Azure AD Tenant ID that is associated with the Azure subscription.
- `curl https://enterpriseregistration.windows.net/ -D -`
- `curl https://device.login.microsoftonline.com/ -D -`
- `curl https://pas.windows.net/ -D -`
+ > [!NOTE]
+ > Replace `<TenantID>` with the Azure AD Tenant ID that is associated with the Azure subscription.<br/> `enterpriseregistration.windows.net` and `pas.windows.net` should return 404 Not Found, which is expected behavior.
+
1. The Device State can be viewed by running `dsregcmd /status`. The goal is for Device State to show as `AzureAdJoined : YES`. > [!NOTE]
This Exit code translates to `DSREG_AUTOJOIN_DISC_FAILED` because the extension
- `curl https://login.microsoftonline.com/ -D -`
- `curl https://login.microsoftonline.com/<TenantID>/ -D -`
-
- > [!NOTE]
- > Replace `<TenantID>` with the Azure AD Tenant ID that is associated with the Azure subscription. If you need to find the tenant ID, you can hover over your account name to get the directory / tenant ID, or select **Azure Active Directory > Properties > Directory ID** in the Azure portal.
- `curl https://enterpriseregistration.windows.net/ -D -`
- `curl https://device.login.microsoftonline.com/ -D -`
- `curl https://pas.windows.net/ -D -`
+
+ > [!NOTE]
+ > Replace `<TenantID>` with the Azure AD Tenant ID that is associated with the Azure subscription. If you need to find the tenant ID, you can hover over your account name to get the directory / tenant ID, or select **Azure Active Directory > Properties > Directory ID** in the Azure portal.<br/>`enterpriseregistration.windows.net` and `pas.windows.net` should return 404 Not Found, which is expected behavior.
1. If any of the commands fail with "Could not resolve host `<URL>`", try running this command to determine the DNS server that is being used by the VM.
active-directory Groups Saasapps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/groups-saasapps.md
Previously updated : 12/02/2020 Last updated : 06/30/2021
# Using a group to manage access to SaaS applications
-Using Azure Active Directory (Azure AD) with an Azure AD Premium license plan, you can use groups to assign access to a SaaS application that's integrated with Azure AD. For example, if you want to assign access for the marketing department to use five different SaaS applications, you can create a group that contains the users in the marketing department, and then assign that group to these five SaaS applications that are needed by the marketing department. This way you can save time by managing the membership of the marketing department in one place. Users then are assigned to the application when they are added as members of the marketing group, and have their assignments removed from the application when they are removed from the marketing group. This capability can be used with hundreds of applications that you can add from within the Azure AD Application Gallery.
+Using Azure Active Directory (Azure AD) with an Azure AD Premium license plan, you can use groups to assign access to a SaaS application that's integrated with Azure AD. For example, if you want to assign access for the marketing department to use five different SaaS applications, you can create an Office 365 or security group that contains the users in the marketing department, and then assign that group to these five SaaS applications that are needed by the marketing department. This way you can save time by managing the membership of the marketing department in one place. Users then are assigned to the application when they are added as members of the marketing group, and have their assignments removed from the application when they are removed from the marketing group. This capability can be used with hundreds of applications that you can add from within the Azure AD Application Gallery.
> [!IMPORTANT]
> You can use this feature only after you start an Azure AD Premium trial or purchase an Azure AD Premium license plan.
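As a sketch of the first step in the marketing example above, the group itself can be created with the Microsoft Graph PowerShell SDK; the display name and mail nickname are illustrative assumptions.

```PowerShell
# Minimal sketch, assuming the Microsoft Graph PowerShell SDK and permission to
# create groups. Names are hypothetical placeholders for the marketing example.
Connect-MgGraph -Scopes "Group.ReadWrite.All"

# Create a security group for the marketing department.
New-MgGroup -DisplayName "Marketing" `
    -MailNickname "marketing" `
    -MailEnabled:$false `
    -SecurityEnabled
```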
active-directory Direct Federation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/direct-federation.md
Previously updated : 06/10/2021 Last updated : 06/17/2021
You can also give guest users a direct link to an application or resource by inc
## Limitations

### DNS-verified domains in Azure AD
-The domain you want to federate with must ***not*** be DNS-verified in Azure AD. You're allowed to set up federation with unmanaged (email-verified or "viral") Azure AD tenants because they aren't DNS-verified.
+You can set up SAML/WS-Fed IdP federation with domains that aren't DNS-verified in Azure AD, including unmanaged (email-verified or "viral") Azure AD tenants. However, we block SAML/WS-Fed IdP federation for Azure AD verified domains in favor of native Azure AD managed domain capabilities. You'll see an error in the Azure portal or PowerShell if you try to set up SAML/WS-Fed IdP federation with a domain that is DNS-verified in Azure AD.
### Signing certificate renewal

If you specify the metadata URL in the IdP settings, Azure AD will automatically renew the signing certificate when it expires. However, if the certificate is rotated for any reason before the expiration time, or if you don't provide a metadata URL, Azure AD will be unable to renew it. In this case, you'll need to update the signing certificate manually.
We don't currently support SAML/WS-Fed IdP federation with multiple domains fr
## Frequently asked questions

### Can I set up SAML/WS-Fed IdP federation with a domain for which an unmanaged (email-verified) tenant exists?
-Yes. If the domain hasn't been verified and the tenant hasn't undergone an [admin takeover](../enterprise-users/domains-admin-takeover.md), you can set up federation with that domain. Unmanaged, or email-verified, tenants are created when a user redeems a B2B invitation or performs a self-service sign-up for Azure AD using a domain that doesnΓÇÖt currently exist. You can set up federation with these domains. If you try to set up federation with a DNS-verified domain, either in the Azure portal or via PowerShell, you'll see an error.
+Yes. If the domain hasn't been verified and the tenant hasn't undergone an [admin takeover](../enterprise-users/domains-admin-takeover.md), you can set up federation with that domain. Unmanaged, or email-verified, tenants are created when a user redeems a B2B invitation or performs a self-service sign-up for Azure AD using a domain that doesn't currently exist. You can set up SAML/WS-Fed IdP federation with these domains.
### If SAML/WS-Fed IdP federation and email one-time passcode authentication are both enabled, which method takes precedence?

When SAML/WS-Fed IdP federation is established with a partner organization, it takes precedence over email one-time passcode authentication for new guest users from that organization. If a guest user redeemed an invitation using one-time passcode authentication before you set up SAML/WS-Fed IdP federation, they'll continue to use one-time passcode authentication.

### Does SAML/WS-Fed IdP federation address sign-in issues due to a partially synced tenancy?
Next, your partner organization needs to configure their IdP with the required c
Azure AD B2B can be configured to federate with IdPs that use the SAML protocol with specific requirements listed below. For more information about setting up a trust between your SAML IdP and Azure AD, see [Use a SAML 2.0 Identity Provider (IdP) for Single Sign-On](../hybrid/how-to-connect-fed-saml-idp.md). > [!NOTE]
-> The target domain for SAML/WS-Fed IdP federation must not be DNS-verified on Azure AD. See the [Limitations](#limitations) section for details.
+> The target domain for SAML/WS-Fed IdP federation must not be DNS-verified in Azure AD. See the [Limitations](#limitations) section for details.
#### Required SAML 2.0 attributes and claims

The following tables show requirements for specific attributes and claims that must be configured at the third-party IdP. To set up federation, the following attributes must be received in the SAML 2.0 response from the IdP. These attributes can be configured by linking to the online security token service XML file or by entering them manually.
active-directory Hybrid Cloud To On Premises https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/hybrid-cloud-to-on-premises.md
As an organization that uses Azure Active Directory (Azure AD) B2B collaboration
## Access to SAML apps
-If your on-premises app uses SAML-based authentication, you can easily make these apps available to your Azure AD B2B collaboration users through the Azure portal.
+If your on-premises app uses SAML-based authentication, you can easily make these apps available to your Azure AD B2B collaboration users through the Azure portal using Azure AD Application Proxy.
-You must do both of the following:
+You must do the following:
-- Integrate the app using SAML as described in [Configure SAML-based single sign-on](../manage-apps/configure-saml-single-sign-on.md). Make sure to note what you use for the **Sign-on URL** value.
-- Use Azure AD Application Proxy to publish the on-premises app, with **Azure Active Directory** configured as the authentication source. For instructions, see [Publish applications using Azure AD Application Proxy](../app-proxy/application-proxy-add-on-premises-application.md).
+- Enable Application Proxy and install a connector. For instructions, see [Publish applications using Azure AD Application Proxy](../app-proxy/application-proxy-add-on-premises-application.md).
+- Publish the on-premises SAML-based application through Azure AD Application Proxy by following the instructions in [SAML single sign-on for on-premises applications with Application Proxy](../app-proxy/application-proxy-configure-single-sign-on-on-premises-apps.md).
+- Assign Azure AD B2B users to the SAML application.
- When you configure the **Internal Url** setting, use the sign-on URL that you specified in the non-gallery application template. In this way, users can access the app from outside the organization boundary. Application Proxy performs the SAML single sign-on for the on-premises app.
-
- ![Shows on-premises app settings internal URL and authentication](media/hybrid-cloud-to-on-premises/OnPremAppSettings.PNG)
+When you've completed the steps above, your app should be up and running. To test Azure AD B2B access:
+1. Open a browser and navigate to the external URL that you created when you published the app.
+2. Sign in with the Azure AD B2B account that you assigned to the app. You should be able to open the app and access it with single sign-on.
## Access to IWA and KCD apps
Make sure that you have the correct Client Access Licenses (CALs) for external g
- [Azure Active Directory B2B collaboration for hybrid organizations](hybrid-organizations.md) -- For an overview of Azure AD Connect, see [Integrate your on-premises directories with Azure Active Directory](../hybrid/whatis-hybrid-identity.md).
+- For an overview of Azure AD Connect, see [Integrate your on-premises directories with Azure Active Directory](../hybrid/whatis-hybrid-identity.md).
active-directory One Time Passcode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/one-time-passcode.md
Previously updated : 04/06/2021 Last updated : 06/30/2021

# Email one-time passcode authentication
-This article describes how to enable email one-time passcode authentication for B2B guest users. The email one-time passcode feature authenticates B2B guest users when they can't be authenticated through other means like Azure AD, a Microsoft account (MSA), or Google federation. With one-time passcode authentication, there's no need to create a Microsoft account. When the guest user redeems an invitation or accesses a shared resource, they can request a temporary code, which is sent to their email address. Then they enter this code to continue signing in.
+The email one-time passcode feature is a way to authenticate B2B collaboration users when they can't be authenticated through other means, such as Azure AD, Microsoft account (MSA), or social identity providers. When a B2B guest user tries to redeem your invitation or sign in to your shared resources, they can request a temporary passcode, which is sent to their email address. Then they enter this passcode to continue signing in.
+
+You can enable this feature at any time in the Azure portal by configuring the Email one-time passcode (Preview) identity provider under your tenant's External Identities settings. You can choose to enable the feature, disable it, or wait for automatic enablement in October 2021.
![Email one-time passcode overview diagram](media/one-time-passcode/email-otp.png)
When a guest user redeems an invitation or uses a link to a resource that has be
- They do not have an Azure AD account
- They do not have a Microsoft account
-- The inviting tenant did not set up Google federation for @gmail.com and @googlemail.com users
+- The inviting tenant did not set up federation with social (like [Google](google-federation.md)) or other identity providers.
At the time of invitation, there's no indication that the user you're inviting will use one-time passcode authentication. But when the guest user signs in, one-time passcode authentication will be the fallback method if no other authentication methods can be used.
-You can see whether a guest user authenticates using one-time passcodes by viewing the **Source** property in the user's details. In the Azure portal, go to **Azure Active Directory** > **Users**, and then select the user to open the details page.
-
-![Screenshot showing a one-time passcode user with Source value of OTP](media/one-time-passcode/guest-user-properties.png)
> [!NOTE]
> When a user redeems a one-time passcode and later obtains an MSA, Azure AD account, or other federated account, they'll continue to be authenticated using a one-time passcode. If you want to update the user's authentication method, you can [reset their redemption status](reset-redemption-status.md).
You can see whether a guest user authenticates using one-time passcodes by viewi
Guest user teri@gmail.com is invited to Fabrikam, which does not have Google federation set up. Teri does not have a Microsoft account. They'll receive a one-time passcode for authentication.
+## Enable email one-time passcode
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) as an Azure AD global administrator.
+
+2. In the navigation pane, select **Azure Active Directory**.
+
+3. Select **External Identities** > **All identity providers**.
+
+4. Select **Email one-time passcode** to open the configuration pane.
+
+5. Under **Email one-time passcode for guests**, select one of the following:
+
+ - **Automatically enable email one-time passcode for guests starting October 2021** if you don't want to enable the feature immediately and want to wait for the October 2021 automatic enablement date.
+ - **Enable email one-time passcode for guests effective now** to enable the feature now.
+ - **Yes** to enable the feature now if you see a Yes/No toggle (this toggle appears if the feature was previously disabled).
+
+ ![Email one-time passcode toggle enabled](media/one-time-passcode/enable-email-otp-options.png)
+
+6. Select **Save**.
## Disable email one-time passcode

Starting October 2021, the email one-time passcode feature will be turned on for all existing tenants and enabled by default for new tenants. At that time, Microsoft will no longer support the redemption of invitations by creating unmanaged ("viral" or "just-in-time") Azure AD accounts and tenants for B2B collaboration scenarios. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, you have the option of disabling this feature if you choose not to use it.
Starting October 2021, the email one-time passcode feature will be turned on for
3. Select **External Identities** > **All identity providers**.
-4. Select **Email one-time passcode**, and then select **Disable email one-time passcode for guests**.
+4. Select **Email one-time passcode**, and then under **Email one-time passcode for guests**, select **Disable email one-time passcode for guests** (or **No** if the feature was previously enabled, disabled, or opted into during preview).
+
+ ![Email one-time passcode toggle disabled](media/one-time-passcode/disable-email-otp-options.png)
> [!NOTE]
> Email one-time passcode settings have moved in the Azure portal from **External collaboration settings** to **All identity providers**.
> If you see a toggle instead of the email one-time passcode options, this means you've previously enabled, disabled, or opted into the preview of the feature. Select **No** to disable the feature.
- >
- >![Email one-time passcode toggle disabled](media/one-time-passcode/enable-email-otp-disabled.png)
5. Select **Save**.
active-directory User Properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/user-properties.md
Typically, an Azure AD B2B user and guest user are synonymous. Therefore, an Azu
![Screenshot showing the filter for guest users](media/user-properties/filter-guest-users.png)

## Convert UserType
-It's possible to convert UserType from Member to Guest and vice-versa by using PowerShell. However, the UserType property represents the user's relationship to the organization. Therefore, you should change this property only if the relationship of the user to the organization changes. If the relationship of the user changes, should the user principal name (UPN) change? Should the user continue to have access to the same resources? Should a mailbox be assigned? We don't recommend changing the UserType by using PowerShell as an atomic activity. Also, in case this property becomes immutable by using PowerShell, we don't recommend taking a dependency on this value.
+It's possible to convert UserType from Member to Guest and vice-versa by using PowerShell. However, the UserType property represents the user's relationship to the organization. Therefore, you should change this property only if the relationship of the user to the organization changes. If the relationship of the user changes, should the user principal name (UPN) change? Should the user continue to have access to the same resources? Should a mailbox be assigned?
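For reference, a minimal sketch of the conversion with the Microsoft Graph PowerShell SDK follows; the user principal name is a hypothetical placeholder, and the questions above should be settled before you run it.

```PowerShell
# Minimal sketch, assuming the Microsoft Graph PowerShell SDK. The UPN is a
# hypothetical placeholder; review the relationship questions above first.
Connect-MgGraph -Scopes "User.ReadWrite.All"

# Convert a guest user to a member.
Update-MgUser -UserId "guestuser@fabrikam.com" -UserType "Member"
```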
## Remove guest user limitations

There may be cases where you want to give your guest users higher privileges. You can add a guest user to any role and even remove the default guest user restrictions in the directory to give a user the same privileges as members.
active-directory Entitlement Management Reprocess Access Package Assignments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-reprocess-access-package-assignments.md
+
+ Title: Reprocess assignments for an access package in Azure AD entitlement management - Azure Active Directory
+description: Learn how to reprocess assignments for an access package in Azure Active Directory entitlement management.
+
+documentationCenter: ''
++
+editor:
++
+ na
+ms.devlang: na
+ Last updated : 06/25/2021
+#Customer intent: As a global administrator or access package manager, I want detailed information about how I can reprocess assignments for an access package in the event of a partial delivery, so that requestors have all of the resources they need to perform their job.
++
+# Reprocess assignments for an access package in Azure AD entitlement management
+
+As an access package manager, you can automatically reevaluate and enforce users' original assignments in an access package using the reprocess functionality. Reprocessing eliminates the need for users to repeat the access package request process if their access to resources was impacted by changes outside of Entitlement Management.
+
+For example, a user may have been removed from a group manually, thereby causing that user to lose access to necessary resources.
+
+Entitlement Management does not block outside updates to the access package's resources, so the Entitlement Management UI would not accurately display this change. Therefore, the user's assignment status would be shown as "Delivered" even though the user does not have access to the resources anymore. However, if the user's assignment is reprocessed, they will be added to the access package's resources again. Reprocessing ensures that the access package assignments are up to date, that users have access to necessary resources, and that assignments are accurately reflected in the UI.
+
+This article describes how to reprocess assignments in an existing access package.
+
+## Prerequisites
+
+To use Azure AD entitlement management and assign users to access packages, you must have one of the following licenses:
+
+- Azure AD Premium P2
+- Enterprise Mobility + Security (EMS) E5 license
+
+## Open an existing access package and reprocess user assignments
+
+**Prerequisite role**: Global administrator, Identity Governance administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
+
+If you have users who are in the "Delivered" state but do not have access to resources that are a part of the access package, you will likely need to reprocess the assignments to reassign those users to the access package's resources. Follow these steps to reprocess assignments for an existing access package:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Click **Azure Active Directory**, and then click **Identity Governance**.
+
+1. In the left menu, click **Access packages** and then open the access package with the user assignment you want to reprocess.
+
+1. Underneath **Manage** on the left side, click **Assignments**.
+
+ ![Entitlement management in the Azure portal](./media/entitlement-management-reprocess-access-package-assignments/reprocess-access-package-assignment.png)
+
+1. Select all users whose assignments you wish to reprocess.
+
+1. Click **Reprocess**.
+
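Reprocessing can also be triggered programmatically. The sketch below assumes the Microsoft Graph beta `reprocess` action on access package assignments and uses a hypothetical assignment ID; treat it as illustrative rather than definitive.

```PowerShell
# Hedged sketch, assuming the Microsoft Graph beta reprocess action on access
# package assignments. The assignment ID is a hypothetical placeholder.
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"

$assignmentId = "8f32fa26-0000-0000-0000-000000000000"
Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackageAssignments/$assignmentId/reprocess"
```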
+## Next steps
+
+- [View, add, and remove assignments for an access package](entitlement-management-access-package-assignments.md)
+- [View reports and logs](entitlement-management-reports.md)
active-directory Entitlement Management Reprocess Access Package Requests https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-reprocess-access-package-requests.md
+
+ Title: Reprocess requests for an access package in Azure AD entitlement management - Azure Active Directory
+description: Learn how to reprocess a request for an access package in Azure Active Directory entitlement management.
+
+documentationCenter: ''
++
+editor:
++
+ na
+ms.devlang: na
+ Last updated : 06/25/2021
+#Customer intent: As a global administrator or access package manager, I want detailed information about how I can reprocess a request for an access package if a request failed so that requestors have the resources in the access package they need to perform their job.
++
+# Reprocess requests for an access package in Azure AD entitlement management
+
+As an access package manager, you can automatically retry a user's request for access to an access package at any time by using the reprocess functionality. Reprocessing eliminates the need for users to repeat the access package request process if their access to resources is not successfully provisioned.
+
+> [!NOTE]
+> You can reprocess a request for up to 14 days from the time that the original request is completed. For requests that were completed more than 14 days ago, users will need to cancel and make new requests in MyAccess.
+
+This article describes how to reprocess requests for an existing access package.
+
+## Prerequisites
+
+To use Azure AD entitlement management and assign users to access packages, you must have one of the following licenses:
+
+- Azure AD Premium P2
+- Enterprise Mobility + Security (EMS) E5 license
+
+## Open an existing access package and reprocess user requests
+
+**Prerequisite role**: Global administrator, Identity Governance administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
+
+If you have a set of users whose requests are in the "Partially Delivered" or "Failed" state, you might need to reprocess some of those requests. Follow these steps to reprocess requests for an existing access package:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Click **Azure Active Directory**, and then click **Identity Governance**.
+
+1. In the left menu, click **Access packages** and then open the access package.
+
+1. Underneath **Manage** on the left side, click **Requests**.
+
+1. Select all users whose requests you wish to reprocess.
+
+1. Click **Reprocess**.
+
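As with assignments, a request can also be reprocessed through a call to Microsoft Graph. The sketch below assumes the beta `reprocess` action on access package assignment requests and uses a hypothetical request ID.

```PowerShell
# Hedged sketch, assuming the Microsoft Graph beta reprocess action on access
# package assignment requests. The request ID is a hypothetical placeholder.
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"

$requestId = "7a6a7b2f-0000-0000-0000-000000000000"
Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackageAssignmentRequests/$requestId/reprocess"
```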
+## Next steps
+
+- [View requests for an access package](entitlement-management-access-package-requests.md)
+- [Approve or deny access requests](entitlement-management-request-approve.md)
active-directory How To Connect Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-password-hash-synchronization.md
ms.assetid: 05f16c3e-9d23-45dc-afca-3d0fa9dbf501
Previously updated : 02/26/2020 Last updated : 07/01/2021 search.appverid:
A user must enter their corporate credentials a second time to authenticate to A
The following section describes, in-depth, how password hash synchronization works between Active Directory and Azure AD.
-![Detailed password flow](./media/how-to-connect-password-hash-synchronization/arch3b.png)
+[![Detailed password flow](./media/how-to-connect-password-hash-synchronization/arch3d.png)](./media/how-to-connect-password-hash-synchronization/arch3d.png#lightbox)
1. Every two minutes, the password hash synchronization agent on the AD Connect server requests stored password hashes (the unicodePwd attribute) from a DC. This request is via the standard [MS-DRSR](/openspecs/windows_protocols/ms-drsr/f977faaa-673e-4f66-b9bf-48c640241d47) replication protocol used to synchronize data between DCs. The service account must have Replicate Directory Changes and Replicate Directory Changes All AD permissions (granted by default on installation) to obtain the password hashes.
2. Before sending, the DC encrypts the MD4 password hash by using a key that is an [MD5](https://www.rfc-editor.org/rfc/rfc1321.txt) hash of the RPC session key and a salt. It then sends the result to the password hash synchronization agent over RPC. The DC also passes the salt to the synchronization agent by using the DC replication protocol, so the agent will be able to decrypt the envelope.
active-directory Reference Connect Device Disappearance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-device-disappearance.md
Previously updated : 09/25/2019 Last updated : 06/30/2021
Some customers may need to revisit [How To: Plan your hybrid Azure Active Direct
## How can I verify which devices are deleted with this update?
-To verify which devices are deleted, you can use this PowerShell script: https://gallery.technet.microsoft.com/scriptcenter/Export-Hybrid-Azure-AD-f8e51436
+To verify which devices are deleted, you can use the PowerShell script [below](#powershell-certificate-report-script).
+ This script generates a report about certificates stored in Active Directory computer objects, specifically certificates issued by the Hybrid Azure AD join feature. It checks the certificates present in the UserCertificate property of a computer object in AD and, for each non-expired certificate present, validates whether the certificate was issued for the Hybrid Azure AD join feature (that is, the subject name matches CN={ObjectGUID}). Previously, Azure AD Connect synchronized to Azure AD any computer that contained at least one valid certificate. Starting with Azure AD Connect version 1.4, the synchronization engine can identify Hybrid Azure AD join certificates and will 'cloudfilter' the computer object from synchronizing to Azure AD unless there's a valid Hybrid Azure AD join certificate. Azure AD device objects that were already synchronized to Azure AD but do not have a valid Hybrid Azure AD join certificate will be deleted (CloudFiltered=TRUE) by the sync engine.
+## PowerShell certificate report script
++
+ ```PowerShell
+<#
+
+Filename: Export-ADSyncToolsHybridAzureADjoinCertificateReport.ps1.
+
+DISCLAIMER:
+Copyright (c) Microsoft Corporation. All rights reserved. This script is made available to you without any express, implied or statutory warranty, not even the implied warranty of merchantability or fitness for a particular purpose, or the warranty of title or non-infringement. The entire risk of the use or the results from the use of this script remains with you.
+.Synopsis
+This script generates a report about certificates stored in Active Directory Computer objects, specifically,
+certificates issued by the Hybrid Azure AD join feature.
+.DESCRIPTION
+It checks the certificates present in the UserCertificate property of a Computer object in AD and, for each
+non-expired certificate present, validates if the certificate was issued for the Hybrid Azure AD join feature
+(i.e. Subject Name matches CN={ObjectGUID}).
+Before, Azure AD Connect would synchronize to Azure AD any Computer that contained at least one valid
+certificate but starting on Azure AD Connect version 1.4, the sync engine can identify Hybrid
+Azure AD join certificates and will 'cloudfilter' the computer object from synchronizing to Azure AD unless
+there's a valid Hybrid Azure AD join certificate.
+Azure AD Device objects that were already synchronized to AD but do not have a valid Hybrid Azure AD join
+certificate will be deleted (CloudFiltered=TRUE) by the sync engine.
+.EXAMPLE
+.\Export-ADSyncToolsHybridAzureADjoinCertificateReport.ps1 -DN 'CN=Computer1,OU=SYNC,DC=Fabrikam,DC=com'
+.EXAMPLE
+.\Export-ADSyncToolsHybridAzureADjoinCertificateReport.ps1 -OU 'OU=SYNC,DC=Fabrikam,DC=com' -Filename "MyHybridAzureADjoinReport.csv" -Verbose
+
+#>
+ [CmdletBinding()]
+ Param
+ (
+ # Computer DistinguishedName
+ [Parameter(ParameterSetName='SingleObject',
+ Mandatory=$true,
+ ValueFromPipelineByPropertyName=$true,
+ Position=0)]
+ [String]
+ $DN,
+
+ # AD OrganizationalUnit
+ [Parameter(ParameterSetName='MultipleObjects',
+ Mandatory=$true,
+ ValueFromPipelineByPropertyName=$true,
+ Position=0)]
+ [String]
+ $OU,
+
+ # Output CSV filename (optional)
+ [Parameter(Mandatory=$false,
+ ValueFromPipelineByPropertyName=$false,
+ Position=1)]
+ [String]
+ $Filename
+
+ )
+
+ # Generate Output filename if not provided
+ If ($Filename -eq "")
+ {
+ $Filename = [string] "$([string] $(Get-Date -Format yyyyMMddHHmmss))_ADSyncAADHybridJoinCertificateReport.csv"
+ }
+ Write-Verbose "Output filename: '$Filename'"
+
+ # Read AD object(s)
+ If ($PSCmdlet.ParameterSetName -eq 'SingleObject')
+ {
+ $directoryObjs = @(Get-ADObject $DN -Properties UserCertificate)
+ Write-Verbose "Starting report for a single object '$DN'"
+ }
+ Else
+ {
+ $directoryObjs = Get-ADObject -Filter { ObjectClass -like 'computer' } -SearchBase $OU -Properties UserCertificate
+ Write-Verbose "Starting report for $($directoryObjs.Count) computer objects in OU '$OU'"
+ }
+
+ Write-Host "Processing $($directoryObjs.Count) directory object(s). Please wait..."
+ # Check Certificates on each AD Object
+ $results = @()
+ ForEach ($obj in $directoryObjs)
+ {
+ # Read UserCertificate multi-value property
+ $objDN = [string] $obj.DistinguishedName
+ $objectGuid = [string] ($obj.ObjectGUID).Guid
+ $userCertificateList = @($obj.UserCertificate)
+ $validEntries = @()
+ $totalEntriesCount = $userCertificateList.Count
+ Write-verbose "'$objDN' ObjectGUID: $objectGuid"
+ Write-verbose "'$objDN' has $totalEntriesCount entries in UserCertificate property."
+ If ($totalEntriesCount -eq 0)
+ {
+ Write-verbose "'$objDN' has no Certificates - Skipped."
+ Continue
+ }
+
+ # Check each UserCertificate entry and build array of valid certs
+ ForEach($entry in $userCertificateList)
+ {
+ Try
+ {
+ $cert = [System.Security.Cryptography.X509Certificates.X509Certificate2] $entry
+ }
+ Catch
+ {
+ Write-verbose "'$objDN' has an invalid Certificate!"
+ Continue
+ }
+ Write-verbose "'$objDN' has a Certificate with Subject: $($cert.Subject); Thumbprint:$($cert.Thumbprint)."
+ $validEntries += $cert
+
+ }
+
+ $validEntriesCount = $validEntries.Count
+ Write-verbose "'$objDN' has a total of $validEntriesCount certificates (shown above)."
+
+ # Get non-expired Certs (Valid Certificates)
+ $validCerts = @($validEntries | Where-Object {$_.NotAfter -ge (Get-Date)})
+ $validCertsCount = $validCerts.Count
+ Write-verbose "'$objDN' has $validCertsCount valid certificates (not-expired)."
+
+ # Check for AAD Hybrid Join Certificates
+ $hybridJoinCerts = @()
+ $hybridJoinCertsThumbprints = [string] "|"
+ ForEach ($cert in $validCerts)
+ {
+ $certSubjectName = $cert.Subject
+ If ($certSubjectName.StartsWith($("CN=$objectGuid")) -or $certSubjectName.StartsWith($("CN={$objectGuid}")))
+ {
+ $hybridJoinCerts += $cert
+ $hybridJoinCertsThumbprints += [string] $($cert.Thumbprint) + '|'
+ }
+ }
+
+ $hybridJoinCertsCount = $hybridJoinCerts.Count
+ if ($hybridJoinCertsCount -gt 0)
+ {
+ $cloudFiltered = 'FALSE'
+ Write-verbose "'$objDN' has $hybridJoinCertsCount AAD Hybrid Join Certificates with Thumbprints: $hybridJoinCertsThumbprints (cloudFiltered=FALSE)"
+ }
+ Else
+ {
+ $cloudFiltered = 'TRUE'
+ Write-verbose "'$objDN' has no AAD Hybrid Join Certificates (cloudFiltered=TRUE)."
+ }
+
+ # Save results
+ $r = "" | Select ObjectDN, ObjectGUID, TotalEntriesCount, CertsCount, ValidCertsCount, HybridJoinCertsCount, CloudFiltered
+ $r.ObjectDN = $objDN
+ $r.ObjectGUID = $objectGuid
+ $r.TotalEntriesCount = $totalEntriesCount
+ $r.CertsCount = $validEntriesCount
+ $r.ValidCertsCount = $validCertsCount
+ $r.HybridJoinCertsCount = $hybridJoinCertsCount
+ $r.CloudFiltered = $cloudFiltered
+ $results += $r
+ }
+
+ # Export results to CSV
+ Try
+ {
+ $results | Export-Csv $Filename -NoTypeInformation -Delimiter ';'
+ Write-Host "Exported Hybrid Azure AD Domain Join Certificate Report to '$Filename'.`n"
+ }
+ Catch
+ {
+ Throw "There was an error saving the file '$Filename': $($_.Exception.Message)"
+ }
+
+ ```
+## Next steps
+
+- [Azure AD Connect Version history](reference-connect-version-history.md)
active-directory Pim Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-troubleshoot.md
Title: Troubleshoot a problem with Privileged Identity Management - Azure Active Directory | Microsoft Docs
+ Title: Troubleshoot resource access denied in Privileged Identity Management - Azure Active Directory | Microsoft Docs
description: Learn how to troubleshoot system errors with roles in Azure AD Privileged Identity Management (PIM). documentationcenter: ''
Previously updated : 10/18/2019 Last updated : 06/30/2021
-# Troubleshoot a problem with Privileged Identity Management
+# Troubleshoot access to Azure resources denied in Privileged Identity Management
Are you having a problem with Privileged Identity Management (PIM) in Azure Active Directory (Azure AD)? The information that follows can help you to get things working again.
active-directory Reports Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/reports-faq.md
Title: Azure Active Directory Reports FAQ | Microsoft Docs
description: Frequently asked questions around Azure Active Directory reports. documentationcenter: ''--+ ms.assetid: 534da0b1-7858-4167-9986-7a62fbd10439
active-directory Manage Roles Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/manage-roles-portal.md
To grant access to users in Azure Active Directory (Azure AD), you assign Azure
- Privileged Role Administrator or Global Administrator
- Azure AD Premium P2 license when using Privileged Identity Management (PIM)
- AzureADPreview module when using PowerShell
+- Admin consent when using Graph explorer for Microsoft Graph API
For more information, see [Prerequisites to use PowerShell or Graph Explorer](prerequisites.md).
If PIM is enabled, you have additional capabilities, such as making a user eligi
$roleAssignmentEligible = Open-AzureADMSPrivilegedRoleAssignmentRequest -ProviderId 'aadRoles' -ResourceId $aadTenant.Id -RoleDefinitionId $roleDefinition.Id -SubjectId $user.objectId -Type 'AdminAdd' -AssignmentState 'Eligible' -schedule $schedule -reason "Review billing info" ```
+## Microsoft Graph API
+In this example, a security principal with object ID `f8ca5a85-489a-49a0-b555-0a6d81e56f0d` is assigned the Billing Administrator role (role definition ID `b0f54661-2d74-4c50-afa3-1ec803f12efe`) at tenant scope. To see the list of immutable role template IDs of all built-in roles, see [Azure AD built-in roles](permissions-reference.md).
+1. Sign in to the [Graph Explorer](https://aka.ms/ge).
+2. Select **POST** as the HTTP method from the dropdown.
+3. Set the API version to **beta**.
+4. Add the following details to the URL and request body, and then select **Run query**.
+
+```HTTP
+POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments
+Content-type: application/json
+
+{
+ "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
+ "roleDefinitionId": "b0f54661-2d74-4c50-afa3-1ec803f12efe",
+ "principalId": "f8ca5a85-489a-49a0-b555-0a6d81e56f0d",
+ "directoryScopeId": "/"
+}
+```
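+
+As a sketch of an alternative (assuming the AzureADPreview module mentioned in the prerequisites), the same assignment can be made from PowerShell with the IDs used in the Graph example above:
+
+```PowerShell
+# Sketch: assign the Billing Administrator role at tenant scope with AzureADPreview.
+Connect-AzureAD
+New-AzureADMSRoleAssignment -RoleDefinitionId "b0f54661-2d74-4c50-afa3-1ec803f12efe" `
+    -PrincipalId "f8ca5a85-489a-49a0-b555-0a6d81e56f0d" `
+    -DirectoryScopeId "/"
+```
+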
## Next steps

- [List Azure AD role assignments](view-assignments.md)
active-directory Lanschool Air Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/lanschool-air-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with LanSchool Air | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and LanSchool Air.
++++++++ Last updated : 06/30/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with LanSchool Air
+
+In this tutorial, you'll learn how to integrate LanSchool Air with Azure Active Directory (Azure AD). When you integrate LanSchool Air with Azure AD, you can:
+
+* Control in Azure AD who has access to LanSchool Air.
+* Enable your users to be automatically signed-in to LanSchool Air with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* LanSchool Air single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* LanSchool Air supports **SP and IDP** initiated SSO.
+
+## Adding LanSchool Air from the gallery
+
+To configure the integration of LanSchool Air into Azure AD, you need to add LanSchool Air from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **LanSchool Air** in the search box.
+1. Select **LanSchool Air** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for LanSchool Air
+
+Configure and test Azure AD SSO with LanSchool Air using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in LanSchool Air.
+
+To configure and test Azure AD SSO with LanSchool Air, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure LanSchool Air SSO](#configure-lanschool-air-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create LanSchool Air test user](#create-lanschool-air-test-user)** - to have a counterpart of B.Simon in LanSchool Air that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **LanSchool Air** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. In the **Basic SAML Configuration** section, you don't need to perform any steps, because the app is already pre-integrated with Azure.
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://lanschoolair.lenovosoftware.com`
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to LanSchool Air.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **LanSchool Air**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure LanSchool Air SSO
+
+To configure single sign-on on the **LanSchool Air** side, you need to send the **App Federation Metadata Url** to the [LanSchool Air support team](mailto:support@lanschool.com). They use it to configure the SAML SSO connection properly on both sides.
+
+### Create LanSchool Air test user
+
+In this section, you create a user called Britta Simon in LanSchool Air. Work with the [LanSchool Air support team](mailto:support@lanschool.com) to add the users to the LanSchool Air platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This redirects you to the LanSchool Air sign-on URL, where you can initiate the login flow.
+
+* Go to the LanSchool Air sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the LanSchool Air instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the LanSchool Air tile in My Apps, if configured in SP mode, you are redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you should be automatically signed in to the LanSchool Air instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
++
+## Next steps
+
+Once you configure LanSchool Air, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
++
active-directory Vonage Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/vonage-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Vonage for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Vonage.
+
+documentationcenter: ''
+
+writer: Zhchia
++
+ms.assetid: dfb7e9bb-c29e-4476-adad-4ab254658e83
+++
+ na
+ms.devlang: na
+ Last updated : 06/07/2021+++
+# Tutorial: Configure Vonage for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Vonage and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Vonage](https://www.vonage.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in Vonage.
+> * Remove users in Vonage when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Vonage.
+> * [Single sign-on](vonage-tutorial.md) to Vonage (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A [Vonage](https://www.vonage.com/) tenant.
+* A user account in Vonage with Admin permissions (Account Super User).
++
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Vonage](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Vonage to support provisioning with Azure AD
+
+1. Log in to the [Vonage admin portal](http://admin.vonage.com) as an admin user.
+
+ ![Log in to vonage admin portal](media/vonage-provisioning-tutorial/log-in.png)
+
+1. Navigate to **Account > Single Sign-On Settings** on the left side menu.
+
+ ![Single sign on settings](media/vonage-provisioning-tutorial/single-sign-on-settings.png)
+
+1. Select the **User Settings** tab, toggle **Enable SCIM user provisioning** on, and click **Save**.
+
+![Enable scim](media/vonage-provisioning-tutorial/enable-scim.png)
+
+## Step 3. Add Vonage from the Azure AD application gallery
+++
+Add Vonage from the Azure AD application gallery to start managing provisioning to Vonage. If you have previously set up Vonage for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* When assigning users and groups to Vonage, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
++
+## Step 5. Configure automatic user provisioning to Vonage
+
+> [!NOTE]
+> Any user that is added to Vonage must have a first name, last name, and email address. Otherwise, the integration will fail.
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Vonage based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Vonage in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+1. In the applications list, select **Vonage**.
+
+ ![The Vonage link in the Applications list](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+1. Before the next step, make sure you are authorized as an Account Super User. To check whether the user is an Account Super User, log in to the [Vonage admin portal](http://admin.vonage.com).
+ You should see something similar to the picture below on the upper-left side.
+
+ ![Provisioning tab user](media/vonage-provisioning-tutorial/account-super-user.png)
+
+1. In the **Admin Credentials** section, click **Authorize**, and make sure that you enter your Account Super User credentials. If you aren't prompted for credentials, make sure that you are logged in as the Account Super User (at http://admin.vonage.com/, on the upper-left side, you should see "Account Super User" below your name). Click **Test Connection** to ensure Azure AD can connect to Vonage. If the connection fails, ensure your Vonage account has Admin permissions and try again.
+
+ ![Token](media/vonage-provisioning-tutorial/authorize.png)
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Users to Vonage**.
+
+1. Review the user attributes that are synchronized from Azure AD to Vonage in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Vonage for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Vonage API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|
+ ||||
+ |userName|String|&check;
+ |active|Boolean|
+ |emails[type eq "work"].value|String|
+ |name.givenName|String|
+ |name.familyName|String|
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Vonage, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Vonage by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+1. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
analysis-services Analysis Services Connect Pbi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-connect-pbi.md
description: Learn how to connect to an Azure Analysis Services server by using
Previously updated : 5/25/2021 Last updated : 06/30/2021
Once you've created a server in Azure, and deployed a tabular model to it, users
If you have a Power BI model in [Mixed storage mode](/power-bi/transform-model/desktop-composite-models), the **Connect live** option is replaced by the **[DirectQuery](/power-bi/connect-data/desktop-directquery-datasets-azure-analysis-services)** option. Live connections are also automatically upgraded to DirectQuery if the model is switched from Import to Mixed storage mode.
-5. If prompted, enter your login credentials.
+5. When prompted to enter your credentials, select **Microsoft account**, and then click **Sign in**.
+
+ :::image type="content" source="media/analysis-services-connect-pbi/aas-sign-in.png" alt-text="Sign in to Azure AS":::
> [!NOTE]
- > One-time passcode (OTP) accounts aren't supported.
+ > Windows and Basic authentication are not supported.
6. In **Navigator**, expand the server, then select the model or perspective you want to connect to, and then click **Connect**. Click a model or perspective to show all objects for that view.
Once you've created a server in Azure, and deployed a tabular model to it, users
To safeguard the performance of the system, a memory limit is enforced for all queries issued by Power BI reports against Azure Analysis Services, regardless of the [Query Memory Limit](/analysis-services/server-properties/memory-properties?view=azure-analysis-services-current&preserve-view=true) configured on the Azure Analysis Services server. Users should consider simplifying the query or its calculations if the query is too memory intensive.
-| | Request Memory limit |
+|Query type| Request Memory limit |
|--|--|
| Live connect from Power BI | 10 GB |
| DirectQuery from Power BI report in Shared workspace | 1 GB |
app-service App Service Key Vault References https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-key-vault-references.md
description: Learn how to set up Azure App Service and Azure Functions to use Az
Previously updated : 05/25/2021 Last updated : 06/11/2021
In order to read secrets from Key Vault, you need to have a vault created and gi
1. Create a key vault by following the [Key Vault quickstart](../key-vault/secrets/quick-create-cli.md).
-1. Create a [system-assigned managed identity](overview-managed-identity.md) for your application.
+1. Create a [managed identity](overview-managed-identity.md) for your application.
- > [!NOTE]
- > Key Vault references currently only support system-assigned managed identities. User-assigned identities cannot be used.
+ Key Vault references will use the app's system-assigned identity by default, but you can [specify a user-assigned identity](#access-vaults-with-a-user-assigned-identity).
1. Create an [access policy in Key Vault](../key-vault/general/security-features.md#privileged-access) for the application identity you created earlier. Enable the "Get" secret permission on this policy. Do not configure the "authorized application" or `applicationId` settings, as this is not compatible with a managed identity.

### Access network-restricted vaults
-> [!NOTE]
-> Linux-based applications are not presently able to resolve secrets from a network-restricted key vault unless the app is hosted within an [App Service Environment](./environment/intro.md).
If your vault is configured with [network restrictions](../key-vault/general/overview-vnet-service-endpoints.md), you will also need to ensure that the application has network access.

1. Make sure the application has outbound networking capabilities configured, as described in [App Service networking features](./networking-features.md) and [Azure Functions networking options](../azure-functions/functions-networking-options.md).

2. Make sure that the vault's configuration accounts for the network or subnet through which your app will access it.
-> [!IMPORTANT]
-> Accessing a vault through virtual network integration is currently incompatible with [automatic updates for secrets without a specified version](#rotation).
+### Access vaults with a user-assigned identity
+
+Some apps need to reference secrets at creation time, when a system-assigned identity would not yet be available. In these cases, a user-assigned identity can be created and given access to the vault in advance.
+
+Once you have granted permissions to the user-assigned identity, follow these steps:
+
+1. [Assign the identity](./overview-managed-identity.md#add-a-user-assigned-identity) to your application if you haven't already.
+
+1. Configure the app to use this identity for Key Vault reference operations by setting the `keyVaultReferenceIdentity` property to the resource ID of the user-assigned identity.
+
+ ```azurecli-interactive
+ userAssignedIdentityResourceId=$(az identity show -g MyResourceGroupName -n MyUserAssignedIdentityName --query id -o tsv)
+ appResourceId=$(az webapp show -g MyResourceGroupName -n MyAppName --query id -o tsv)
+ az rest --method PATCH --uri "${appResourceId}?api-version=2021-01-01" --body "{'properties':{'keyVaultReferenceIdentity':'${userAssignedIdentityResourceId}'}}"
+ ```
+
+This configuration will apply to all references for the app.
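+
+If you work in PowerShell rather than the Azure CLI, a minimal sketch of the same `PATCH` call (assuming the Az PowerShell modules and the same placeholder resource names) might look like this:
+
+```azurepowershell-interactive
+# Sketch: set keyVaultReferenceIdentity by using Az PowerShell (placeholder names).
+$identity = Get-AzUserAssignedIdentity -ResourceGroupName MyResourceGroupName -Name MyUserAssignedIdentityName
+$app = Get-AzWebApp -ResourceGroupName MyResourceGroupName -Name MyAppName
+$payload = @{ properties = @{ keyVaultReferenceIdentity = $identity.Id } } | ConvertTo-Json -Depth 3
+Invoke-AzRestMethod -Method PATCH -Path "$($app.Id)?api-version=2021-01-01" -Payload $payload
+```
+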
## Reference syntax
Alternatively:
## Rotation
-> [!IMPORTANT]
-> [Accessing a vault through virtual network integration](#access-network-restricted-vaults) is currently incompatible with automatic updates for secrets without a specified version.
If a version is not specified in the reference, then the app will use the latest version that exists in Key Vault. When newer versions become available, such as with a rotation event, the app will automatically update and begin using the latest version within one day. Any configuration changes made to the app will cause an immediate update to the latest versions of all referenced secrets.

## Source Application Settings from Key Vault
app-service Configure Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-custom-container.md
In PowerShell:
Set-AzWebApp -ResourceGroupName <group-name> -Name <app-name> -AppSettings @{"WEBSITE_MEMORY_LIMIT_MB"=2000} ```
-The value is defined in MB and must be less and equal to the total physical memory of the host. For example, in an App Service plan with 8 GB RAM, the cumulative total of `WEBSITE_MEMORY_LIMIT_MB` for all the apps must not exceed 8 GB. Information on how much memory is available for each pricing tier can be found in [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/windows/), in the **Premium Container (Windows) Plan** section.
+The value is defined in MB and must be less than or equal to the total physical memory of the host. For example, in an App Service plan with 8 GB RAM, the cumulative total of `WEBSITE_MEMORY_LIMIT_MB` for all the apps must not exceed 8 GB. Information on how much memory is available for each pricing tier can be found in [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/windows/), in the **Premium v3 service plan** section.
## Customize the number of compute cores
Get-ComputerInfo | ft CsNumberOfLogicalProcessors # Total number of enabled logi
Get-ComputerInfo | ft CsNumberOfProcessors # Number of physical processors. ```
-The processors may be multicore or hyperthreading processors. Information on how many cores are available for each pricing tier can be found in [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/windows/), in the **Premium Container (Windows) Plan** section.
+The processors may be multicore or hyperthreading processors. Information on how many cores are available for each pricing tier can be found in [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/windows/), in the **Premium v3 service plan** section.
## Customize health ping behavior
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-java.md
adobe-target: true
Azure App Service lets Java developers quickly build, deploy, and scale their Java SE, Tomcat, and JBoss EAP web applications on a fully managed service. Deploy applications with Maven plugins, from the command line, or in editors like IntelliJ, Eclipse, or Visual Studio Code.
-This guide provides key concepts and instructions for Java developers using App Service. If you've never used Azure App Service, you should read through the [Java quickstart](quickstart-java.md) first. General questions about using App Service that aren't specific to Java development are answered in the [App Service FAQ](faq-configuration-and-management.md).
+This guide provides key concepts and instructions for Java developers using App Service. If you've never used Azure App Service, you should read through the [Java quickstart](quickstart-java.md) first. General questions about using App Service that aren't specific to Java development are answered in the [App Service FAQ](faq-configuration-and-management.yml).
## Show Java version
app-service Configure Ssl Bindings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-ssl-bindings.md
Language specific configuration guides, such as the [Linux Node.js configuration
## More resources * [Use a TLS/SSL certificate in your code in Azure App Service](configure-ssl-certificate-in-code.md)
-* [FAQ : App Service Certificates](./faq-configuration-and-management.md)
+* [FAQ : App Service Certificates](./faq-configuration-and-management.yml)
app-service Configure Ssl Certificate In Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-ssl-certificate-in-code.md
To see how to load a TLS/SSL certificate from a file in Node.js, PHP, Python, Ja
* [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md) * [Enforce HTTPS](configure-ssl-bindings.md#enforce-https) * [Enforce TLS 1.1/1.2](configure-ssl-bindings.md#enforce-tls-versions)
-* [FAQ : App Service Certificates](./faq-configuration-and-management.md)
+* [FAQ : App Service Certificates](./faq-configuration-and-management.yml)
app-service Configure Ssl Certificate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-ssl-certificate.md
Now you can delete the App Service certificate. From the left navigation, select
* [Enforce HTTPS](configure-ssl-bindings.md#enforce-https) * [Enforce TLS 1.1/1.2](configure-ssl-bindings.md#enforce-tls-versions) * [Use a TLS/SSL certificate in your code in Azure App Service](configure-ssl-certificate-in-code.md)
-* [FAQ : App Service Certificates](./faq-configuration-and-management.md)
+* [FAQ : App Service Certificates](./faq-configuration-and-management.yml)
app-service Deploy Run Package https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-run-package.md
The command also restarts the app. Because `WEBSITE_RUN_FROM_PACKAGE` is set, Ap
## Run from external URL instead
-You can also run a package from an external URL, such as Azure Blob Storage. You can use the [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) to upload package files to your Blob storage account. You should use a private storage container with a [Shared Access Signature (SAS)](../vs-azure-tools-storage-manage-with-storage-explorer.md#generate-a-sas-in-storage-explorer) to enable the App Service runtime to access the package securely.
+You can also run a package from an external URL, such as Azure Blob Storage. You can use the [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) to upload package files to your Blob storage account. You should use a private storage container with a [Shared Access Signature (SAS)](../vs-azure-tools-storage-manage-with-storage-explorer.md#generate-a-sas-in-storage-explorer) or [use a managed identity](#fetch-a-package-from-azure-blob-storage-using-a-managed-identity) to enable the App Service runtime to access the package securely.
Once you upload your file to Blob storage and have an SAS URL for the file, set the `WEBSITE_RUN_FROM_PACKAGE` app setting to the URL. The following example does it by using Azure CLI:
az webapp config appsettings set --name <app-name> --resource-group <resource-gr
If you publish an updated package with the same name to Blob storage, you need to restart your app so that the updated package is loaded into App Service.
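
For example, with the Az PowerShell module (resource names are placeholders), the restart is a single cmdlet:

```PowerShell
# Restart the app so the updated package is picked up.
Restart-AzWebApp -ResourceGroupName MyResourceGroup -Name MyApp
```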
+### Fetch a package from Azure Blob Storage using a managed identity
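+
+The include content for this section isn't expanded in this diff. As a rough sketch of the general pattern only (the role name and plain-URL behavior are assumptions here, not statements from this article), you grant the app's identity read access to the package blob and then reference the blob URL without an SAS token:
+
+```PowerShell
+# Sketch under assumptions: enable the app's system-assigned identity, grant it
+# read access to the storage account's blobs, and reference the package by its
+# plain (un-signed) blob URL. All resource names are placeholders.
+$app = Set-AzWebApp -ResourceGroupName MyResourceGroup -Name MyApp -AssignIdentity $true
+New-AzRoleAssignment -ObjectId $app.Identity.PrincipalId `
+    -RoleDefinitionName "Storage Blob Data Reader" `
+    -Scope (Get-AzStorageAccount -ResourceGroupName MyResourceGroup -Name mystorage).Id
+
+# Note: -AppSettings replaces the whole collection; merge existing settings first in a real script.
+Set-AzWebApp -ResourceGroupName MyResourceGroup -Name MyApp `
+    -AppSettings @{ WEBSITE_RUN_FROM_PACKAGE = "https://mystorage.blob.core.windows.net/packages/app.zip" }
+```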
++

## Troubleshooting

- Running directly from a package makes `wwwroot` read-only. Your app will receive an error if it tries to write files to this directory.
app-service Faq Availability Performance Application Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/faq-availability-performance-application-issues.md
- Title: Application performance FAQs
-description: Get answers to frequently asked questions about availability, performance, and application issues in Azure App Service.
--
-tags: top-support-issue
-- Previously updated : 10/31/2018----
-# Application performance FAQs for Web Apps in Azure
-
-> [!NOTE]
-> Some of the below guidelines might only work on Windows or Linux App Services. For example, Linux App Services run in 64-bit mode by default.
->
-
-This article has answers to frequently asked questions (FAQs) about application performance issues for the [Web Apps feature of Azure App Service](https://azure.microsoft.com/services/app-service/web/).
--
-## Why is my app slow?
-
-Multiple factors might contribute to slow app performance. For detailed troubleshooting steps, see [Troubleshoot slow web app performance](troubleshoot-performance-degradation.md).
-
-## How do I troubleshoot a high CPU-consumption scenario?
-
-In some high CPU-consumption scenarios, your app might truly require more computing resources. In that case, consider scaling to a higher service tier so the application gets all the resources it needs. Other times, high CPU consumption might be caused by a bad loop or by a coding practice. Getting insight into what's triggering increased CPU consumption is a two-part process. First, create a process dump, and then analyze the process dump. For more information, see [Capture and analyze a dump file for high CPU consumption for Web Apps](/archive/blogs/asiatech/how-to-capture-dump-when-intermittent-high-cpu-happens-on-azure-web-app).
-
-## How do I troubleshoot a high memory-consumption scenario?
-
-In some high memory-consumption scenarios, your app might truly require more computing resources. In that case, consider scaling to a higher service tier so the application gets all the resources it needs. Other times, a bug in the code might cause a memory leak. A coding practice also might increase memory consumption. Getting insight into what's triggering high memory consumption is a two-part process. First, create a process dump, and then analyze the process dump. Crash Diagnoser from the Azure Site Extension Gallery can efficiently perform both these steps. For more information, see [Capture and analyze a dump file for intermittent high memory for Web Apps](/archive/blogs/asiatech/how-to-capture-and-analyze-dump-for-intermittent-high-memory-on-azure-web-app).
-
-## How do I automate App Service web apps by using PowerShell?
-
-You can use PowerShell cmdlets to manage and maintain App Service web apps. In our blog post [Automate web apps hosted in Azure App Service by using PowerShell](/archive/blogs/puneetgupt), we show how to use PowerShell cmdlets to automate common tasks. The blog post also has sample code for various web apps management tasks.
-For descriptions and syntax for all App Service web apps cmdlets, see [Az.Websites](/powershell/module/az.websites).
-
-## How do I view my web app's event logs?
-
-To view your web app's event logs:
-
-1. Sign in to your **Kudu website** (`https://*yourwebsitename*.scm.azurewebsites.net`).
-2. In the menu, select **Debug Console** > **CMD**.
-3. Select the **LogFiles** folder.
-4. To view event logs, select the pencil icon next to **eventlog.xml**.
-5. To download the logs, run the PowerShell cmdlet `Save-AzureWebSiteLog -Name webappname`.
-
-## How do I capture a user-mode memory dump of my web app?
-
-To capture a user-mode memory dump of your web app:
-
-1. Sign in to your **Kudu website** (`https://*yourwebsitename*.scm.azurewebsites.net`).
-2. Select the **Process Explorer** menu.
-3. Right-click the **w3wp.exe** process or your WebJob process.
-4. Select **Download Memory Dump** > **Full Dump**.
-
-## How do I view process-level info for my web app?
-
-You have two options for viewing process-level information for your web app:
-
-* In the Azure portal:
- 1. Open the **Process Explorer** for the web app.
- 2. To see the details, select the **w3wp.exe** process.
-* In the Kudu console:
- 1. Sign in to your **Kudu website** (`https://*yourwebsitename*.scm.azurewebsites.net`).
- 2. Select the **Process Explorer** menu.
- 3. For the **w3wp.exe** process, select **Properties**.
-
-## When I browse to my app, I see "Error 403 - This web app is stopped." How do I resolve this?
-
-Three conditions can cause this error:
-
-* The web app has reached a billing limit and your site has been disabled.
-* The web app has been stopped in the portal.
-* The web app has reached a resource quota limit that might apply to a Free or Shared scale service plan.
-
-To see what is causing the error and to resolve the issue, follow the steps in [Web Apps: "Error 403 – This web app is stopped"](/archive/blogs/waws/azure-web-apps-error-403-this-web-app-is-stopped).
-
-## Where can I learn more about quotas and limits for various App Service plans?
-
-For information about quotas and limits, see [App Service limits](../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits).
-
-## How do I decrease the response time for the first request after idle time?
-
-By default, web apps are unloaded if they are idle for a set period of time. This way, the system can conserve resources. The downside is that the response to the first request after the web app is unloaded is longer, to allow the web app to load and start serving responses. In Basic and Standard service plans, you can turn on the **Always On** setting to keep the app always loaded. This eliminates longer load times after the app is idle. To change the **Always On** setting:
-
-1. In the Azure portal, go to your web app.
-2. Select **Configuration**
-3. Select **General settings**.
-4. For **Always On**, select **On**.
-
-## How do I turn on failed request tracing?
-
-To turn on failed request tracing:
-
-1. In the Azure portal, go to your web app.
-3. Select **All Settings** > **Diagnostics Logs**.
-4. For **Failed Request Tracing**, select **On**.
-5. Select **Save**.
-6. On the web app blade, select **Tools**.
-7. Select **Visual Studio Online**.
-8. If the setting is not **On**, select **On**.
-9. Select **Go**.
-10. Select **Web.config**.
-11. In system.webServer, add this configuration (to capture a specific URL):
-
- ```xml
- <system.webServer>
- <tracing> <traceFailedRequests>
- <remove path="*api*" />
- <add path="*api*">
- <traceAreas>
- <add provider="ASP" verbosity="Verbose" />
- <add provider="ASPNET" areas="Infrastructure,Module,Page,AppServices" verbosity="Verbose" />
- <add provider="ISAPI Extension" verbosity="Verbose" />
- <add provider="WWW Server" areas="Authentication,Security,Filter,StaticFile,CGI,Compression, Cache,RequestNotifications,Module,FastCGI" verbosity="Verbose" />
- </traceAreas>
- <failureDefinitions statusCodes="200-999" />
- </add> </traceFailedRequests>
- </tracing>
- ```
-12. To troubleshoot slow-performance issues, add this configuration (if the capturing request is taking more than 30 seconds):
- ```xml
- <system.webServer>
- <tracing> <traceFailedRequests>
- <remove path="*" />
- <add path="*">
- <traceAreas> <add provider="ASP" verbosity="Verbose" />
- <add provider="ASPNET" areas="Infrastructure,Module,Page,AppServices" verbosity="Verbose" />
- <add provider="ISAPI Extension" verbosity="Verbose" />
- <add provider="WWW Server" areas="Authentication,Security,Filter,StaticFile,CGI,Compression, Cache,RequestNotifications,Module,FastCGI" verbosity="Verbose" />
- </traceAreas>
- <failureDefinitions timeTaken="00:00:30" statusCodes="200-999" />
- </add> </traceFailedRequests>
- </tracing>
- ```
-13. To download the failed request traces, in the [portal](https://portal.azure.com), go to your website.
-15. Select **Tools** > **Kudu** > **Go**.
-18. In the menu, select **Debug Console** > **CMD**.
-19. Select the **LogFiles** folder, and then select the folder with a name that starts with **W3SVC**.
-20. To see the XML file, select the pencil icon.
-
-## I see the message "Worker Process requested recycle due to 'Percent Memory' limit." How do I address this issue?
-
-The maximum available amount of memory for a 32-bit process (even on a 64-bit operating system) is 2 GB. By default, the worker process is set to 32-bit in App Service (for compatibility with legacy web applications).
-
-Consider switching to 64-bit processes so you can take advantage of the additional memory available in your Web Worker role. This triggers a web app restart, so schedule accordingly.
-
-Also note that a 64-bit environment requires a Basic or Standard service plan. Free and Shared plans always run in a 32-bit environment.
-
-For more information, see [Configure web apps in App Service](configure-common.md).
-
-## Why does my request time out after 230 seconds?
-
-Azure Load Balancer has a default idle timeout setting of four minutes. This is generally a reasonable response time limit for a web request. If your web app requires background processing, we recommend using Azure WebJobs. The Azure web app can call WebJobs and be notified when background processing is finished. You can choose from multiple methods for using WebJobs, including queues and triggers.
-
-WebJobs is designed for background processing. You can do as much background processing as you want in a WebJob. For more information about WebJobs, see [Run background tasks with WebJobs](webjobs-create.md).
-
-## ASP.NET Core applications that are hosted in App Service sometimes stop responding. How do I fix this issue?
-
-A known issue with an earlier [Kestrel version](https://github.com/aspnet/KestrelHttpServer/issues/1182) might cause an ASP.NET Core 1.0 app that's hosted in App Service to intermittently stop responding. You also might see this message: "The specified CGI Application encountered an error and the server terminated the process."
-
-This issue is fixed in Kestrel version 1.0.2. This version is included in the ASP.NET Core 1.0.3 update. To resolve this issue, make sure you update your app dependencies to use Kestrel 1.0.2. Alternatively, you can use one of two workarounds that are described in the blog post [ASP.NET Core 1.0 slow perf issues in App Service web apps](/archive/blogs/waws/asp-net-core-slow-perf-issues-on-azure-websites).
--
-## I can't find my log files in the file structure of my web app. How can I find them?
-
-If you use the Local Cache feature of App Service, the folder structure of the LogFiles and Data folders for your App Service instance are affected. When Local Cache is used, subfolders are created in the storage LogFiles and Data folders. The subfolders use the naming pattern "unique identifier" + time stamp. Each subfolder corresponds to a VM instance in which the web app is running or has run.
-
-To determine whether you are using Local Cache, check your App Service **Application settings** tab. If Local Cache is being used, the app setting `WEBSITE_LOCAL_CACHE_OPTION` is set to `Always`.
-
-If you are not using Local Cache and are experiencing this issue, submit a support request.
-
-## I see the message "An attempt was made to access a socket in a way forbidden by its access permissions." How do I resolve this?
-
-This error typically occurs if the outbound TCP connections on the VM instance are exhausted. In App Service, limits are enforced for the maximum number of outbound connections that can be made for each VM instance. For more information, see [Cross-VM numerical limits](https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox#cross-vm-numerical-limits).
-
-This error also might occur if you try to access a local address from your application. For more information, see [Local address requests](https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox#local-address-requests).
-
-For more information about outbound connections in your web app, see the blog post about [outgoing connections to Azure websites](https://www.freekpaans.nl/2015/08/starving-outgoing-connections-on-windows-azure-web-sites/).
-
-## How do I use Visual Studio to remote debug my App Service web app?
-
-For a detailed walkthrough that shows you how to debug your web app by using Visual Studio, see [Remote debug your App Service web app](/archive/blogs/benjaminperkins/remote-debug-your-azure-app-service-web-app).
app-service Faq Configuration And Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/faq-configuration-and-management.md
- Title: Configuration FAQs
-description: Get answers to frequently asked questions about configuration and management issues for Azure App Service.
--
-tags: top-support-issue
-- Previously updated : 10/30/2018----
-# Configuration and management FAQs for Web Apps in Azure
-
-This article has answers to frequently asked questions (FAQs) about configuration and management issues for the [Web Apps feature of Azure App Service](https://azure.microsoft.com/services/app-service/web/).
--
-## Are there limitations I should be aware of if I want to move App Service resources?
-
-If you plan to move App Service resources to a new resource group or subscription, there are a few limitations to be aware of. For more information, see [App Service limitations](../azure-resource-manager/management/move-limitations/app-service-move-limitations.md).
-
-## How do I use a custom domain name for my web app?
-
-For answers to common questions about using a custom domain name with your Azure web app, see our seven-minute video [Add a custom domain name](https://channel9.msdn.com/blogs/Azure-App-Service-Self-Help/Add-a-Custom-Domain-Name). The video offers a walkthrough of how to add a custom domain name. It describes how to use your own URL instead of the *.azurewebsites.net URL with your App Service web app. You also can see a detailed walkthrough of [how to map a custom domain name](app-service-web-tutorial-custom-domain.md).
--
-## How do I purchase a new custom domain for my web app?
-
-To learn how to purchase and set up a custom domain for your App Service web app, see [Buy and configure a custom domain name in App Service](manage-custom-dns-buy-domain.md).
--
-## How do I upload and configure an existing TLS/SSL certificate for my web app?
-
-To learn how to upload and set up an existing custom TLS/SSL certificate, see [Add a TLS/SSL certificate to your App Service app](configure-ssl-certificate.md).
--
-## How do I purchase and configure a new TLS/SSL certificate in Azure for my web app?
-
-To learn how to purchase and set up a TLS/SSL certificate for your App Service web app, see [Add a TLS/SSL certificate to your App Service app](configure-ssl-certificate.md).
--
-## How do I move Application Insights resources?
-
-Currently, Azure Application Insights doesn't support the move operation. If your original resource group includes an Application Insights resource, you cannot move that resource. If you include the Application Insights resource when you try to move an App Service app, the entire move operation fails. However, Application Insights and the App Service plan do not need to be in the same resource group as the app for the app to function correctly.
-
-For more information, see [App Service limitations](../azure-resource-manager/management/move-limitations/app-service-move-limitations.md).
-
-## Where can I find a guidance checklist and learn more about resource move operations?
-
-[App Service limitations](../azure-resource-manager/management/move-limitations/app-service-move-limitations.md) shows you how to move resources to either a new subscription or to a new resource group in the same subscription. You can get information about the resource move checklist, learn which services support the move operation, and learn more about App Service limitations and other topics.
-
-## How do I set the server time zone for my web app?
-
-To set the server time zone for your web app:
-
-1. In the Azure portal, in your App Service subscription, go to the **Application settings** menu.
-2. Under **App settings**, add this setting:
- * Key = WEBSITE_TIME_ZONE
- * Value = *The time zone you want*
-3. Select **Save**.
-
-For App Service apps that run on Windows, see the output of the Windows `tzutil /L` command. Use the value from the second line of each entry, for example, "Tonga Standard Time". Some of these values are also listed in the **Timezone** column in [Default Time Zones](/windows-hardware/manufacture/desktop/default-time-zones).
-
-For App Service apps that run on Linux, set a value from the [IANA TZ database](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones), for example, "America/Adak".
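-
-For example, the following Azure CLI command sets the time zone app setting. This is a minimal sketch; *MyResourceGroup* and *MyWebApp* are placeholder names, and the value shown applies to a Windows app:
-
-```bash
-# Set the WEBSITE_TIME_ZONE app setting (placeholder resource names)
-az webapp config appsettings set --resource-group MyResourceGroup --name MyWebApp \
-    --settings WEBSITE_TIME_ZONE="Tonga Standard Time"
-```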
-
-## Why do my continuous WebJobs sometimes fail?
-
-By default, web apps are unloaded if they are idle for a set period of time. This lets the system conserve resources. In Basic and Standard plans, you can turn on the **Always On** setting to keep the web app loaded all the time. If your web app runs continuous WebJobs, you should turn on **Always On**, or the WebJobs might not run reliably. For more information, see [Create a continuously running WebJob](webjobs-create.md#CreateContinuous).
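-
-You can also turn on **Always On** with the Azure CLI. A minimal sketch, assuming placeholder resource names:
-
-```bash
-# Keep the app loaded at all times (requires Basic tier or higher)
-az webapp config set --resource-group MyResourceGroup --name MyWebApp --always-on true
-```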
-
-## How do I get the outbound IP address for my web app?
-
-To get the list of outbound IP addresses for your web app:
-
-1. In the Azure portal, on your web app blade, go to the **Properties** menu.
-2. Search for **outbound ip addresses**.
-
-The list of outbound IP addresses appears.
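-
-You can retrieve the same information with the Azure CLI. A minimal sketch with placeholder names:
-
-```bash
-# List the current outbound IP addresses
-az webapp show --resource-group MyResourceGroup --name MyWebApp \
-    --query outboundIpAddresses --output tsv
-
-# List all possible outbound IP addresses (across pricing tiers)
-az webapp show --resource-group MyResourceGroup --name MyWebApp \
-    --query possibleOutboundIpAddresses --output tsv
-```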
-
-To learn how to get the outbound IP address if your website is hosted in an App Service Environment, see [Outbound network addresses](environment/app-service-app-service-environment-network-architecture-overview.md#outbound-network-addresses).
-
-## How do I get a reserved or dedicated inbound IP address for my web app?
-
-To set up a dedicated or reserved IP address for inbound calls made to your Azure app website, install and configure an IP-based TLS/SSL certificate.
-
-Note that to use a dedicated or reserved IP address for inbound calls, your App Service plan must be in a Basic or higher service plan.
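-
-As a sketch of the approach, the following Azure CLI command creates an IP-based TLS/SSL binding for a certificate that's already uploaded; the names and thumbprint are placeholders:
-
-```bash
-# An IP-based SSL binding assigns a dedicated inbound IP address to the app
-az webapp config ssl bind --resource-group MyResourceGroup --name MyWebApp \
-    --certificate-thumbprint <thumbprint> --ssl-type IP
-```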
-
-## Can I export my App Service certificate to use outside Azure, such as for a website hosted elsewhere?
-
-Yes, you can export it for use outside Azure. For more information, see [FAQs for App Service certificates and custom domains](https://social.msdn.microsoft.com/Forums/azure/f3e6faeb-5ed4-435a-adaa-987d5db43b80/faq-on-app-service-certificates-and-custom-domains?forum=windowsazurewebsitespreview).
-
-## Can I export my App Service certificate to use with other Azure cloud services?
-
-The portal provides a first-class experience for deploying an App Service certificate through Azure Key Vault to App Service apps. However, we have been receiving requests from customers to use these certificates outside the App Service platform, for example, with Azure Virtual Machines. To learn how to create a local PFX copy of your App Service certificate so you can use the certificate with other Azure resources, see [Create a local PFX copy of an App Service certificate](https://blogs.msdn.microsoft.com/appserviceteam/2017/02/24/creating-a-local-pfx-copy-of-app-service-certificate/).
-
-For more information, see [FAQs for App Service certificates and custom domains](https://social.msdn.microsoft.com/Forums/azure/f3e6faeb-5ed4-435a-adaa-987d5db43b80/faq-on-app-service-certificates-and-custom-domains?forum=windowsazurewebsitespreview).
--
-## Why do I see the message "Partially Succeeded" when I try to back up my web app?
-
-A common cause of backup failure is that some files are in use by the application. Files that are in use are locked while you perform the backup. This prevents these files from being backed up and might result in a "Partially Succeeded" status. You can potentially prevent this from occurring by excluding files from the backup process. You can choose to back up only what is needed. For more information, see [Backup just the important parts of your site with Azure web apps](https://zainrizvi.io/blog/creating-partial-backups-of-your-site-with-azure-web-apps/).
-
-## How do I remove a header from the HTTP response?
-
-To remove the headers from the HTTP response, update your site's web.config file. For more information, see [Remove standard server headers on your Azure websites](https://azure.microsoft.com/blog/removing-standard-server-headers-on-windows-azure-web-sites/).
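-
-For example, the following web.config fragment removes the `X-Powered-By` header. This is a minimal sketch of the approach described in the blog post; the header shown is illustrative:
-
-```xml
-<system.webServer>
-  <httpProtocol>
-    <customHeaders>
-      <!-- Remove a standard header from every HTTP response -->
-      <remove name="X-Powered-By" />
-    </customHeaders>
-  </httpProtocol>
-</system.webServer>
-```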
-
-## Is App Service compliant with PCI Standard 3.0 and 3.1?
-
-Currently, the Web Apps feature of Azure App Service is in compliance with PCI Data Security Standard (DSS) version 3.0 Level 1. PCI DSS version 3.1 is on our roadmap. Planning is already underway for how adoption of the latest standard will proceed.
-
-PCI DSS version 3.1 certification requires disabling Transport Layer Security (TLS) 1.0. Currently, disabling TLS 1.0 is not an option for most App Service plans. However, if you use App Service Environment or are willing to migrate your workload to App Service Environment, you can get greater control of your environment and disable TLS 1.0 by contacting Azure Support. In the near future, we plan to make these settings accessible to users.
-
-For more information, see [Microsoft Azure App Service web app compliance with PCI Standard 3.0 and 3.1](https://support.microsoft.com/help/3124528).
-
-## How do I use the staging environment and deployment slots?
-
-In Standard and Premium App Service plans, when you deploy your web app to App Service, you can deploy to a separate deployment slot instead of to the default production slot. Deployment slots are live web apps that have their own host names. Web app content and configuration elements can be swapped between two deployment slots, including the production slot.
-
-For more information about using deployment slots, see [Set up a staging environment in App Service](deploy-staging-slots.md).
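-
-For example, you can swap a staging slot into production with the Azure CLI. A minimal sketch, assuming placeholder resource names and a slot named *staging*:
-
-```bash
-# Swap the staging slot into the production slot
-az webapp deployment slot swap --resource-group MyResourceGroup --name MyWebApp \
-    --slot staging --target-slot production
-```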
-
-## How do I access and review WebJob logs?
-
-To review WebJob logs:
-
-1. Sign in to your **Kudu website** (`https://*yourwebsitename*.scm.azurewebsites.net`).
-2. Select the WebJob.
-3. Select the **Toggle Output** button.
-4. To download the output file, select the **Download** link.
-5. For individual runs, select **Individual Invoke**.
-6. Select the **Toggle Output** button.
-7. Select the download link.
-
-## I'm trying to use Hybrid Connections with SQL Server. Why do I see the message "System.OverflowException: Arithmetic operation resulted in an overflow"?
-
-If you use Hybrid Connections to access SQL Server, a Microsoft .NET update on May 10, 2016, might cause connections to fail. You might see this message:
-
-```
-Exception: System.Data.Entity.Core.EntityException: The underlying provider failed on Open. ---> System.OverflowException: Arithmetic operation resulted in an overflow. or (64 bit Web app) System.OverflowException: Array dimensions exceeded supported range, at System.Data.SqlClient.TdsParser.ConsumePreLoginHandshake
-```
-
-### Resolution
-
-The exception was caused by an issue with the Hybrid Connection Manager that has since been fixed. Be sure to [update your Hybrid Connection Manager](https://go.microsoft.com/fwlink/?LinkID=841308) to resolve this issue.
-
-## How do I add a URL rewrite rule?
-
-To add a URL rewrite rule, create a web.config file with the relevant config entries in the **wwwroot** folder. The following example shows a typical rule.
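-
-This web.config fragment is a sketch of one common rewrite rule, redirecting all HTTP requests to HTTPS with the IIS URL Rewrite module. The rule name and pattern are illustrative; adapt them to your scenario:
-
-```xml
-<system.webServer>
-  <rewrite>
-    <rules>
-      <!-- Redirect all HTTP traffic to HTTPS (illustrative rule) -->
-      <rule name="Redirect to HTTPS" stopProcessing="true">
-        <match url="(.*)" />
-        <conditions>
-          <add input="{HTTPS}" pattern="^OFF$" />
-        </conditions>
-        <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
-      </rule>
-    </rules>
-  </rewrite>
-</system.webServer>
-```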
-
-## How do I control inbound traffic to App Service?
-
-At the site level, you have two options for controlling inbound traffic to App Service:
-
-* Turn on dynamic IP restrictions. To learn how to turn on dynamic IP restrictions, see [IP and domain restrictions for Azure websites](https://azure.microsoft.com/blog/ip-and-domain-restrictions-for-windows-azure-web-sites/).
-* Turn on ModSecurity. To learn how to turn on ModSecurity, see [ModSecurity web application firewall on Azure websites](https://azure.microsoft.com/blog/modsecurity-for-azure-websites/).
-
-If you use App Service Environment, you can use [Barracuda firewall](https://azure.microsoft.com/blog/configuring-barracuda-web-application-firewall-for-azure-app-service-environment/).
-
-## How do I block ports in an App Service web app?
-
-In the App Service shared tenant environment, it is not possible to block specific ports because of the nature of the infrastructure. TCP ports 4020, 4022, and 4024 also might be open for Visual Studio remote debugging.
-
-In App Service Environment, you have full control over inbound and outbound traffic. You can use Network Security Groups to restrict or block specific ports. For more information about App Service Environment, see [Introducing App Service Environment](https://azure.microsoft.com/blog/introducing-app-service-environment/).
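-
-As a sketch of the approach in App Service Environment, the following Azure CLI command adds a Network Security Group rule that blocks a port; the NSG name, rule name, and port are placeholders:
-
-```bash
-# Deny inbound TCP traffic on port 3389 to the ASE subnet (placeholder names)
-az network nsg rule create \
-    --resource-group MyResourceGroup \
-    --nsg-name MyAseNsg \
-    --name deny-rdp \
-    --priority 200 \
-    --direction Inbound \
-    --access Deny \
-    --protocol Tcp \
-    --destination-port-ranges 3389
-```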
-
-## How do I capture an F12 trace?
-
-You have two options for capturing an F12 trace:
-
-* F12 HTTP trace
-* F12 console output
-
-### F12 HTTP trace
-
-1. In Internet Explorer, go to your website. It's important to sign in before you do the next steps. Otherwise, the F12 trace captures sensitive sign-in data.
-2. Press F12.
-3. Verify that the **Network** tab is selected, and then select the green **Play** button.
-4. Do the steps that reproduce the issue.
-5. Select the red **Stop** button.
-6. Select the **Save** button (disk icon), and save the HAR file (in Internet Explorer and Microsoft Edge) *or* right-click the HAR file, and then select **Save as HAR with content** (in Chrome).
-
-### F12 console output
-
-1. Select the **Console** tab.
-2. For each tab that contains items (**Error**, **Warning**, or **Information**), select the tab. If the tab isn't selected, the tab icon is gray or black when you move the cursor away from it.
-3. Right-click in the message area of the pane, and then select **Copy all**.
-4. Paste the copied text in a file, and then save the file.
-
-To view a HAR file, you can use the [HAR viewer](http://www.softwareishard.com/har/viewer/).
-
-## Why do I get an error when I try to connect an App Service web app to a virtual network that is connected to ExpressRoute?
-
-If you try to connect an Azure web app to a virtual network that's connected to Azure ExpressRoute, it fails. The following message appears: "Gateway is not a VPN gateway."
-
-Currently, you cannot have point-to-site VPN connections to a virtual network that is connected to ExpressRoute. A point-to-site VPN and ExpressRoute cannot coexist for the same virtual network. For more information, see [ExpressRoute and site-to-site VPN connections limits and limitations](../expressroute/expressroute-howto-coexist-classic.md#limits-and-limitations).
-
-## How do I connect an App Service web app to a virtual network that has a static routing (policy-based) gateway?
-
-Currently, connecting an App Service web app to a virtual network that has a static routing (policy-based) gateway is not supported. If your target virtual network already exists, it must have point-to-site VPN enabled, with a dynamic routing gateway, before it can be connected to an app. If your gateway is set to static routing, you cannot enable a point-to-site VPN.
-
-For more information, see [Integrate an app with an Azure virtual network](web-sites-integrate-with-vnet.md).
-
-## In my App Service Environment, why can I create only one App Service plan, even though I have two workers available?
-
-To provide fault tolerance, App Service Environment requires that each worker pool have at least one additional compute resource. The additional compute resource cannot be assigned a workload.
-
-For more information, see [How to create an App Service Environment](environment/app-service-web-how-to-create-an-app-service-environment.md).
-
-## Why do I see timeouts when I try to create an App Service Environment?
-
-Sometimes, creating an App Service Environment fails. In that case, you see the following error in the Activity logs:
-```
-ResourceID: /subscriptions/{SubscriptionID}/resourceGroups/Default-Networking/providers/Microsoft.Web/hostingEnvironments/{ASEname}
-Error:{"error":{"code":"ResourceDeploymentFailure","message":"The resource provision operation did not complete within the allowed timeout period."}}
-```
-
-To resolve this, make sure that none of the following conditions are true:
-* The subnet is too small.
-* The subnet is not empty.
-* ExpressRoute prevents the network connectivity requirements of an App Service Environment.
-* A bad Network Security Group prevents the network connectivity requirements of an App Service Environment.
-* Forced tunneling is turned on.
-
-For more information, see [Frequent issues when deploying (creating) a new Azure App Service Environment](/archive/blogs/waws/most-frequent-issues-when-deploying-creating-a-new-azure-app-service-environment-ase).
-
-## Why can't I delete my App Service plan?
-
-You can't delete an App Service plan if any App Service apps are associated with the App Service plan. Before you delete an App Service plan, remove all associated App Service apps from the App Service plan.
-
-## How do I schedule a WebJob?
-
-You can create a scheduled WebJob by using Cron expressions:
-
-1. Create a settings.job file.
-2. In this JSON file, include a schedule property by using a Cron expression:
- ```json
- { "schedule": "{second} {minute} {hour} {day} {month} {day of the week}" }
- ```
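-
-For example, the following settings.job content runs the WebJob every five minutes. The six-field expression includes a seconds field; the schedule shown is just an illustration:
-
-```json
-{ "schedule": "0 */5 * * * *" }
-```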
-
-For more information about scheduled WebJobs, see [Create a scheduled WebJob by using a Cron expression](webjobs-create.md#CreateScheduledCRON).
-
-## How do I perform penetration testing for my App Service app?
-
-To perform penetration testing, [submit a request](https://portal.msrc.microsoft.com/engage/pentest).
-
-## How do I configure a custom domain name for an App Service web app that uses Traffic Manager?
-
-To learn how to use a custom domain name with an App Service app that uses Azure Traffic Manager for load balancing, see [Configure a custom domain name for an Azure web app with Traffic Manager](configure-domain-traffic-manager.md).
-
-## My App Service certificate is flagged for fraud. How do I resolve this?
--
-During the domain verification of an App Service certificate purchase, you might see the following message:
-
-"Your certificate has been flagged for possible fraud. The request is currently under review. If the certificate does not become usable within 24 hours, please contact Azure Support."
-
-As the message indicates, this fraud verification process might take up to 24 hours to complete. During this time, you'll continue to see the message.
-
-If your App Service certificate continues to show this message after 24 hours, please run the following PowerShell script. The script contacts the [certificate provider](https://www.godaddy.com/) directly to resolve the issue.
-
-```powershell
-# Sign in and select the subscription that contains the certificate order
-Connect-AzAccount
-Set-AzContext -SubscriptionId <subId>
-# The email address that should receive the resent verification emails
-$actionProperties = @{
-    "Name"= "<Customer Email Address>"
-    };
-# Ask the certificate order to resend the verification request emails
-Invoke-AzResourceAction -ResourceGroupName "<App Service Certificate Resource Group Name>" -ResourceType Microsoft.CertificateRegistration/certificateOrders -ResourceName "<App Service Certificate Resource Name>" -Action resendRequestEmails -Parameters $actionProperties -ApiVersion 2015-08-01 -Force
-```
-
-## How do authentication and authorization work in App Service?
-
-For detailed documentation for authentication and authorization in App Service, see the docs for the various identity provider sign-ins:
-* [Azure Active Directory](configure-authentication-provider-aad.md)
-* [Facebook](configure-authentication-provider-facebook.md)
-* [Google](configure-authentication-provider-google.md)
-* [Microsoft Account](configure-authentication-provider-microsoft.md)
-* [Twitter](configure-authentication-provider-twitter.md)
-
-## How do I redirect the default *.azurewebsites.net domain to my Azure web app's custom domain?
-
-When you create a new website by using Web Apps in Azure, a default *sitename*.azurewebsites.net domain is assigned to your site. If you add a custom host name to your site and don't want users to be able to access your default *.azurewebsites.net domain, you can redirect the default URL. To learn how to redirect all traffic from your website's default domain to your custom domain, see [Redirect the default domain to your custom domain in Azure web apps](https://zainrizvi.io/blog/block-default-azure-websites-domain/).
-
-## How do I determine which version of .NET is installed in App Service?
-
-The quickest way to find the version of Microsoft .NET that's installed in App Service is by using the Kudu console. You can access the Kudu console from the portal or by using the URL of your App Service app. For detailed instructions, see [Determine the installed .NET version in App Service](/archive/blogs/waws/how-to-determine-the-installed-net-version-in-azure-app-services).
-
-## Why isn't Autoscale working as expected?
-
-If Azure Autoscale hasn't scaled in or scaled out the web app instance as you expected, you might be running into a scenario in which we intentionally choose not to scale to avoid an infinite loop due to "flapping." This usually happens when there isn't an adequate margin between the scale-out and scale-in thresholds. To learn how to avoid "flapping" and to read about other Autoscale best practices, see [Autoscale best practices](../azure-monitor/autoscale/autoscale-best-practices.md#autoscale-best-practices).
-
-## Why does Autoscale sometimes scale only partially?
-
-Autoscale is triggered when metrics exceed preconfigured boundaries. Sometimes, you might notice that the capacity is only partially filled compared to what you expected. This might occur when the number of instances you want isn't available. In that scenario, Autoscale partially fills in with the available number of instances. Autoscale then runs the rebalance logic to get more capacity and allocates the remaining instances. Note that this might take a few minutes.
-
-If you don't see the expected number of instances after a few minutes, it might be because the partial refill was enough to bring the metrics within the boundaries. Or, Autoscale might have scaled down because it reached the lower metrics boundary.
-
-If none of these conditions apply and the problem persists, submit a support request.
-
-## How do I turn on HTTP compression for my content?
-
-To turn on compression both for static and dynamic content types, add the following code to the application-level web.config file:
-
-```xml
-<system.webServer>
- <urlCompression doStaticCompression="true" doDynamicCompression="true" />
-</system.webServer>
-```
-
-You also can specify the dynamic and static MIME types that you want to compress, as shown in the example below. For more information, see our response to a forum question in [httpCompression settings on a simple Azure website](https://social.msdn.microsoft.com/Forums/azure/890b6d25-f7dd-4272-8970-da7798bcf25d/httpcompression-settings-on-a-simple-azure-website?forum=windowsazurewebsitespreview).
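-
-The following fragment is a sketch that adds JSON to the dynamic compression types and SVG to the static types; the MIME types shown are illustrative:
-
-```xml
-<system.webServer>
-  <httpCompression>
-    <dynamicTypes>
-      <!-- Compress JSON API responses -->
-      <add mimeType="application/json" enabled="true" />
-    </dynamicTypes>
-    <staticTypes>
-      <!-- Compress SVG images -->
-      <add mimeType="image/svg+xml" enabled="true" />
-    </staticTypes>
-  </httpCompression>
-</system.webServer>
-```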
-
-## How do I migrate from an on-premises environment to App Service?
-
-To migrate sites from Windows and Linux web servers to App Service, you can use Azure App Service Migration Assistant. The migration tool creates web apps and databases in Azure as needed, and then publishes the content. For more information, see [Azure App Service Migration Assistant](https://appmigration.microsoft.com/).
-
-## Why is my certificate issued for 11 months and not for a full year?
-
-For all certificates issued after 9/1/2020, the maximum duration is now 397 days. Certificates issued before 9/1/2020 have a maximum validity of 825 days until they are renewed or rekeyed. Any certificate renewed after 9/1/2020 is affected by this change, and users may notice a shorter validity on their renewed certificates.
-GoDaddy has implemented a subscription service that meets the new requirements while honoring existing customer certificates. Thirty days before the newly issued certificate expires, the service automatically issues a second certificate that extends the duration to the original expiration date. App Service is working with GoDaddy to address this change and make sure that our customers receive the full duration of their certificates.
app-service Faq Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/faq-deployment.md
- Title: Deployment FAQs - Azure App Service | Microsoft Docs
-description: Get answers to frequently asked questions about deployment for the Web Apps feature of Azure App Service.
--
-tags: top-support-issue
-- Previously updated : 11/01/2018----
-# Deployment FAQs for Web Apps in Azure
-
-This article has answers to frequently asked questions (FAQs) about deployment issues for the [Web Apps feature of Azure App Service](https://azure.microsoft.com/services/app-service/web/).
--
-## I am just getting started with App Service web apps. How do I publish my code?
-
-Here are some options for publishing your web app code:
-
-* Deploy by using Visual Studio. If you have the Visual Studio solution, right-click the web application project, and then select **Publish**.
-* Deploy by using an FTP client. In the Azure portal, download the publish profile for the web app that you want to deploy your code to. Then, upload the files to \site\wwwroot by using the same publish profile FTP credentials.
-
-For more information, see [Deploy your app to App Service](deploy-local-git.md).
-
-## I see an error message when I try to deploy from Visual Studio. How do I resolve this error?
-
-If you see the following message, you might be using an older version of the SDK: "Error during deployment for resource 'YourResourceName' in resource group 'YourResourceGroup': MissingRegistrationForLocation: The subscription is not registered for the resource type 'components' in the location 'Central US'. Re-register for this provider in order to have access to this location."
-
-To resolve this error, upgrade to the [latest SDK](https://azure.microsoft.com/downloads/). If you see this message and you have the latest SDK, submit a support request.
-
-## How do I deploy an ASP.NET application from Visual Studio to App Service?
-<a id="deployasp"></a>
-
-The tutorial [Create your first ASP.NET web app in Azure in five minutes](quickstart-dotnetcore.md) shows you how to deploy an ASP.NET web application to a web app in App Service by using Visual Studio.
-
-## What are the different types of deployment credentials?
-
-App Service supports two types of credentials for local Git deployment and FTP/S deployment. For more information about how to configure deployment credentials, see [Configure deployment credentials for App Service](deploy-configure-credentials.md).
-
-## What is the file or directory structure of my App Service web app?
-
-For information about the file structure of your App Service app, see [File structure in Azure](https://github.com/projectkudu/kudu/wiki/File-structure-on-azure).
-
-## How do I resolve "FTP Error 550 - There is not enough space on the disk" when I try to FTP my files?
-
-If you see this message, it's likely that you're running into a disk quota in the service plan for your web app. You might need to scale up to a higher service tier based on your disk space needs. For more information about pricing plans and resource limits, see [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/).
-
-## How do I set up continuous deployment for my App Service web app?
-
-You can set up continuous deployment from several resources, including Azure DevOps, OneDrive, GitHub, Bitbucket, Dropbox, and other Git repositories. These options are available in the portal. [Continuous deployment to App Service](deploy-continuous-deployment.md) is a helpful tutorial that explains how to set up continuous deployment.
-
-## How do I troubleshoot issues with continuous deployment from GitHub and Bitbucket?
-
-For help investigating issues with continuous deployment from GitHub or Bitbucket, see [Investigating continuous deployment](https://github.com/projectkudu/kudu/wiki/Investigating-continuous-deployment).
-
-## I can't FTP to my site and publish my code. How do I resolve this issue?
-
-To resolve FTP issues:
-
-1. Verify that you're entering the correct host name and credentials. For detailed information about different types of credentials and how to use them, see [Deployment credentials](https://github.com/projectkudu/kudu/wiki/Deployment-credentials).
-2. Verify that the FTP ports are not blocked by a firewall. The ports should have these settings:
- * FTP control connection port: 21
- * FTP data connection port: 989, 10001-10300
-
-## How do I publish my code to App Service?
-
-The Azure Quickstart is designed to help you deploy your app by using the deployment stack and method of your choice. To use the Quickstart, in the Azure portal, go to your App Service app, and under **Deployment**, select **Quickstart**.
-
-## Why does my app sometimes restart after deployment to App Service?
-
-To learn about the circumstances under which an application deployment might result in a restart, see [Deployment vs. runtime issues](https://github.com/projectkudu/kudu/wiki/Deployment-vs-runtime-issues#deployments-and-web-app-restarts). As the article describes, App Service deploys files to the wwwroot folder. It never directly restarts your app.
-
-## How do I integrate Azure DevOps code with App Service?
-
-You have two options for using continuous deployment with Azure DevOps:
-
-* Use a Git project. Connect via App Service by using the Deployment Center.
-* Use a Team Foundation Version Control (TFVC) project. Deploy by using the build agent for App Service.
-
-Continuous code deployment for both these options depends on existing developer workflows and check-in procedures. For more information, see these articles:
-
-* [Implement continuous deployment of your app to an Azure website](https://www.visualstudio.com/docs/release/examples/azure/azure-web-apps-from-build-and-release-hubs)
-* [Set up an Azure DevOps organization so it can deploy to a web app](https://github.com/projectkudu/kudu/wiki/Setting-up-a-VSTS-account-so-it-can-deploy-to-a-Web-App)
-
-## How do I use FTP or FTPS to deploy my app to App Service?
-
-For information about using FTP or FTPS to deploy your web app to App Service, see [Deploy your app to App Service by using FTP/S](deploy-ftp.md).
app-service Faq Open Source Technologies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/faq-open-source-technologies.md
- Title: Open-source technologies FAQs
-description: Get answers to frequently asked questions about open-source technologies in Azure App Service.
--
-tags: top-support-issue
-- Previously updated : 10/31/2018------
-# Open-source technologies FAQs for Web Apps in Azure
-
-This article has answers to frequently asked questions (FAQs) about issues with open-source technologies for the [Web Apps feature of Azure App Service](https://azure.microsoft.com/services/app-service/web/).
--
-## How do I turn on PHP logging to troubleshoot PHP issues?
-
-To turn on PHP logging:
-
-1. Sign in to your **Kudu website** (`https://*yourwebsitename*.scm.azurewebsites.net`).
-2. In the top menu, select **Debug Console** > **CMD**.
-3. Select the **Site** folder.
-4. Select the **wwwroot** folder.
-5. Select the **+** icon, and then select **New File**.
-6. Set the file name to **.user.ini**.
-7. Select the pencil icon next to **.user.ini**.
-8. In the file, add this code: `log_errors=on`
-9. Select **Save**.
-10. Select the pencil icon next to **wp-config.php**.
-11. Change the text to the following code:
- ```php
-    //Enable WP_DEBUG mode
-    define('WP_DEBUG', true);
-    //Enable debug logging to /wp-content/debug.log
-    define('WP_DEBUG_LOG', true);
-    //Suppress errors and warnings to screen
-    define('WP_DEBUG_DISPLAY', false);
-    //Suppress PHP errors to screen
-    ini_set('display_errors', 0);
- ```
-12. In the Azure portal, in the web app menu, restart your web app.
-
-For more information, see [Enable WordPress error logs](/archive/blogs/azureossds/logging-php-errors-in-wordpress-2).
-
-## How do I log Python application errors in apps that are hosted in App Service?
-
-## How do I change the version of the Node.js application that is hosted in App Service?
-
-To change the version of the Node.js application, you can use one of the following options:
-
-* In the Azure portal, use **App settings**.
- 1. In the Azure portal, go to your web app.
- 2. On the **Settings** blade, select **Application settings**.
- 3. In **App settings**, you can include WEBSITE_NODE_DEFAULT_VERSION as the key, and the version of Node.js you want as the value.
- 4. Go to your **Kudu console** (`https://*yourwebsitename*.scm.azurewebsites.net`).
- 5. To check the Node.js version, enter the following command:
- ```
- node -v
- ```
-* Modify the iisnode.yml file. Changing the Node.js version in the iisnode.yml file only sets the runtime environment that iisnode uses. The Kudu console and other processes still use the Node.js version that is set in **App settings** in the Azure portal.
-
- To set the iisnode.yml manually, create an iisnode.yml file in your app root folder. In the file, include the following line:
- ```yml
- nodeProcessCommandLine: "D:\Program Files (x86)\nodejs\5.9.1\node.exe"
- ```
-
-* Set the iisnode.yml file by using package.json during source control deployment.
- The Azure source control deployment process involves the following steps:
- 1. Moves content to the Azure web app.
- 2. Creates a default deployment script, if there isn't one (deploy.cmd, .deployment files) in the web app root folder.
- 3. Runs the deployment script, which creates an iisnode.yml file if you specify the Node.js version in the `engines` section of the package.json file, for example, `"engines": {"node": "5.9.1","npm": "3.7.3"}`.
- 4. The iisnode.yml file has the following line of code:
- ```yml
- nodeProcessCommandLine: "D:\Program Files (x86)\nodejs\5.9.1\node.exe"
- ```
-
-## I see the message "Error establishing a database connection" in my WordPress app that's hosted in App Service. How do I troubleshoot this?
-
-If you see this error in your Azure WordPress app, to enable php_errors.log and debug.log, complete the steps detailed in [Enable WordPress error logs](/archive/blogs/azureossds/logging-php-errors-in-wordpress-2).
-
-When the logs are enabled, reproduce the error, and then check the logs to see if you are running out of connections:
-```
-[09-Oct-2015 00:03:13 UTC] PHP Warning: mysqli_real_connect(): (HY000/1226): User 'abcdefghijk79' has exceeded the 'max_user_connections' resource (current value: 4) in D:\home\site\wwwroot\wp-includes\wp-db.php on line 1454
-```
-
-If you see this error in your debug.log or php_errors.log files, your app is exceeding the number of connections. If you're hosting on ClearDB, verify the number of connections that are available in your [service plan](https://www.cleardb.com/pricing.view).
-
-## How do I debug a Node.js app that's hosted in App Service?
-
-1. Go to your **Kudu console** (`https://*yourwebsitename*.scm.azurewebsites.net/DebugConsole`).
-2. Go to your application logs folder (D:\home\LogFiles\Application).
-3. In the logging_errors.txt file, check for content.
-
-## How do I install native Python modules in an App Service web app or API app?
-
-Some packages might not install by using pip in Azure. The package might not be available on the Python Package Index, or a compiler might be required (a compiler is not available on the computer that is running the web app in App Service). For information about installing native modules in App Service web apps and API apps, see [Install Python modules in App Service](/archive/blogs/azureossds/install-native-python-modules-on-azure-web-apps-api-apps).
-
-## How do I deploy a Django app to App Service by using Git and the new version of Python?
-
-For information about installing Django, see [Deploying a Django app to App Service](/archive/blogs/azureossds/deploying-django-app-to-azure-app-services-using-git-and-new-version-of-python).
-
-## Where are the Tomcat log files located?
-
-For Azure Marketplace and custom deployments:
-
-* Folder location: D:\home\site\wwwroot\bin\apache-tomcat-8.0.33\logs
-* Files of interest:
- * catalina.*yyyy-mm-dd*.log
- * host-manager.*yyyy-mm-dd*.log
- * localhost.*yyyy-mm-dd*.log
- * manager.*yyyy-mm-dd*.log
- * site_access_log.*yyyy-mm-dd*.log
--
-For portal **App settings** deployments:
-
-* Folder location: D:\home\LogFiles
-* Files of interest:
- * catalina.*yyyy-mm-dd*.log
- * host-manager.*yyyy-mm-dd*.log
- * localhost.*yyyy-mm-dd*.log
- * manager.*yyyy-mm-dd*.log
- * site_access_log.*yyyy-mm-dd*.log
-
-## How do I troubleshoot JDBC driver connection errors?
-
-You might see the following message in your Tomcat logs:
-
-```
-The web application [ROOT] registered the JDBC driver [com.mysql.jdbc.Driver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered
-```
-
-To resolve the error:
-
-1. Remove the sqljdbc*.jar file from your app/lib folder.
-2. If you are using the custom Tomcat or Azure Marketplace Tomcat web server, copy this .jar file to the Tomcat lib folder.
-3. If you are enabling Java from the Azure portal (select **Java 1.8** > **Tomcat server**), copy the sqljdbc*.jar file to the folder that's parallel to your app. Then, add the following classpath setting to the web.config file:
-
- ```xml
- <httpPlatform>
- <environmentVariables>
-        <environmentVariable name="JAVA_OPTS" value=" -Djava.net.preferIPv4Stack=true
-        -Xms128M -classpath %CLASSPATH%;[Path to the sqljdbc*.jar file]" />
- </environmentVariables>
- </httpPlatform>
- ```
-
-## Why do I see errors when I attempt to copy live log files?
-
-If you try to copy live log files for a Java app (for example, Tomcat), you might see this FTP error:
-
-```
-Error transferring file [filename] Copying files from remote side failed.
-
-The process cannot access the file because it is being used by another process.
-```
-
-The error message might vary, depending on the FTP client.
-
-All Java apps have this locking issue. Only Kudu supports downloading this file while the app is running.
-
-Stopping the app allows FTP access to these files.
-
-Another workaround is to write a WebJob that runs on a schedule and copies these files to a different directory. For a sample project, see the [CopyLogsJob](https://github.com/kamilsykora/CopyLogsJob) project.
-
-## Where do I find the log files for Jetty?
-
-For Marketplace and custom deployments, the log file is in the D:\home\site\wwwroot\bin\jetty-distribution-9.1.2.v20140210\logs folder. Note that the folder location depends on the version of Jetty you are using. For example, the path provided here is for Jetty 9.1.2. Look for jetty_*YYYY_MM_DD*.stderrout.log.
-
-For portal **App settings** deployments, the log file is in D:\home\LogFiles. Look for jetty_*YYYY_MM_DD*.stderrout.log.
-
-## Can I send email from my Azure web app?
-
-App Service doesn't have a built-in email feature. For some good alternatives for sending email from your app, see this [Stack Overflow discussion](https://stackoverflow.com/questions/17666161/sending-email-from-azure).
-
-## Why does my WordPress site redirect to another URL?
-
-If you have recently migrated to Azure, WordPress might redirect to the old domain URL. This is caused by a setting in the MySQL database.
-
-WordPress Buddy+ is an Azure Site Extension that you can use to update the redirection URL directly in the database. For more information about using WordPress Buddy+, see [WordPress tools and MySQL migration with WordPress Buddy+](https://www.electrongeek.com/blog/2016/12/21/wordpress-buddy-site-extension-for-app-service-on-windows).
-
-Alternatively, if you prefer to manually update the redirection URL by using SQL queries or PHPMyAdmin, see [WordPress: Redirecting to wrong URL](/archive/blogs/azureossds/wordpress-redirecting-to-wrong-url).
-
-## How do I change my WordPress sign-in password?
-
-If you have forgotten your WordPress sign-in password, you can use WordPress Buddy+ to update it. To reset your password, install the WordPress Buddy+ Azure Site Extension, and then complete the steps described in [WordPress tools and MySQL migration with WordPress Buddy+](https://www.electrongeek.com/blog/2016/12/21/wordpress-buddy-site-extension-for-app-service-on-windows).
-
-## I can't sign in to WordPress. How do I resolve this?
-
-If you find yourself locked out of WordPress after recently installing a plugin, you might have a faulty plugin. WordPress Buddy+ is an Azure Site Extension that can help you disable plugins in WordPress. For more information, see [WordPress tools and MySQL migration with WordPress Buddy+](https://www.electrongeek.com/blog/2016/12/21/wordpress-buddy-site-extension-for-app-service-on-windows).
-
-## How do I migrate my WordPress database?
-
-You have multiple options for migrating the MySQL database that's connected to your WordPress website:
-
-* Developers: Use the [command prompt or PHPMyAdmin](/archive/blogs/azureossds/migrating-data-between-mysql-databases-using-kudu-console-azure-app-service)
-* Non-developers: Use [WordPress Buddy+](https://www.electrongeek.com/blog/2016/12/21/wordpress-buddy-site-extension-for-app-service-on-windows)
-
-## How do I help make WordPress more secure?
-
-To learn about security best practices for WordPress, see [Best practices for WordPress security in Azure](/archive/blogs/azureossds/best-practices-for-wordpress-security-on-azure).
-
-## I am trying to use PHPMyAdmin, and I see the message "Access denied." How do I resolve this?
-
-You might experience this issue if the MySQL in-app feature isn't running yet in this App Service instance. To resolve the issue, try to access your website. This starts the required processes, including the MySQL in-app process. To verify that MySQL in-app is running, in Process Explorer, ensure that mysqld.exe is listed in the processes.
-
-After you ensure that MySQL in-app is running, try to use PHPMyAdmin.
-
-## I get an HTTP 403 error when I try to import or export my MySQL in-app database by using PHPMyadmin. How do I resolve this?
-
-If you are using an older version of Chrome, you might be experiencing a known bug. To resolve the issue, upgrade to a newer version of Chrome. Also try using a different browser, like Internet Explorer or Microsoft Edge, where the issue does not occur.
app-service Monitor App Service Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/monitor-app-service-reference.md
+
+ Title: Monitoring App Service data reference
+description: Important reference material needed when you monitor App Service
+++++ Last updated : 04/16/2021++
+# Monitoring App Service data reference
+
+See [Monitoring App Service](monitor-app-service.md) for details on collecting and analyzing monitoring data for App Service.
+
+## Metrics
+
+This section lists the platform metrics that are automatically collected for App Service.
+
+|Metric Type | Resource Provider / Type Namespace<br/> and link to individual metrics |
+|-|--|
+| App Service Plans | [Microsoft.Web/serverfarms](/azure/azure-monitor/essentials/metrics-supported#microsoftwebserverfarms) |
+| Web apps | [Microsoft.Web/sites](/azure/azure-monitor/essentials/metrics-supported#microsoftwebsites) |
+| Staging slots | [Microsoft.Web/sites/slots](/azure/azure-monitor/essentials/metrics-supported#microsoftwebsitesslots) |
+| App Service Environment | [Microsoft.Web/hostingEnvironments](/azure/azure-monitor/essentials/metrics-supported#microsoftwebhostingenvironments) |
+| App Service Environment Front-end | [Microsoft.Web/hostingEnvironments/multiRolePools](/azure/azure-monitor/essentials/metrics-supported#microsoftwebhostingenvironmentsmultirolepools) |
++
+For more information, see a list of [all platform metrics supported in Azure Monitor](../azure-monitor/platform/metrics-supported.md).
++
+## Metric Dimensions
+
+For more information on what metric dimensions are, see [Multi-dimensional metrics](/azure/azure-monitor/platform/data-platform-metrics#multi-dimensional-metrics).
+
+App Service doesn't have any metrics that contain dimensions.
+
+## Resource logs
+
+> [!NOTE]
+> Azure Monitor integration with App Service is in [preview](https://aka.ms/appsvcblog-azmon).
+>
+
+This section lists the types of resource logs you can collect for App Service.
+
+| Log type | Windows | Windows Container | Linux | Linux Container | Description |
+|-|-|-|-|-|-|
+| AppServiceConsoleLogs | Java SE & Tomcat | Yes | Yes | Yes | Standard output and standard error |
+| AppServiceHTTPLogs | Yes | Yes | Yes | Yes | Web server logs |
+| AppServiceEnvironmentPlatformLogs | Yes | N/A | Yes | Yes | App Service Environment: scaling, configuration changes, and status logs|
+| AppServiceAuditLogs | Yes | Yes | Yes | Yes | Login activity via FTP and Kudu |
+| AppServiceFileAuditLogs | Yes | Yes | TBA | TBA | File changes made to the site content; **only available for Premium tier and above** |
+| AppServiceAppLogs | ASP.NET | ASP.NET | Java SE & Tomcat Images <sup>1</sup> | Java SE & Tomcat Blessed Images <sup>1</sup> | Application logs |
+| AppServiceIPSecAuditLogs | Yes | Yes | Yes | Yes | Requests from IP Rules |
+| AppServicePlatformLogs | TBA | Yes | Yes | Yes | Container operation logs |
+| AppServiceAntivirusScanAuditLogs | Yes | Yes | Yes | Yes | [Anti-virus scan logs](https://azure.github.io/AppService/2020/12/09/AzMon-AppServiceAntivirusScanAuditLogs.html) using Microsoft Defender; **only available for Premium tier** |
+
+<sup>1</sup> For Java SE apps, add "$WEBSITE_AZMON_PREVIEW_ENABLED" to the app settings and set it to 1 or to true.
+
+For reference, see a list of [all resource logs category types supported in Azure Monitor](/azure/azure-monitor/platform/resource-logs-schema).
+
+## Azure Monitor Logs tables
+
+Azure App Service uses Kusto tables from Azure Monitor Logs. You can query these tables with Log Analytics. For a list of App Service tables used by Kusto, see the [Azure Monitor Logs table reference - App Service tables](/azure/azure-monitor/reference/tables/tables-resourcetype#app-services).
+
+## Activity log
+
+The following table lists common operations related to App Service that may be created in the Activity log. This is not an exhaustive list.
+
+| Operation | Description |
+|:|:|
+|Create or Update Web App| App was created or updated|
+|Delete Web App| App was deleted |
+|Create Web App Backup| Backup of app|
+|Get Web App Publishing Profile| Download of publishing profile |
+|Publish Web App| App deployed |
+|Restart Web App| App restarted|
+|Start Web App| App started |
+|Stop Web App| App stopped|
+|Swap Web App Slots| Slots were swapped|
+|Get Web App Slots Differences| Slot differences|
+|Apply Web App Configuration| Applied configuration changes|
+|Reset Web App Configuration| Configuration changes reset|
+|Approve Private Endpoint Connections| Approved private endpoint connections|
+|Network Trace Web Apps| Started network trace|
+|Newpassword Web Apps| New password created |
+|Get Zipped Container Logs for Web App| Get container logs |
+|Restore Web App From Backup Blob| App restored from backup|
+
+For more information on the schema of Activity Log entries, see [Activity Log schema](/azure/azure-monitor/essentials/activity-log-schema).
+
+## See Also
+
+- See [Monitoring Azure App Service](monitor-app-service.md) for a description of monitoring Azure App Service.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
app-service Monitor App Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/monitor-app-service.md
+
+ Title: Monitoring App Service
+description: Start here to learn how to monitor App Service
+++++ Last updated : 04/16/2021++
+# Monitoring App Service
+
+When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data generated by App Service. App Service uses [Azure Monitor](/azure/azure-monitor/overview). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource).
+
+To monitor resources with Azure Monitor, you can also use built-in diagnostics to assist with debugging an App Service app. You'll find more on this capability in [enable diagnostic logging for apps in Azure App Service](troubleshoot-diagnostic-logs.md).
+
+> [!NOTE]
+> Azure Monitor integration with App Service is in [preview](https://aka.ms/appsvcblog-azmon).
+>
+
+## Monitoring data
+
+App Service collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](/azure/azure-monitor/insights/monitor-azure-resource#monitoring-data-from-azure-resources).
+
+See [Monitoring *App Service* data reference](monitor-app-service-reference.md) for detailed information on the metrics and logs created by App Service.
+
+App Service also provides built-in diagnostics to assist with debugging apps. See [Enable diagnostics logging](troubleshoot-diagnostic-logs.md) for more information on enabling the built-in logs. To monitor App Service instances, see [Monitor App Service instances using Health check](monitor-instances-health-check.md).
+
+## Collection and routing
+
+Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
+
+Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
+
+See [Create diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/platform/diagnostic-settings) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for *App Service* are listed in [App Service monitoring data reference](monitor-app-service-reference.md#resource-logs).
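+
+For example, the following Azure CLI command creates a diagnostic setting that sends HTTP logs to a Log Analytics workspace. This is a minimal sketch; the setting name, resource IDs, and log category selection are placeholders for your own values:
+
+```bash
+# Route AppServiceHTTPLogs to a Log Analytics workspace (placeholder IDs and names)
+az monitor diagnostic-settings create \
+    --name "route-app-logs" \
+    --resource "/subscriptions/<sub-id>/resourceGroups/MyResourceGroup/providers/Microsoft.Web/sites/MyWebApp" \
+    --workspace "/subscriptions/<sub-id>/resourceGroups/MyResourceGroup/providers/Microsoft.OperationalInsights/workspaces/MyWorkspace" \
+    --logs '[{"category": "AppServiceHTTPLogs", "enabled": true}]'
+```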
+
+The metrics and logs you can collect are discussed in the following sections.
+
+## Analyzing metrics
+
+You can analyze metrics for *App Service* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](/azure/azure-monitor/platform/metrics-getting-started) for details on using this tool.
+
+For a list of the platform metrics collected for App Service, see [Monitoring App Service data reference metrics](monitor-app-service-reference.md#metrics).
+
+For reference, you can see a list of [all resource metrics supported in Azure Monitor](/azure/azure-monitor/platform/metrics-supported).
+
+## Analyzing logs
+
+Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
+
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](troubleshoot-diagnostic-logs.md#send-logs-to-azure-monitor-preview).
+
+The [Activity log](/azure/azure-monitor/platform/activity-log) is a type of platform log that provides insight into subscription-level events. You can view it independently or route to Azure Monitor Logs. Routing to Azure Monitor Logs gives the benefit of using Log Analytics to run complex queries.
+
+For a list of the types of resource logs collected for App Service, see [Monitoring App Service data reference](monitor-app-service-reference.md#resource-logs).
+
+For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Monitoring App Service data reference](monitor-app-service-reference.md#azure-monitor-logs-tables).
+
+### Sample Kusto queries
+
+> [!IMPORTANT]
+> When you select **Logs** from the App Service menu, Log Analytics is opened with the query scope set to the current resource. This means that log queries will only include data from that resource. If you want to run a query that includes data from other resources or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](/azure/azure-monitor/log-query/scope/) for details.
+
+The following sample query can help you monitor app logs using `AppServiceAppLogs`:
+
+```Kusto
+AppServiceAppLogs
+| project CustomLevel, _ResourceId
+| summarize count() by CustomLevel, _ResourceId
+```
+
+The following sample query can help you monitor HTTP logs using `AppServiceHTTPLogs` where the `HTTP response code` is `500` or higher:
+
+```Kusto
+AppServiceHTTPLogs
+//| where ResourceId = "MyResourceId" // Uncomment to get results for a specific resource Id when querying over a group of Apps
+| where ScStatus >= 500
+| reduce by strcat(CsMethod, ':\\', CsUriStem)
+```
+
+The following sample query can help you monitor HTTP 500 errors by joining `AppServiceConsoleLogs` and `AppServiceHTTPLogs`:
+
+```Kusto
+let myHttp = AppServiceHTTPLogs | where ScStatus == 500 | project TimeGen=substring(TimeGenerated, 0, 19), CsUriStem, ScStatus;
+
+let myConsole = AppServiceConsoleLogs | project TimeGen=substring(TimeGenerated, 0, 19), ResultDescription;
+
+myHttp | join myConsole on TimeGen | project TimeGen, CsUriStem, ScStatus, ResultDescription;
+```
+
+See [Azure Monitor queries for App Service](https://github.com/microsoft/AzureMonitorCommunity/tree/master/Azure%20Services/App%20Services/Queries) for more sample queries.
+
+## Alerts
+
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](/azure/azure-monitor/platform/alerts-metric-overview), [logs](/azure/azure-monitor/platform/alerts-unified-log), and the [activity log](/azure/azure-monitor/platform/activity-log-alerts).
+
+If you're running an application on App Service, [Azure Monitor Application Insights](/azure/azure-monitor/overview#application-insights) may offer additional types of alerts.
+
+The following table lists common and recommended alert rules for App Service.
+
+| Alert type | Condition | Examples |
+|:|:|:|
+| Metric | Average connections| When number of connections exceed a set value|
+| Metric | HTTP 404| When HTTP 404 responses exceed a set value|
+| Metric | HTTP Server Errors| When HTTP 5xx errors exceed a set value|
+| Activity Log | Create or Update Web App | When app is created or updated|
+| Activity Log | Delete Web App | When app is deleted|
+| Activity Log | Restart Web App| When app is restarted|
+| Activity Log | Stop Web App| When app is stopped|
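+
+For example, the following Azure CLI command creates one of the metric alert rules in the table above. This is a sketch with placeholder names; the threshold and windows are illustrative, and you would typically also attach an action group with `--action`:
+
+```bash
+# Alert when HTTP 5xx responses exceed 10 in a 5-minute window (placeholder names)
+az monitor metrics alert create \
+    --name "http-5xx-alert" \
+    --resource-group MyResourceGroup \
+    --scopes "/subscriptions/<sub-id>/resourceGroups/MyResourceGroup/providers/Microsoft.Web/sites/MyWebApp" \
+    --condition "total Http5xx > 10" \
+    --window-size 5m \
+    --evaluation-frequency 1m
+```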
+
+## Next steps
+
+- See [Monitoring App Service data reference](monitor-app-service-reference.md) for a reference of metrics, logs, and other important values created by App Service.
+
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/insights/monitor-azure-resource) for details on monitoring Azure resources.
app-service Quickstart Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-custom-container.md
Title: 'Quickstart: Run a custom container on App Service'
description: Get started with containers on Azure App Service by deploying your first custom container. Previously updated : 10/21/2019 Last updated : 06/30/2021 zone_pivot_groups: app-service-containers-windows-linux
Create an ASP.NET web app by following these steps:
1. Choose **Create a resource** in the upper left-hand corner of the Azure portal.
-1. In the search box above the list of Azure Marketplace resources, search for **Web App for Containers**, and select **Create**.
+1. Under **Popular services**, select **Create** under **Web App**.
-1. In **Web App Create**, choose your subscription and a **Resource Group**. You can create a new resource group if needed.
+1. In **Create Web App**, choose your subscription and a **Resource Group**. You can create a new resource group if needed.
-1. Provide an app name, such as *win-container-demo* and choose **Windows** for **Operating System**. Select **Next: Docker** to continue.
+1. Provide an app name, such as *win-container-demo*. Choose **Docker Container** for **Publish** and **Windows** for **Operating System**. Select **Next: Docker** to continue.
- ![Create a Web App for Containers](media/quickstart-custom-container/create-web-app-continer.png)
+ ![Create a Web App for Containers](media/quickstart-custom-container/create-web-app-container.png)
1. For **Image Source**, choose **Docker Hub** and for **Image and tag**, enter the repository name you copied in [Publish to Docker Hub](#publish-to-docker-hub).
- ![Configure your a Web App for Containers](media/quickstart-custom-container/configure-web-app-continer.png)
+ ![Configure your Web App for Containers](media/quickstart-custom-container/configure-web-app-container.png)
If you have a custom image elsewhere for your web application, such as in [Azure Container Registry](../container-registry/index.yml) or in any other private repository, you can configure it here.
App Service on Linux provides pre-defined application stacks on Linux with suppo
* The [Azure App Service extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureappservice). You can use this extension to create, manage, and deploy Linux Web Apps on the Azure Platform as a Service (PaaS). * The [Docker extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-docker). You can use this extension to simplify the management of local Docker images and commands and to deploy built app images to Azure.
-## Create an image
+## Create a container registry
-To complete this quickstart, you will need a suitable web app image stored in an [Azure Container Registry](../container-registry/index.yml). Follow the instructions in [Quickstart: Create a private container registry using the Azure portal](../container-registry/container-registry-get-started-portal.md), but use the `mcr.microsoft.com/azuredocs/go` image instead of the `hello-world` image. For reference, the [sample Dockerfile is found in Azure Samples repo](https://github.com/Azure-Samples/go-docs-hello-world).
+This quickstart uses Azure Container Registry as the registry of choice. You're free to use other registries, but the steps may differ slightly.
+
+Create a container registry by following the instructions in [Quickstart: Create a private container registry using the Azure portal](../container-registry/container-registry-get-started-portal.md).
> [!IMPORTANT]
-> Be sure to set the **Admin User** option to **Enable** when you create the container registry. You can also set it from the **Access keys** section of your registry page in the Azure portal. This setting is required for App Service access.
+> Be sure to set the **Admin User** option to **Enable** when you create the Azure container registry. You can also set it from the **Access keys** section of your registry page in the Azure portal. This setting is required for App Service access.
## Sign in
-Next, launch VS Code and log into your Azure account using the App Service extension. To do this, select the Azure logo in the Activity Bar, navigate to the **APP SERVICE** explorer, then select **Sign in to Azure** and follow the instructions.
+1. Launch Visual Studio Code.
+1. Select the **Azure** logo in the [Activity Bar](https://code.visualstudio.com/docs/getstarted/userinterface), navigate to the **APP SERVICE** explorer, then select **Sign in to Azure** and follow the instructions.
-![sign in to Azure](./media/quickstart-docker/sign-in.png)
+ ![sign in to Azure](./media/quickstart-docker/sign-in.png)
-## Check prerequisites
+1. In the [Status Bar](https://code.visualstudio.com/docs/getstarted/userinterface) at the bottom, verify your Azure account email address appears. In the **APP SERVICE** explorer, your subscription should be displayed.
+
+1. In the Activity Bar, select the **Docker** logo. In the **REGISTRIES** explorer, verify that the container registry you created appears.
-Now you can check whether you have all the prerequisites installed and configured properly.
+ ![Screenshot shows the Registries value with Azure expanded.](./media/quickstart-docker/registries.png)
-In VS Code, you should see your Azure email address in the Status Bar and your subscription in the **APP SERVICE** explorer.
+## Check prerequisites
-Next, verify that you have Docker installed and running. The following command will display the Docker version if it is running.
+Verify that you have Docker installed and running. The following command displays the Docker version if Docker is running.
```bash
docker --version
```
-Finally, ensure that your Azure Container Registry is connected. To do this, select the Docker logo in the Activity Bar, then navigate to **REGISTRIES**.
+## Create and build image
+
+1. In Visual Studio Code, open an empty folder and add a file called `Dockerfile`. In the Dockerfile, paste in the content for your desired language framework:
+
+# [.NET](#tab/dotnet)
+
+<!-- https://mcr.microsoft.com/v2/appsvc%2Fdotnetcore/tags/list -->
+```dockerfile
+FROM mcr.microsoft.com/appsvc/dotnetcore:lts
+
+ENV PORT 8080
+EXPOSE 8080
+
+ENV ASPNETCORE_URLS "http://*:${PORT}"
+
+ENTRYPOINT ["dotnet", "/defaulthome/hostingstart/hostingstart.dll"]
+```
+
+In this Dockerfile, the parent image is one of the built-in .NET containers of App Service. You can find the source files for it [in the Azure-App-Service/ImageBuilder GitHub repository, under GenerateDockerFiles/dotnetcore](https://github.com/Azure-App-Service/ImageBuilder/tree/master/GenerateDockerFiles/dotnetcore). Its [Dockerfile](https://github.com/Azure-App-Service/ImageBuilder/blob/master/GenerateDockerFiles/dotnetcore/debian-9/Dockerfile) copies a simple .NET app into `/defaulthome/hostingstart`. Your Dockerfile simply starts that app.
+
+# [Node.js](#tab/node)
+
+<!-- https://mcr.microsoft.com/v2/appsvc%2Fnode/tags/list -->
+```dockerfile
+FROM mcr.microsoft.com/appsvc/node:10-lts
+
+ENV HOST 0.0.0.0
+ENV PORT 8080
+EXPOSE 8080
+
+ENTRYPOINT ["pm2", "start", "--no-daemon", "/opt/startup/default-static-site.js"]
+```
+
+In this Dockerfile, the parent image is one of the built-in Node.js containers of App Service. You can find the source files for it [in the Azure-App-Service/ImageBuilder GitHub repository, under GenerateDockerFiles/node/node-template](https://github.com/Azure-App-Service/ImageBuilder/tree/master/GenerateDockerFiles/node/node-template). Its [Dockerfile](https://github.com/Azure-App-Service/ImageBuilder/blob/master/GenerateDockerFiles/node/node-template/Dockerfile) copies a simple Node.js app into `/opt/startup`. Your Dockerfile simply starts that app using PM2, which is already installed by the parent image.
+
+# [Python](#tab/python)
+
+<!-- https://mcr.microsoft.com/v2/appsvc%2Fpython/tags/list -->
+```dockerfile
+FROM mcr.microsoft.com/appsvc/python:latest
+
+ENV PORT 8080
+EXPOSE 8080
+
+ENTRYPOINT ["gunicorn", "--timeout", "600", "--access-logfile", "'-'", "--error-logfile", "'-'", "--chdir=/opt/defaultsite", "application:app"]
+```
+
+In this Dockerfile, the parent image is one of the built-in Python containers of App Service. You can find the source files for it [in the Azure-App-Service/ImageBuilder GitHub repository, under GenerateDockerFiles/python/template-3.9](https://github.com/Azure-App-Service/ImageBuilder/tree/master/GenerateDockerFiles/python/template-3.9). Its [Dockerfile](https://github.com/Azure-App-Service/ImageBuilder/blob/master/GenerateDockerFiles/python/template-3.9/Dockerfile) copies a simple Python app into `/opt/defaultsite`. Your Dockerfile simply starts that app using Gunicorn, which is already installed by the parent image.
-![Screenshot shows the Registries value with Azure expanded and a file with the dot i o filename extension.](./media/quickstart-docker/registries.png)
+# [Java](#tab/java)
-## Deploy the image to Azure App Service
+<!-- https://mcr.microsoft.com/v2/azure-app-service%2Fjava/tags/list -->
+```dockerfile
+FROM mcr.microsoft.com/azure-app-service/java:11-java11_stable
-Now that everything is configured, you can deploy your image to [Azure App Service](https://azure.microsoft.com/services/app-service/) directly from the Docker extension explorer.
+ENV PORT 80
+EXPOSE 80
-Find the image under the **Registries** node in the **DOCKER** explorer, and expand it to show its tags. Right-click a tag and then select **Deploy Image to Azure App Service**.
+ENTRYPOINT ["java", "-Dserver.port=80", "-jar", "/tmp/appservice/parkingpage.jar"]
+```
+
+In this Dockerfile, the parent image is one of the built-in Java containers of App Service. You can find the source files for it [in the Azure-App-Service/java GitHub repository, under java/tree/dev/java11-alpine](https://github.com/Azure-App-Service/java/tree/dev/java11-alpine). Its [Dockerfile](https://github.com/Azure-App-Service/java/blob/dev/java11-alpine/Dockerfile) copies a simple Java app into `/tmp/appservice`. Your Dockerfile simply starts that app.
+
+--
+
+2. [Open the Command Palette](https://code.visualstudio.com/docs/getstarted/userinterface#_command-palette), and type **Docker Images: Build Image**. Press **Enter** to run the command.
+
+3. In the image tag box, specify the tag you want in the following format: `<acr-name>.azurecr.io/<image-name>:<tag>`, where `<acr-name>` is the name of the container registry you created. Press **Enter**. A comparable Docker CLI command is sketched after these steps.
+
+4. When the image finishes building, click **Refresh** at the top of the **IMAGES** explorer and verify the image is built successfully.
-From here, follow the prompts to choose a subscription, a globally unique app name, a Resource Group, and an App Service Plan. Choose **B1 Basic** for the pricing tier, and a region.
+ ![Screenshot shows the built image with tag.](./media/quickstart-docker/built-image.png)
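If you prefer a terminal, the equivalent Docker CLI build looks roughly like the following; this is a sketch, and `<acr-name>`, `<image-name>`, and `<tag>` are placeholders for your own values:

```bash
# Build the image and tag it for your Azure container registry in one step.
# Run this from the folder that contains the Dockerfile.
docker build -t <acr-name>.azurecr.io/<image-name>:<tag> .
```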
-After deployment, your app is available at `http://<app name>.azurewebsites.net`.
+## Deploy to container registry
+
+1. In the Activity Bar, click the **Docker** icon. In the **IMAGES** explorer, find the image you just built.
+1. Expand the image, right-click on the tag you want, and click **Push**.
+1. Make sure the image tag begins with `<acr-name>.azurecr.io` and press **Enter**.
+1. When Visual Studio Code finishes pushing the image to your container registry, click **Refresh** at the top of the **REGISTRIES** explorer and verify the image is pushed successfully. A CLI alternative is sketched after these steps.
+
+ ![Screenshot shows the image deployed to Azure container registry.](./media/quickstart-docker/image-in-registry.png)
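As a rough CLI alternative to the **Push** command, assuming the Azure CLI is installed and you're signed in (`<acr-name>`, `<image-name>`, and `<tag>` are placeholders):

```bash
# Authenticate Docker to the Azure container registry, then push the tagged image.
az acr login --name <acr-name>
docker push <acr-name>.azurecr.io/<image-name>:<tag>
```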
+
+## Deploy to App Service
+
+1. In the **REGISTRIES** explorer, expand the image, right-click the tag, and click **Deploy image to Azure App Service**.
+1. Follow the prompts to choose a subscription, a globally unique app name, a resource group, and an App Service plan. Choose **B1 Basic** for the pricing tier, and a region near you.
+
+After deployment, your app is available at `http://<app-name>.azurewebsites.net`.
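The prompts above are the path this quickstart uses. If you'd rather script the same step, a minimal Azure CLI sketch follows; the plan, app, group, and image names are placeholders, and a private registry may additionally need credentials configured (for example, with `az webapp config container set`):

```bash
# Create a Linux App Service plan in the B1 Basic tier, then a web app from the container image.
az appservice plan create --name <plan-name> --resource-group <resource-group> --sku B1 --is-linux
az webapp create --name <app-name> --resource-group <resource-group> --plan <plan-name> \
    --deployment-container-image-name <acr-name>.azurecr.io/<image-name>:<tag>
```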
A **Resource Group** is a named collection of all your application's resources in Azure. For example, a Resource Group can contain a reference to a website, a database, and an Azure Function.
An **App Service Plan** defines the physical resources that will be used to host your website.
## Browse the website
-The **Output** panel will open during deployment to indicate the status of the operation. When the operation completes, find the app you created in the **APP SERVICE** explorer, right-click it, then select **Browse Website** to open the site in your browser.
+The **Output** panel shows the status of the deployment operations. When the operation completes, click **Open Site** in the pop-up notification to open the site in your browser.
> [!div class="nextstepaction"]
> [I ran into an issue](https://www.research.net/r/PWZWZ52?tutorial=quickstart-docker&step=deploy-app)

## Next steps
-Congratulations, you've successfully completed this quickstart!
+Congratulations, you've successfully completed this quickstart.
+
+The App Service app pulls from the container registry every time it starts. If you rebuild your image, you just need to push it to your container registry, and the app pulls in the updated image when it restarts. To tell your app to pull in the updated image immediately, restart it.
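For example, a hedged Azure CLI one-liner for the restart (the app and group names are placeholders):

```bash
# Restart the app so it pulls the updated image from the registry on startup.
az webapp restart --name <app-name> --resource-group <resource-group>
```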
-Next, check out the other Azure extensions.
+> [!div class="nextstepaction"]
+> [Configure custom container](configure-custom-container.md)
+
+> [!div class="nextstepaction"]
+> [Custom container tutorial](tutorial-custom-container.md)
+
+> [!div class="nextstepaction"]
+> [Multi-container app tutorial](tutorial-multi-container-app.md)
+
+Other Azure extensions:
* [Cosmos DB](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-cosmosdb)
* [Azure Functions](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions)
* [Azure CLI Tools](https://marketplace.visualstudio.com/items?itemName=ms-vscode.azurecli)
* [Azure Resource Manager Tools](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools)
-Or get them all by installing the
-[Azure Tools](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack) extension pack.
-
-Check out other resources:
-
-> [!div class="nextstepaction"]
-> [Configure custom container](configure-custom-container.md)
+* The [Azure Tools](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack) extension pack includes all the extensions above.
::: zone-end
app-service Webjobs Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/webjobs-create.md
ms.assetid: af01771e-54eb-4aea-af5f-f883ff39572b Previously updated : 10/16/2018 Last updated : 6/25/2021
adobe-target-content: ./webjobs-create-ieux
# Run background tasks with WebJobs in Azure App Service
-This article shows how to deploy WebJobs by using the [Azure portal](https://portal.azure.com) to upload an executable or script. For information about how to develop and deploy WebJobs by using Visual Studio, see [Deploy WebJobs using Visual Studio](webjobs-dotnet-deploy-vs.md).
+Deploy WebJobs by using the [Azure portal](https://portal.azure.com) to upload an executable or script. You can run background tasks in Azure App Service.
+
+If you're using Visual Studio 2019 to develop and deploy WebJobs instead of the Azure portal, see [Deploy WebJobs using Visual Studio](webjobs-dotnet-deploy-vs.md).
## Overview

WebJobs is a feature of [Azure App Service](index.yml) that enables you to run a program or script in the same instance as a web app, API app, or mobile app. There is no additional cost to use WebJobs.
-> [!IMPORTANT]
-> WebJobs is not yet supported for App Service on Linux.
-
-The Azure WebJobs SDK can be used with WebJobs to simplify many programming tasks. For more information, see [What is the WebJobs SDK](https://github.com/Azure/azure-webjobs-sdk/wiki).
+You can use the Azure WebJobs SDK with WebJobs to simplify many programming tasks. WebJobs is not yet supported for App Service on Linux. For more information, see [What is the WebJobs SDK](https://github.com/Azure/azure-webjobs-sdk/wiki).
Azure Functions provides another way to run programs and scripts. For a comparison between WebJobs and Functions, see [Choose between Flow, Logic Apps, Functions, and WebJobs](../azure-functions/functions-compare-logic-apps-ms-flow-webjobs.md).
The following table describes the differences between *continuous* and *triggered* WebJobs:
|Continuous |Triggered |
|--|--|
-| Starts immediately when the WebJob is created. To keep the job from ending, the program or script typically does its work inside an endless loop. If the job does end, you can restart it. | Starts only when triggered manually or on a schedule. |
+| Starts immediately when the WebJob is created. To keep the job from ending, the program or script typically does its work inside an endless loop. If the job does end, you can restart it. Typically used with WebJobs SDK. | Starts only when triggered manually or on a schedule. |
| Runs on all instances that the web app runs on. You can optionally restrict the WebJob to a single instance. |Runs on a single instance that Azure selects for load balancing.|
| Supports remote debugging. | Doesn't support remote debugging.|
+| Code is deployed under `\site\wwwroot\app_data\Jobs\Continuous`. | Code is deployed under `\site\wwwroot\app_data\Jobs\Triggered`. |
[!INCLUDE [webjobs-always-on-note](../../includes/webjobs-always-on-note.md)]
when making changes in one don't forget the other two.
-->

> [!IMPORTANT]
-> If you have source control configured with your application, the Webjobs should be deployed as part of the source control integration. Once source control is configured with your application a WebJob cannot be add from the Azure Portal.
+> When you have source control configured for your application, WebJobs should be deployed as part of the source control integration. After source control is configured for your application, a WebJob can't be added from the Azure portal.
1. In the [Azure portal](https://portal.azure.com), go to the **App Service** page of your App Service web app, API app, or mobile app.
-2. Select **WebJobs**.
+1. In the left pane of your app's **App Service** page, search for and select **WebJobs**.
![Select WebJobs](./media/web-sites-create-web-jobs/select-webjobs.png)
-2. In the **WebJobs** page, select **Add**.
+1. On the **WebJobs** page, select **Add**.
![WebJob page](./media/web-sites-create-web-jobs/wjblade.png)
-3. Use the **Add WebJob** settings as specified in the table.
+1. Fill in the **Add WebJob** settings as specified in the table.
![Screenshot that shows the Add WebJob settings that you need to configure.](./media/web-sites-create-web-jobs/addwjcontinuous.png)
when making changes in one don't forget the other two.
| **Type** | Continuous | The [WebJob types](#webjob-types) are described earlier in this article. |
| **Scale** | Multi instance | Available only for Continuous WebJobs. Determines whether the program or script runs on all instances or just one instance. The option to run on multiple instances doesn't apply to the Free or Shared [pricing tiers](https://azure.microsoft.com/pricing/details/app-service/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). |
-4. Click **OK**.
+1. Select **OK**.
- The new WebJob appears on the **WebJobs** page.
+ The new WebJob appears on the **WebJobs** page. If you see a message that says the WebJob was added, but you don't see it, select **Refresh**.
- ![List of WebJobs](./media/web-sites-create-web-jobs/listallwebjobs.png)
+ ![List of WebJobs](./media/web-sites-create-web-jobs/list-continuous-webjob.png)
-2. To stop or restart a continuous WebJob, right-click the WebJob in the list and click **Stop** or **Start**.
+1. To stop or restart a continuous WebJob, right-click the WebJob in the list and select **Stop** or **Start**.
![Stop a continuous WebJob](./media/web-sites-create-web-jobs/continuousstop.png)
<!-- Several steps in the three "Create..." sections are identical;
when making changes in one don't forget the other two. -->
-1. In the [Azure portal](https://portal.azure.com), go to the **App Service** page of your App Service web app, API app, or mobile app.
+1. In the [Azure portal](https://portal.azure.com), search for and select **App Services**.
+
+1. Select your web app, API app, or mobile app from the list.
-2. Select **WebJobs**.
+1. In the left pane of your app's **App Service** page, select **WebJobs**.
![Select WebJobs](./media/web-sites-create-web-jobs/select-webjobs.png)
-2. In the **WebJobs** page, select **Add**.
+2. On the **WebJobs** page, select **Add**.
![WebJob page](./media/web-sites-create-web-jobs/wjblade.png)
-3. Use the **Add WebJob** settings as specified in the table.
+1. Fill in the **Add WebJob** settings as specified in the table.
![Screenshot that shows the settings that need to be set for creating a manually triggered WebJob.](./media/web-sites-create-web-jobs/addwjtriggered.png)
when making changes in one don't forget the other two.
| Setting | Sample value | Description |
| --- | --- | --- |
| **Name** | myTriggeredWebJob | A name that is unique within an App Service app. Must start with a letter or a number and cannot contain special characters other than "-" and "_".|
| **File Upload** | ConsoleApp.zip | A *.zip* file that contains your executable or script file as well as any supporting files needed to run the program or script. The supported executable or script file types are listed in the [Supported file types](#acceptablefiles) section. |
- | **Type** | Triggered | The [WebJob types](#webjob-types) are described earlier in this article. |
+ | **Type** | Triggered | The [WebJob types](#webjob-types) are described previously in this article. |
| **Triggers** | Manual | |
-4. Click **OK**.
+4. Select **OK**.
- The new WebJob appears on the **WebJobs** page.
+ The new WebJob appears on the **WebJobs** page. If you see a message that says the WebJob was added, but you don't see it, select **Refresh**.
- ![List of WebJobs](./media/web-sites-create-web-jobs/listallwebjobs.png)
+ ![List of WebJobs-triggered](./media/web-sites-create-web-jobs/list-triggered-webjob.png)
-7. To run the WebJob, right-click its name in the list and click **Run**.
+7. To run the WebJob, right-click its name in the list and select **Run**.
![Run WebJob](./media/web-sites-create-web-jobs/runondemand.png) ## <a name="CreateScheduledCRON"></a> Create a scheduled WebJob
+A scheduled WebJob is also triggered. You can schedule the trigger to fire automatically on a schedule you specify.
+
<!-- Several steps in the three "Create..." sections are identical; when making changes in one don't forget the other two. -->
-1. In the [Azure portal](https://portal.azure.com), go to the **App Service** page of your App Service web app, API app, or mobile app.
+1. In the [Azure portal](https://portal.azure.com), search for and select **App Services**.
+
+1. Select your web app, API app, or mobile app from the list.
-2. Select **WebJobs**.
+1. In the left pane of your app's **App Service** page, select **WebJobs**.
![Select WebJobs](./media/web-sites-create-web-jobs/select-webjobs.png)
-2. In the **WebJobs** page, select **Add**.
+1. On the **WebJobs** page, select **Add**.
![WebJob page](./media/web-sites-create-web-jobs/wjblade.png)
-3. Use the **Add WebJob** settings as specified in the table.
+3. Fill in the **Add WebJob** settings as specified in the table.
![Add WebJob page](./media/web-sites-create-web-jobs/addwjscheduled.png)
when making changes in one don't forget the other two.
| **Triggers** | Scheduled | For the scheduling to work reliably, enable the Always On feature. Always On is available only in the Basic, Standard, and Premium pricing tiers.|
| **CRON Expression** | 0 0/20 * * * * | [CRON expressions](#ncrontab-expressions) are described in the following section. |
-4. Click **OK**.
+4. Select **OK**.
- The new WebJob appears on the **WebJobs** page.
+ The new WebJob appears on the **WebJobs** page. If you see a message that says the WebJob was added, but you don't see it, select **Refresh**.
- ![List of WebJobs](./media/web-sites-create-web-jobs/listallwebjobs.png)
+ ![List of WebJobs-scheduled](./media/web-sites-create-web-jobs/list-scheduled-webjob.png)
## NCRONTAB expressions
To learn more, see [Scheduling a triggered WebJob](webjobs-dotnet-deploy-vs.md#s
[!INCLUDE [webjobs-cron-timezone-note](../../includes/webjobs-cron-timezone-note.md)]
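For reference, an NCRONTAB expression has six space-separated fields: `{second} {minute} {hour} {day} {month} {day-of-week}`. The sample expression `0 0/20 * * * *` used earlier fires at second 0 of every 20th minute of every hour, every day.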
+## Manage WebJobs
+
+You can manage the running state of individual WebJobs in your site in the [Azure portal](https://portal.azure.com). Just go to **Settings** > **WebJobs**, choose the WebJob, and then start or stop it. You can also view and modify the password of the webhook that runs the WebJob.
+
+You can also [add an application setting](configure-common.md#configure-app-settings) named `WEBJOBS_STOPPED` with a value of `1` to stop all WebJobs running on your site. This can be handy as a way to prevent conflicting WebJobs from running both in staging and production slots. You can similarly use a value of `1` for the `WEBJOBS_DISABLE_SCHEDULE` setting to disable triggered WebJobs in the site or a staging slot. For slots, remember to enable the **Deployment slot setting** option so that the setting itself doesn't get swapped.
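As a sketch of how you might set these values from the Azure CLI (the app and group names are placeholders, and the slot is assumed to be named *staging*):

```bash
# Stop all WebJobs on the production site.
az webapp config appsettings set --name <app-name> --resource-group <resource-group> \
    --settings WEBJOBS_STOPPED=1

# Disable scheduled triggered WebJobs in a staging slot only.
# --slot-settings makes the value sticky to the slot so it doesn't get swapped.
az webapp config appsettings set --name <app-name> --resource-group <resource-group> \
    --slot staging --slot-settings WEBJOBS_DISABLE_SCHEDULE=1
```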
+ ## <a name="ViewJobHistory"></a> View the job history
-1. Select the WebJob you want to see history for, and then select the **Logs** button.
+1. Select the WebJob, and then select **Logs** to see its history.
![Logs button](./media/web-sites-create-web-jobs/wjbladelogslink.png)
app-service Webjobs Dotnet Deploy Vs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/webjobs-dotnet-deploy-vs.md
ms.assetid: a3a9d320-1201-4ac8-9398-b4c9535ba755 Previously updated : 07/30/2020 Last updated : 06/24/2021
app-service Webjobs Sdk Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/webjobs-sdk-get-started.md
Title: Get started with the WebJobs SDK
-description: Introduction to the WebJobs SDK for event-driven background processing. Learn how to access data in Azure services and third-party services.
+ Title: Tutorial for event-driven background processing with the WebJobs SDK
+description: Learn how to enable your web apps to run background tasks. Use this tutorial to get started with the WebJobs SDK.
- ms.devlang: dotnet - Previously updated : 02/18/2019 Last updated : 06/25/2021 ++ #Customer intent: As an App Services developer, I want use the Azure portal to add scheduled tasks to my web app in Azure.
-# Get started with the Azure WebJobs SDK for event-driven background processing
+# Tutorial: Get started with the Azure WebJobs SDK for event-driven background processing
+
+Get started with the Azure WebJobs SDK for Azure App Service to enable your web apps to run background tasks and scheduled tasks and to respond to events.
+
+Use Visual Studio 2019 to create a .NET Core console app that uses the WebJobs SDK to respond to Azure Storage Queue messages, run the project locally, and finally deploy it to Azure.
-This article shows how to use Visual Studio 2019 to create an Azure WebJobs SDK project, run it locally, and then deploy it to [Azure App Service](overview.md). Version 3.x of the WebJobs SDK supports both .NET Core and .NET Framework console apps. To learn more about working with the WebJobs SDK, see [How to use the Azure WebJobs SDK for event-driven background processing](webjobs-sdk-how-to.md).
+In this tutorial, you will learn how to:
-This article shows you how to deploy WebJobs as a .NET Core console app. To deploy WebJobs as a .NET Framework console app, see [WebJobs as .NET Framework console apps](webjobs-dotnet-deploy-vs.md#webjobs-as-net-framework-console-apps). If you are interested in WebJobs SDK version 2.x, which only supports .NET Framework, see [Develop and deploy WebJobs using Visual Studio - Azure App Service](webjobs-dotnet-deploy-vs.md).
+> [!div class="checklist"]
+> * Create a console app
+> * Add a function
+> * Test locally
+> * Deploy to Azure
+> * Enable Application Insights logging
+> * Add input/output bindings
## Prerequisites
-* [Install Visual Studio 2019](/visualstudio/install/) with the **Azure development** workload. If you already have Visual Studio but don't have that workload, add the workload by selecting **Tools > Get Tools and Features**.
+* Visual Studio 2019 with the **Azure development** workload. [Install Visual Studio 2019](/visualstudio/install/).
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
+
+## Create a console app
+In this section, you start by creating a project in Visual Studio 2019. Next, you'll add tools for Azure development, code publishing, and functions that listen for triggers and call functions. Last, you'll set up console logging that disables a legacy monitoring tool and enables a console provider with default filtering.
+
+>[!NOTE]
+>The procedures in this article are verified for creating a .NET Core console app that runs on .NET Core 3.1.
+
+### Create a project
-* You must have [an Azure account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) to publish your WebJobs SDK project to Azure.
+1. In Visual Studio, select **File** > **New** > **Project**.
-## Create a project
+1. Under **Create a new project**, select **Console Application (C#)**, and then select **Next**.
-1. In Visual Studio, select **Create a New Project**.
+1. Under **Configure your new project**, name the project *WebJobsSDKSample*, and then select **Next**.
-2. Select **Console App (.NET Core)**.
+1. Choose your **Target framework** and select **Create**. This tutorial has been verified using .NET Core 3.1.
-3. Name the project *WebJobsSDKSample*, and then select **Create**.
+### Install WebJobs NuGet packages
- ![New Project dialog](./media/webjobs-sdk-get-started/new-project.png)
+Install the latest WebJobs NuGet package. This package includes Microsoft.Azure.WebJobs (WebJobs SDK), which lets you publish your function code to WebJobs in Azure App Service.
-## WebJobs NuGet packages
+1. Get the latest stable 3.x version of the [Microsoft.Azure.WebJobs.Extensions NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions/).
-1. Install the latest stable 3.x version of the [`Microsoft.Azure.WebJobs.Extensions` NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions/), which includes `Microsoft.Azure.WebJobs`.
+2. In Visual Studio, go to **Tools** > **NuGet Package Manager**.
- Here's the **Package Manager Console** command:
+3. Select **Package Manager Console**. You'll see a list of NuGet cmdlets, a link to documentation, and a `PM>` entry point.
+
+4. In the following command, replace `<3_X_VERSION>` with the current version number you found in step 1.
   ```powershell
   Install-Package Microsoft.Azure.WebJobs.Extensions -version <3_X_VERSION>
   ```
- In this command, replace `<3_X_VERSION>` with a supported version of the package.
-
-## Create the Host
+5. In the **Package Manager Console**, execute the command. The extension and its dependencies are installed automatically.
+
+### Create the Host
The host is the runtime container for functions that listens for triggers and calls functions. The following steps create a host that implements [`IHost`](/dotnet/api/microsoft.extensions.hosting.ihost), which is the Generic Host in ASP.NET Core.
-1. In *Program.cs*, add these `using` statements:
+1. Select the **Program.cs** tab and add these `using` statements:
   ```cs
   using System.Threading.Tasks;
   using Microsoft.Extensions.Hosting;
   ```
-1. Replace the `Main` method with the following code:
+1. Also under **Program.cs**, replace the `Main` method with the following code:
   ```cs
   static async Task Main()
   {
       var builder = new HostBuilder();
       builder.ConfigureWebJobs(b =>
       {
           b.AddAzureStorageCoreServices();
       });
       var host = builder.Build();
       using (host)
       {
           await host.RunAsync();
       }
   }
   ```
-In ASP.NET Core, host configurations are set by calling methods on the [`HostBuilder`](/dotnet/api/microsoft.extensions.hosting.hostbuilder) instance. For more information, see [.NET Generic Host](/aspnet/core/fundamentals/host/generic-host). The `ConfigureWebJobs` extension method initializes the WebJobs host. In `ConfigureWebJobs`, you initialize specific WebJobs extensions and set properties of those extensions.
+In ASP.NET Core, host configurations are set by calling methods on the [`HostBuilder`](/dotnet/api/microsoft.extensions.hosting.hostbuilder) instance. For more information, see [.NET Generic Host](/aspnet/core/fundamentals/host/generic-host). The `ConfigureWebJobs` extension method initializes the WebJobs host. In `ConfigureWebJobs`, initialize specific binding extensions, such as the Storage binding extension, and set properties of those extensions.
-## Enable console logging
+### Enable console logging
-In this section, you set up console logging that uses the [ASP.NET Core logging framework](/aspnet/core/fundamentals/logging).
+Set up console logging that uses the [ASP.NET Core logging framework](/aspnet/core/fundamentals/logging). This framework, Microsoft.Extensions.Logging, includes an API that works with a variety of built-in and third-party logging providers.
-1. Install the latest stable version of the [`Microsoft.Extensions.Logging.Console` NuGet package](https://www.nuget.org/packages/Microsoft.Extensions.Logging.Console/), which includes `Microsoft.Extensions.Logging`.
+1. Get the latest stable version of the [`Microsoft.Extensions.Logging.Console` NuGet package](https://www.nuget.org/packages/Microsoft.Extensions.Logging.Console/), which includes `Microsoft.Extensions.Logging`.
- Here's the **Package Manager Console** command:
+2. In the following command, replace `<3_X_VERSION>` with the current version number you found in step 1. Each type of NuGet Package has a unique version number.
```powershell Install-Package Microsoft.Extensions.Logging.Console -version <3_X_VERSION> ```
- In this command, replace `<3_X_VERSION>` with a supported 3.x version of the package.
+3. In the **Package Manager Console**, fill in the current version number and execute the command. The extension list appears and automatically installs.
-1. In *Program.cs*, add a `using` statement:
+4. Under the tab **Program.cs**, add this `using` statement:
   ```cs
   using Microsoft.Extensions.Logging;
   ```
-1. Call the [`ConfigureLogging`](/dotnet/api/microsoft.aspnetcore.hosting.webhostbuilderextensions.configurelogging) method on [`HostBuilder`](/dotnet/api/microsoft.extensions.hosting.hostbuilder). The [`AddConsole`](/dotnet/api/microsoft.extensions.logging.consoleloggerextensions.addconsole) method adds console logging to the configuration.
+5. Continuing under **Program.cs**, add the [`ConfigureLogging`](/dotnet/api/microsoft.aspnetcore.hosting.webhostbuilderextensions.configurelogging) method to [`HostBuilder`](/dotnet/api/microsoft.extensions.hosting.hostbuilder), before the `Build` command. The [`AddConsole`](/dotnet/api/microsoft.extensions.logging.consoleloggerextensions.addconsole) method adds console logging to the configuration.
   ```cs
   builder.ConfigureLogging((context, b) =>
   {
       b.AddConsole();
   });
   ```
- This update does the following:
+ This addition makes these changes:
* Disables [dashboard logging](https://github.com/Azure/azure-webjobs-sdk/wiki/Queues#logs). The dashboard is a legacy monitoring tool, and dashboard logging is not recommended for high-throughput production scenarios. * Adds the console provider with default [filtering](webjobs-sdk-how-to.md#log-filtering). Now, you can add a function that is triggered by messages arriving in an Azure Storage queue.
-## Install the Storage binding extension
+## Add a function
+
+A function is a unit of code that runs on a schedule, is triggered based on events, or is run on demand. A trigger listens to a service event. In the context of the WebJobs SDK, *triggered* doesn't refer to the deployment mode. Event-driven or scheduled WebJobs created using the SDK should always be deployed as continuous WebJobs with "Always on" enabled.
+
+In this section, you create a function triggered by messages in an Azure Storage queue. First, you need to add a binding extension to connect to Azure Storage.
+
+### Install the Storage binding extension
-Starting with version 3.x, you must explicitly install the Storage binding extension required by the WebJobs SDK. In prior versions, the Storage bindings were included in the SDK.
+Starting with version 3 of the WebJobs SDK, to connect to Azure Storage services you must install a separate Storage binding extension package.
-1. Install the latest stable version of the [Microsoft.Azure.WebJobs.Extensions.Storage](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage) NuGet package, version 3.x.
+1. Get the latest stable version of the [Microsoft.Azure.WebJobs.Extensions.Storage](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage) NuGet package, version 3.x.
- Here's the **Package Manager Console** command:
+1. In the following command, replace `<3_X_VERSION>` with the current version number you found in step 1. Each type of NuGet Package has a unique version number.
   ```powershell
   Install-Package Microsoft.Azure.WebJobs.Extensions.Storage -Version <3_X_VERSION>
   ```
-
- In this command, replace `<3_X_VERSION>` with a supported version of the package.
+1. In the **Package Manager Console**, execute the command with the current version number at the `PM>` entry point.
-2. In the `ConfigureWebJobs` extension method, call the `AddAzureStorage` method on the [`HostBuilder`](/dotnet/api/microsoft.extensions.hosting.hostbuilder) instance to initialize the Storage extension. At this point, the `ConfigureWebJobs` method looks like the following example:
+1. Continuing in **Program.cs**, in the `ConfigureWebJobs` extension method, add the `AddAzureStorage` method on the [`HostBuilder`](/dotnet/api/microsoft.extensions.hosting.hostbuilder) instance (before the `Build` command) to initialize the Storage extension. At this point, the `ConfigureWebJobs` method looks like this:
   ```cs
   builder.ConfigureWebJobs(b =>
- {
- b.AddAzureStorageCoreServices();
- b.AddAzureStorage();
- });
+ {
+ b.AddAzureStorageCoreServices();
+ b.AddAzureStorage();
+ });
+ ```
+1. Add the following code in the `Main` method after the `builder` is instantiated:
+
+ ```csharp
+ builder.UseEnvironment(EnvironmentName.Development);
+ ```
+
+    Running in [development mode](webjobs-sdk-how-to.md#host-development-settings) reduces the [queue polling exponential backoff](../azure-functions/functions-bindings-storage-queue-trigger.md?tabs=csharp#polling-algorithm) that can otherwise significantly increase how long it takes the runtime to find the message and invoke the function. You should remove this line of code or switch to `Production` when you're done with development and testing.
+
+ The `Main` method should now look like the following example:
+
+ ```csharp
+ static async Task Main()
+ {
+ var builder = new HostBuilder();
+ builder.UseEnvironment(EnvironmentName.Development);
+    builder.ConfigureWebJobs(b =>
+    {
+        b.AddAzureStorageCoreServices();
+        b.AddAzureStorage();
+    });
+    builder.ConfigureLogging((context, b) =>
+    {
+        b.AddConsole();
+    });
+ var host = builder.Build();
+ using (host)
+ {
+ await host.RunAsync();
+ }
+ }
```
-## Create a function
+### Create a queue triggered function
+
+The `QueueTrigger` attribute tells the runtime to call this function when a new message is written on an Azure Storage queue called `queue`. The contents of the queue message are provided to the method code in the `message` parameter. The body of the method is where you process the trigger data. In this example, the code just logs the message.
+
+1. In Solution Explorer, right-click the project, select **Add** > **New Item**, and then select **Class**.
-1. Right-click the project, select **Add** > **New Item...**, choose **Class**, name the new C# class file *Functions.cs*, and select **Add**.
+2. Name the new C# class file *Functions.cs* and select **Add**.
-1. In Functions.cs, replace the generated template with the following code:
+3. In *Functions.cs*, replace the generated template with the following code:
   ```cs
   using Microsoft.Azure.WebJobs;
   using Microsoft.Extensions.Logging;

   namespace WebJobsSDKSample
   {
       public class Functions
       {
           public static void ProcessQueueMessage([QueueTrigger("queue")] string message, ILogger logger)
           {
               logger.LogInformation(message);
           }
       }
   }
   ```
- The `QueueTrigger` attribute tells the runtime to call this function when a new message is written on an Azure Storage queue called `queue`. The contents of the queue message are provided to the method code in the `message` parameter. The body of the method is where you process the trigger data. In this example, the code just logs the message.
-
- The `message` parameter doesn't have to be a string. You can also bind to a JSON object, a byte array, or a [CloudQueueMessage](/dotnet/api/microsoft.azure.storage.queue.cloudqueuemessage) object. [See Queue trigger usage](../azure-functions/functions-bindings-storage-queue-trigger.md?tabs=csharp#usage). Each binding type (such as queues, blobs, or tables) has a different set of parameter types that you can bind to.
-
-## Create a storage account
-
-The Azure Storage Emulator that runs locally doesn't have all of the features that the WebJobs SDK needs. So in this section you create a storage account in Azure and configure the project to use it. If you already have a storage account, skip down to step 6.
+ When a message is added to a queue named `queue`, the function executes and the `message` string is written to the logs. The queue being monitored is in the default Azure Storage account, which you create next.
+
+The `message` parameter doesn't have to be a string. You can also bind to a JSON object, a byte array, or a [CloudQueueMessage](/dotnet/api/microsoft.azure.storage.queue.cloudqueuemessage) object. [See Queue trigger usage](/azure/azure-functions/functions-bindings-storage-queue-trigger?tabs=csharp#usage). Each binding type (such as queues, blobs, or tables) has a different set of parameter types that you can bind to.
-1. Open **Server Explorer** in Visual studio and sign in to Azure. Right-click the **Azure** node, and then select **Connect to Microsoft Azure Subscription**.
+### Create an Azure storage account
- ![Sign in to Azure](./media/webjobs-sdk-get-started/sign-in.png)
+The Azure Storage Emulator that runs locally doesn't have all of the features that the WebJobs SDK needs. You'll create a storage account in Azure and configure the project to use it.
-1. Under the **Azure** node in **Server Explorer**, right-click **Storage**, and then select **Create Storage account**.
+To learn how to create a general-purpose v2 storage account, see [Create an Azure Storage account](../storage/common/storage-account-create.md?tabs=azure-portal).
- ![Create Storage account menu](./media/webjobs-sdk-get-started/create-storage-account-menu.png)
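If you prefer the CLI over the portal steps in that quickstart, here's a minimal sketch (the account name, group, and location are placeholders):

```bash
# Create a general-purpose v2 storage account with locally redundant storage.
az storage account create --name <storage-account> --resource-group <resource-group> \
    --location <location> --sku Standard_LRS --kind StorageV2
```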
+### Locate and copy your connection string
+A connection string is required to configure storage. Keep this connection string for the next steps.
-1. In the **Create Storage Account** dialog box, enter a unique name for the storage account.
+1. In the [Azure portal](https://portal.azure.com), navigate to your storage account and select **Settings**.
+1. In **Settings**, select **Access keys**.
+1. For the **Connection string** under **key1**, select the **Copy to clipboard** icon. A CLI alternative is sketched after these steps.
-1. Choose the same **Region** that you created your App Service app in, or a region close to you.
+ ![key](./media/webjobs-sdk-get-started/connection-key.png)
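The CLI alternative mentioned above is a one-liner (the names are placeholders):

```bash
# Print the storage account connection string without opening the portal.
az storage account show-connection-string --name <storage-account> \
    --resource-group <resource-group> --output tsv
```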
-1. Select **Create**.
-
- ![Create Storage account](./media/webjobs-sdk-get-started/create-storage-account.png)
-
-1. Under the **Storage** node in **Server Explorer**, select the new Storage account. In the **Properties** window, select the ellipsis (**...**) at the right of the **Connection String** value field.
-
- ![Connection String ellipsis](./media/webjobs-sdk-get-started/conn-string-ellipsis.png)
-
-1. Copy the connection string, and save this value somewhere that you can copy it again readily.
-
- ![Copy connection string](./media/webjobs-sdk-get-started/copy-key.png)
-
-## Configure storage to run locally
+### Configure storage to run locally
The WebJobs SDK looks for the storage connection string in the Application Settings in Azure. When you run locally, it looks for this value in the local configuration file or in environment variables.
-1. Right-click the project, select **Add** > **New Item...**, choose **JavaScript JSON configuration file**, name the new file *appsettings.json* file, and select **Add**.
+1. Right-click the project, select **Add** > **New Item**, select **JavaScript JSON configuration file**, name the new file *appsettings.json*, and select **Add**.
1. In the new file, add an `AzureWebJobsStorage` field, as in the following example:
   ```json
   {
       "AzureWebJobsStorage": "{storage connection string}"
   }
   ```
-1. Replace *{storage connection string}* with the connection string that you copied earlier.
+1. Replace *{storage connection string}* with the connection string that you copied previously.
-1. Select the *appsettings.json* file in Solution Explorer and in the **Properties** window, set **Copy to Output Directory** to **Copy if newer**.
+1. Select the *appsettings.json* file in Solution Explorer and in the **Properties** window, set the **Copy to Output Directory** action to **Copy if newer**.
-Later, you'll add the same connection string app setting in your app in Azure App Service.
+Because this file contains a connection string secret, you shouldn't store the file in a remote code repository. After publishing your project to Azure, you can add the same connection string app setting in your app in Azure App Service.
## Test locally
-In this section, you build and run the project locally and trigger the function by creating a queue message.
-
-1. Press **Ctrl+F5** to run the project.
+Build and run the project locally and create a message queue to trigger the function.
- The console shows that the runtime found your function and is waiting for queue messages to trigger it. The following output is generated by the v3.x host:
-
- ```console
- info: Microsoft.Azure.WebJobs.Hosting.JobHostService[0]
- Starting JobHost
- info: Host.Startup[0]
- Found the following functions:
- WebJobsSDKSample.Functions.ProcessQueueMessage
-
- info: Host.Startup[0]
- Job host started
- Application started. Press Ctrl+C to shut down.
- Hosting environment: Development
- Content root path: C:\WebJobsSDKSample\WebJobsSDKSample\bin\Debug\netcoreapp2.1\
- ```
-
-1. Close the console window.
-
-1. In **Server Explorer** in Visual Studio, expand the node for your new storage account, and then right-click **Queues**.
+1. In **Cloud Explorer** in Visual Studio, expand the node for your new storage account, and then right-click **Queues**.
1. Select **Create Queue**.
In this section, you build and run the project locally and trigger the function
![Screenshot that shows where you create the queue and name it "queue". ](./media/webjobs-sdk-get-started/create-queue.png)
-1. Right-click the node for the new queue, and then select **View Queue**.
+1. Right-click the node for the new queue, and then select **Open**.
1. Select the **Add Message** icon.
In this section, you build and run the project locally and trigger the function
![Create queue](./media/webjobs-sdk-get-started/hello-world-text.png)
-1. Run the project again.
-
- Because you used the `QueueTrigger` attribute in the `ProcessQueueMessage` function, the WeJobs SDK runtime listens for queue messages when it starts up. It finds a new queue message in the queue named *queue* and calls the function.
-
- Due to [queue polling exponential backoff](../azure-functions/functions-bindings-storage-queue-trigger.md?tabs=csharp#polling-algorithm), it might take as long as 2 minutes for the runtime to find the message and invoke the function. This wait time can be reduced by running in [development mode](webjobs-sdk-how-to.md#host-development-settings).
+1. Press **Ctrl+F5** to run the project.
- The console output looks like this:
+ The console shows that the runtime found your function. Because you used the `QueueTrigger` attribute in the `ProcessQueueMessage` function, the WebJobs runtime listens for messages in the queue named `queue`. When it finds a new message in this queue, the runtime calls the function, passing in the message string value.
- ```console
- info: Function.ProcessQueueMessage[0]
- Executing 'Functions.ProcessQueueMessage' (Reason='New queue message detected on 'queue'.', Id=2c319369-d381-43f3-aedf-ff538a4209b8)
- info: Function.ProcessQueueMessage[0]
- Trigger Details: MessageId: b00a86dc-298d-4cd2-811f-98ec39545539, DequeueCount: 1, InsertionTime: 1/18/2019 3:28:51 AM +00:00
- info: Function.ProcessQueueMessage.User[0]
- Hello World!
- info: Function.ProcessQueueMessage[0]
- Executed 'Functions.ProcessQueueMessage' (Succeeded, Id=2c319369-d381-43f3-aedf-ff538a4209b8)
- ```
+1. Go back to the **Queue** window and refresh it. The message is gone, since it has been processed by your function running locally.
1. Close the console window.
-1. Go back to the Queue window and refresh it. The message is gone, since it has been processed by your function running locally.
+It's now time to publish your WebJobs SDK project to Azure.
-## Add Application Insights logging
+## <a name="deploy-as-a-webjob"></a>Deploy to Azure
-When the project runs in Azure, you can't monitor function execution by viewing console output. The monitoring solution we recommend is [Application Insights](../azure-monitor/app/app-insights-overview.md). For more information, see [Monitor Azure Functions](../azure-functions/functions-monitoring.md).
+During deployment, you create an app service instance where you'll run your functions. When you publish a .NET Core console app to App Service in Azure, it automatically runs as a WebJob. To learn more about publishing, see [Develop and deploy WebJobs using Visual Studio](webjobs-dotnet-deploy-vs.md).
-In this section, you do the following tasks to set up Application Insights logging before you deploy to Azure:
+### Create Azure resources
-* Make sure you have an App Service app and an Application Insights instance to work with.
-* Configure the App Service app to use the Application Insights instance and the storage account that you created earlier.
-* Set up the project for logging to Application Insights.
-### Create App Service app and Application Insights instance
+### Enable Always On
-1. If you don't already have an App Service app that you can use, [create one](./quickstart-dotnetcore.md?tabs=netframework48). When you create your app, you can also create a connected Application Insights resource. When you do this, the `APPINSIGHTS_INSTRUMENTATIONKEY` is set for you in your app.
+For a continuous WebJob, you should enable the Always on setting in the site so that your WebJobs run correctly. If you don't enable Always on, the runtime goes idle after a few minutes of inactivity. A CLI equivalent is sketched after the steps below.
-1. If you don't already have an Application Insights resource that you can use, [create one](../azure-monitor/app/create-new-resource.md ). Set **Application type** to **General**, and skip the sections that follow **Copy the instrumentation key**.
+1. In the **Publish** page, select the three dots above **Hosting** to show **Hosting profile section actions** and choose **Open in Azure portal**.
-1. If you already have an Application Insights resource that you want to use, [copy the instrumentation key](../azure-monitor/app/create-new-resource.md#copy-the-instrumentation-key).
+1. Under **Settings**, choose **Configuration** > **General settings**, set **Always on** to **On**, and then select **Save** and **Continue** to restart the site.
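The CLI equivalent referenced above is a single command (the names are placeholders):

```bash
# Enable Always on so the continuous WebJob isn't idled out.
az webapp config set --name <app-name> --resource-group <resource-group> --always-on true
```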
-### Configure app settings
+### Publish the project
-1. In **Server Explorer** in Visual Studio, expand the **App Service** node under **Azure**.
+With the web app created in Azure, it's time to publish the WebJobs project.
-1. Expand the resource group that your App Service app is in, and then right-click your App Service app.
+1. In the **Publish** page under **Hosting**, select the edit button, change the **WebJob Type** to `Continuous`, and then select **Save**. This setting makes sure that the WebJob runs when messages are added to the queue. Triggered WebJobs are typically used only for manual webhooks.
-1. Select **View Settings**.
+1. Select the **Publish** button at the top right corner of the **Publish** page. When the operation completes, your WebJob is running on Azure.
-1. In the **Connection Strings** box, add the following entry.
+### Create a storage connection app setting
- |Name |connection String |Database Type|
- ||||
- |AzureWebJobsStorage | {the Storage connection string that you copied earlier}|Custom|
+You need to create the same storage connection string setting in Azure that you used locally in your appsettings.json config file. This lets you store the connection string more securely, outside of your deployed code.
-1. If the **Application Settings** box doesn't have an Application Insights instrumentation key, add the one that you copied earlier. (The instrumentation key may already be there, depending on how you created the App Service app.)
+1. In your **Publish** profile page, select the three dots above **Hosting** to show **Hosting profile section actions** and choose **Manage Azure App Service settings**.
- |Name |Value |
- |||
- |APPINSIGHTS_INSTRUMENTATIONKEY | {instrumentation key} |
+1. In **Application settings**, choose **+ Add setting**.
-1. Replace *{instrumentation key}* with the instrumentation key from the Application Insights resource that you're using.
+1. In **New app setting name**, type `AzureWebJobsStorage` and select **OK**.
+
+1. In **Remote**, paste in the connection string from your local setting and select **OK**.
-1. Select **Save**.
+The connection string is now set in your app in Azure.
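If you'd rather script this step, here's a hedged CLI sketch (the names and the connection string value are placeholders):

```bash
# Create the AzureWebJobsStorage app setting on the deployed app.
az webapp config appsettings set --name <app-name> --resource-group <resource-group> \
    --settings AzureWebJobsStorage="<storage-connection-string>"
```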
-1. Add the Application Insights connection to the project so that you can run it locally. In the *appsettings.json* file, add an `APPINSIGHTS_INSTRUMENTATIONKEY` field, as in the following example:
+### Trigger the function in Azure
- ```json
- {
- "AzureWebJobsStorage": "{storage connection string}",
- "APPINSIGHTS_INSTRUMENTATIONKEY": "{instrumentation key}"
- }
- ```
+1. Make sure you're not running locally. Close the console window if it's still open. Otherwise, the local instance might be the first to process any queue messages you create.
- Replace *{instrumentation key}* with the instrumentation key from the Application Insights resource that you're using.
+1. In the **Queue** page in Visual Studio, add a message to the queue as before.
-1. Save your changes.
+1. Refresh the **Queue** page, and the new message disappears because it has been processed by the function running in Azure.
-### Add Application Insights logging provider
+## Enable Application Insights logging
-To take advantage of [Application Insights](../azure-monitor/app/app-insights-overview.md) logging, update your logging code to do the following:
+When the WebJob runs in Azure, you can't monitor function execution by viewing console output. To be able to monitor your WebJob, you should create an associated [Application Insights](../azure-monitor/app/app-insights-overview.md) instance when you publish your project.
-* Add an Application Insights logging provider with default [filtering](webjobs-sdk-how-to.md#log-filtering). When running locally, all Information and higher-level logs are written to both the console and Application Insights.
-* Put the [LoggerFactory](./webjobs-sdk-how-to.md#logging-and-monitoring) object in a `using` block to ensure that log output is flushed when the host exits.
+### Create an Application Insights instance
-1. Install the latest stable 3.x version of the [`Microsoft.Azure.WebJobs.Logging.ApplicationInsights` NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Logging.ApplicationInsights/).
+1. In your **Publish** profile page, select the three dots above **Hosting** to show **Hosting profile section actions** and choose **Open in Azure Portal**.
- Here's the **Package Manager Console** command:
+1. In the web app under **Settings**, choose **Application Insights**, and select **Turn on Application Insights**.
- ```powershell
- Install-Package Microsoft.Azure.WebJobs.Logging.ApplicationInsights -Version <3_X_VERSION>
- ```
- In this command, replace `<3_X_VERSION>` with a supported version of the package.
+1. Verify the generated **Resource name** for the instance and the **Location**, and select **Apply**.
-1. Open *Program.cs* and replace the code in the `Main` method with the following code:
+1. Under **Settings**, choose **Configuration** and verify that a new `APPINSIGHTS_INSTRUMENTATIONKEY` was created. This key is used to connect your WebJob instance to Application Insights.
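If you want to double-check from the command line, a sketch (the names are placeholders):

```bash
# List app settings and filter for the Application Insights instrumentation key.
az webapp config appsettings list --name <app-name> --resource-group <resource-group> \
    --query "[?name=='APPINSIGHTS_INSTRUMENTATIONKEY']"
```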
- ```cs
- static async Task Main()
- {
- var builder = new HostBuilder();
- builder.UseEnvironment(EnvironmentName.Development);
- builder.ConfigureWebJobs(b =>
- {
- b.AddAzureStorageCoreServices();
- b.AddAzureStorage();
- });
- builder.ConfigureLogging((context, b) =>
- {
- b.AddConsole();
+To take advantage of [Application Insights](../azure-monitor/app/app-insights-overview.md) logging, you need to update your logging code as well.
- // If the key exists in settings, use it to enable Application Insights.
- string instrumentationKey = context.Configuration["APPINSIGHTS_INSTRUMENTATIONKEY"];
- if (!string.IsNullOrEmpty(instrumentationKey))
- {
- b.AddApplicationInsightsWebJobs(o => o.InstrumentationKey = instrumentationKey);
- }
- });
- var host = builder.Build();
- using (host)
- {
- await host.RunAsync();
- }
- }
- ```
-
- This adds the Application Insights provider to the logging, using the key you added earlier to your app settings.
-
-## Test Application Insights logging
-
-In this section, you run locally again to verify that logging data is now going to Application Insights as well as to the console.
-
-1. Use **Server Explorer** in Visual Studio to create a queue message like you did [earlier](#test-locally), except enter *Hello App Insights!* as the message text.
+### Install the Application Insights extension
-1. Run the project.
+1. Get the latest stable version of the [Microsoft.Azure.WebJobs.Logging.ApplicationInsights](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Logging.ApplicationInsights) NuGet package, version 3.x.
- The WebJobs SDK processes the queue message, and you see the logs in the console window.
+2. In the following command, replace `<3_X_VERSION>` with the current version number you found in step 1. Each type of NuGet Package has a unique version number.
-1. Close the console window.
-
-1. Go to the [Azure portal](https://portal.azure.com/) to view your Application Insights resource. Search for and select **Application Insights**.
+ ```powershell
+ Install-Package Microsoft.Azure.WebJobs.Logging.ApplicationInsights -Version <3_X_VERSION>
+ ```
+3. In the **Package Manager Console**, execute the command with the current version number at the `PM>` entry point.
-1. Choose your Application Insights instance.
+### Initialize the Application Insights logging provider
-1. Select **Search**.
+Open *Program.cs* and add the following initializer in the `ConfigureLogging` method, after the call to `AddConsole`:
- ![Select Search](./media/webjobs-sdk-get-started/select-search.png)
+```csharp
+// If the key exists in settings, use it to enable Application Insights.
+string instrumentationKey = context.Configuration["APPINSIGHTS_INSTRUMENTATIONKEY"];
+if (!string.IsNullOrEmpty(instrumentationKey))
+{
+ b.AddApplicationInsightsWebJobs(o => o.InstrumentationKey = instrumentationKey);
+}
+```
-1. If you don't see the *Hello App Insights!* message, select **Refresh** periodically for several minutes. (Logs don't appear immediately, because it takes a while for the Application Insights client to flush the logs it processes.)
+The `Main` method code should now look like the following example:
- ![Logs in Application Insights](./media/webjobs-sdk-get-started/logs-in-ai.png)
-
-1. Close the console window.
+```csharp
+static async Task Main()
+{
+ var builder = new HostBuilder();
+ builder.UseEnvironment(EnvironmentName.Development);
+ builder.ConfigureWebJobs(b =>
+ {
+ b.AddAzureStorageCoreServices();
+ b.AddAzureStorage();
+ });
+ builder.ConfigureLogging((context, b) =>
+ {
+ b.AddConsole();
-## <a name="deploy-as-a-webjob"></a>Deploy to Azure
+ // If the key exists in settings, use it to enable Application Insights.
+ string instrumentationKey = context.Configuration["APPINSIGHTS_INSTRUMENTATIONKEY"];
+ if (!string.IsNullOrEmpty(instrumentationKey))
+ {
+ b.AddApplicationInsightsWebJobs(o => o.InstrumentationKey = instrumentationKey);
+ }
+ });
+ var host = builder.Build();
+ using (host)
+ {
+ await host.RunAsync();
+ }
+}
+```
-During deployment, you create an app service instance in which to run your functions. When you publish a .NET Core console app to App Service in Azure, it automatically gets run as a WebJob. To learn more about publishing, see [Develop and deploy WebJobs using Visual Studio](webjobs-dotnet-deploy-vs.md).
+This initializes the Application Insights logging provider with default [filtering](webjobs-sdk-how-to.md#log-filtering). When running locally, all Information and higher-level logs are written to both the console and Application Insights.
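If you need different filtering than the default, you can adjust it in the same `ConfigureLogging` callback. A minimal sketch, assuming `Microsoft.Extensions.Logging` is imported; the category and levels shown are example values only:

```csharp
builder.ConfigureLogging((context, b) =>
{
    b.AddConsole();
    // Example: raise the minimum level and quiet one noisy category.
    b.SetMinimumLevel(LogLevel.Information);
    b.AddFilter("Microsoft", LogLevel.Warning);
});
```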
+### Republish the project and trigger the function again
-## Trigger the function in Azure
+1. In **Solution Explorer**, right-click the project and select **Publish**.
-1. Make sure you're not running locally (close the console window if it's still open). Otherwise the local instance might be the first to process any queue messages you create.
+1. Use **Cloud Explorer** in Visual Studio to create a queue message as you did [earlier](#test-locally), but enter *Hello App Insights!* as the message text.
-1. In the **Queue** page in Visual Studio, add a message to the queue as before.
+1. In your **Publish** profile page, select the three dots above **Hosting** to show **Hosting profile section actions** and choose **Open in Azure Portal**.
-1. Refresh the **Queue** page, and the new message disappears because it has been processed by the function running in Azure.
+1. In the web app under **Settings**, choose **Application Insights**, and select **View Application Insights data**.
- > [!TIP]
- > When you're testing in Azure, use [development mode](webjobs-sdk-how-to.md#host-development-settings) to ensure that a queue trigger function is invoked right away and avoid delays due to [queue polling exponential backoff](../azure-functions/functions-bindings-storage-queue-trigger.md?tabs=csharp#polling-algorithm).
+1. Select **Search** and then select **See all data in the last 24 hours**.
-### View logs in Application Insights
+ ![Select Search](./media/webjobs-sdk-get-started/select-search.png)
-1. Open the [Azure portal](https://portal.azure.com/), and go to your Application Insights resource.
+1. If you don't see the *Hello App Insights!* message, select **Refresh** periodically for several minutes. Logs don't appear immediately, because it takes a while for the Application Insights client to flush the logs it processes.
-1. Select **Search**.
+ ![Logs in Application Insights](./media/webjobs-sdk-get-started/logs-in-ai.png)
-1. If you don't see the *Hello Azure!* message, select **Refresh** periodically for several minutes.
+## Add input/output bindings
- You see the logs from the function running in a WebJob, including the *Hello Azure!* text that you entered in the preceding section.
+Bindings simplify code that reads and writes data: input bindings simplify code that reads data, and output bindings simplify code that writes data.
-## Add an input binding
+### Add input binding
-Input bindings simplify code that reads data. For this example, the queue message will be a blob name and you'll use the blob name to find and read a blob in Azure Storage.
+Input bindings simplify code that reads data. For this example, the queue message is the name of a blob, which you'll use to find and read a blob in Azure Storage.
1. In *Functions.cs*, replace the `ProcessQueueMessage` method with the following code:
Input bindings simplify code that reads data. For this example, the queue messag
1. Create a blob container in your storage account.
- a. In **Server Explorer** in Visual Studio, expand the node for your storage account, right-click **Blobs**, and then select **Create Blob Container**.
+ a. In **Cloud Explorer** in Visual Studio, expand the node for your storage account, right-click **Blobs**, and then select **Create Blob Container**.
- b. In the **Create Blob Container** dialog, enter *container* as the container name, and then click **OK**.
+ b. In the **Create Blob Container** dialog, enter *container* as the container name, and then select **OK**.
1. Upload the *Program.cs* file to the blob container. (This file is used here as an example; you could upload any text file and create a queue message with the file's name.)
- a. In **Server Explorer**, double-click the node for the container you created.
+ a. In **Cloud Explorer**, double-click the node for the container you created.
b. In the **Container** window, select the **Upload** button.
Input bindings simplify code that reads data. For this example, the queue messag
Size: 532 bytes
Executed 'Functions.ProcessQueueMessage' (Succeeded, Id=5a2ac479-de13-4f41-aae9-1361f291ff88)
```
-
-## Add an output binding
+### Add an output binding
Output bindings simplify code that writes data. This example modifies the previous one by writing a copy of the blob instead of logging its size. Blob storage bindings are included in the Azure Storage extension package that we installed previously.
Output bindings simplify code that writes data. This example modifies the previo
The queue message triggers the function, which then reads the blob, logs its length, and creates a new blob. The console output is the same, but when you go to the blob container window and select **Refresh**, you see a new blob named *copy-Program.cs*.
-## Republish the updates to Azure
+### Republish the project
1. In **Solution Explorer**, right-click the project and select **Publish**.
-1. In the **Publish** dialog, make sure that the current profile is selected and then choose **Publish**. Results of the publish are detailed in the **Output** window.
+1. In the **Publish** dialog, make sure that the current profile is selected and then select **Publish**. Results of the publish are detailed in the **Output** window.
1. Verify the function in Azure by again uploading a file to the blob container and adding a message to the queue that is the name of the uploaded file. You see the message get removed from the queue and a copy of the file created in the blob container.

## Next steps
-This article showed you how to create, run, and deploy a WebJobs SDK 3.x project.
+This tutorial showed you how to create, run, and deploy a WebJobs SDK 3.x project.
> [!div class="nextstepaction"]
-> [Learn more about the WebJobs SDK](webjobs-sdk-how-to.md)
+> [Learn more about the WebJobs SDK](webjobs-sdk-how-to.md)
app-service Webjobs Sdk How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/webjobs-sdk-how-to.md
ms.devlang: dotnet Previously updated : 02/18/2019 Last updated : 06/24/2021
#Customer intent: As an App Services developer, I want to use the WebJobs SDK to be able to execute event-driven code in Azure.
# How to use the Azure WebJobs SDK for event-driven background processing
-This article provides guidance on how to work with the Azure WebJobs SDK. To get started with WebJobs right away, see [Get started with the Azure WebJobs SDK for event-driven background processing](webjobs-sdk-get-started.md).
+This article provides guidance on how to work with the Azure WebJobs SDK. To get started with WebJobs right away, see [Get started with the Azure WebJobs SDK](webjobs-sdk-get-started.md).
## WebJobs SDK versions

These are the key differences between version 3.*x* and version 2.*x* of the WebJobs SDK:

* Version 3.*x* adds support for .NET Core.
-* In version 3.*x*, you need to explicitly install the Storage binding extension required by the WebJobs SDK. In version 2.*x*, the Storage bindings were included in the SDK.
-* Visual Studio tooling for .NET Core (3.*x*) projects differs from tooling for .NET Framework (2.*x*) projects. To learn more, see [Develop and deploy WebJobs using Visual Studio - Azure App Service](webjobs-dotnet-deploy-vs.md).
+* In version 3.*x*, you'll install the Storage binding extension required by the WebJobs SDK. In version 2.*x*, the Storage bindings are included in the SDK.
+* Visual Studio 2019 tooling for .NET Core (3.*x*) projects differs from tooling for .NET Framework (2.*x*) projects. To learn more, see [Develop and deploy WebJobs using Visual Studio - Azure App Service](webjobs-dotnet-deploy-vs.md).
-When possible, examples are provided for both version 3.*x* and version 2.*x*.
+Several sections of this article provide examples for both WebJobs SDK version 3.*x* and version 2.*x*.
-> [!NOTE]
-> [Azure Functions](../azure-functions/functions-overview.md) is built on the WebJobs SDK, and this article provides links to Azure Functions documentation for some topics. Note these differences between Functions and the WebJobs SDK:
-> * Azure Functions version 2.*x* corresponds to WebJobs SDK version 3.*x*, and Azure Functions 1.*x* corresponds to WebJobs SDK 2.*x*. Source code repositories use the WebJobs SDK numbering.
-> * Sample code for Azure Functions C# class libraries is like WebJobs SDK code, except you don't need a `FunctionName` attribute in a WebJobs SDK project.
-> * Some binding types are supported only in Functions, like HTTP (Webhooks) and Event Grid (which is based on HTTP).
->
-> For more information, see [Compare the WebJobs SDK and Azure Functions](../azure-functions/functions-compare-logic-apps-ms-flow-webjobs.md#compare-functions-and-webjobs).
+[Azure Functions](../azure-functions/functions-overview.md) is built on the WebJobs SDK.
+
+ * Azure Functions version 2.*x* is built on WebJobs SDK version 3.*x*.
+ * Azure Functions version 1.*x* is built on WebJobs SDK version 2.*x*.
+
+Source code repositories for both Azure Functions and WebJobs SDK use the WebJobs SDK numbering. Several sections of this how-to article link to Azure Functions documentation.
+
+For more information, see [Compare the WebJobs SDK and Azure Functions](../azure-functions/functions-compare-logic-apps-ms-flow-webjobs.md#compare-functions-and-webjobs).
## WebJobs host
-The host is a runtime container for functions. It listens for triggers and calls functions. In version 3.*x*, the host is an implementation of `IHost`. In version 2.*x*, you use the `JobHost` object. You create a host instance in your code and write code to customize its behavior.
+The host is a runtime container for functions. The host listens for triggers and calls functions. In version 3.*x*, the host is an implementation of `IHost`. In version 2.*x*, you use the `JobHost` object. You create a host instance in your code and write code to customize its behavior.
-This is a key difference between using the WebJobs SDK directly and using it indirectly through Azure Functions. In Azure Functions, the service controls the host, and you can't customize the host by writing code. Azure Functions lets you customize host behavior through settings in the host.json file. Those settings are strings, not code, and this limits the kinds of customizations you can do.
+This is a key difference between using the WebJobs SDK directly and using it indirectly through Azure Functions. In Azure Functions, the service controls the host, and you can't customize the host by writing code. Azure Functions lets you customize host behavior through settings in the host.json file. Those settings are strings, not code, which limits the kinds of customizations you can do.
### Host connection strings
-The WebJobs SDK looks for Azure Storage and Azure Service Bus connection strings in the local.settings.json file when you run locally, or in the environment of the WebJob when you run in Azure. By default, a storage connection string setting named `AzureWebJobsStorage` is required.
+The WebJobs SDK looks for Azure Storage and Azure Service Bus connection strings in the local.settings.json file when you run locally, or in the environment of the WebJob when you run in Azure. By default, the WebJobs SDK requires a storage connection string setting with the name `AzureWebJobsStorage`.
-Version 2.*x* of the SDK lets you use your own names for these connection strings or store them elsewhere. You can set names in code using the [`JobHostConfiguration`], as shown here:
+Version 2.*x* of the SDK doesn't require a specific name. It lets you use your own names for these connection strings and store them elsewhere. You can set names in code using [`JobHostConfiguration`], like this:
```cs
static void Main(string[] args)
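{
    // A minimal sketch completing the example, assuming System.Configuration is
    // referenced. The setting names "MyStorageConnection" and
    // "MyDashboardConnection" are examples only.
    var config = new JobHostConfiguration();
    config.StorageConnectionString =
        ConfigurationManager.ConnectionStrings["MyStorageConnection"].ConnectionString;
    config.DashboardConnectionString =
        ConfigurationManager.ConnectionStrings["MyDashboardConnection"].ConnectionString;
    var host = new JobHost(config);
    host.RunAndBlock();
}
```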
### Host development settings
-You can run the host in development mode to make local development more efficient. Here are some of the settings that are changed when you run in development mode:
+You can run the host in development mode to make local development more efficient. Here are some of the settings that automatically change when you run in development mode:
| Property | Development setting |
| - | - |
static async Task Main()
#### Version 2.*x*
-The `JobHostConfiguration` class has a `UseDevelopmentSettings` method that enables development mode. The following example shows how to use development settings. To make `config.IsDevelopment` return `true` when it runs locally, set a local environment variable named `AzureWebJobsEnv` with the value `Development`.
+The `JobHostConfiguration` class has a `UseDevelopmentSettings` method that enables development mode. The following example shows how to use development settings. To make `config.IsDevelopment` return `true` when it runs locally, set a local environment variable named `AzureWebJobsEnv` with the value `Development`.
```cs
static void Main()
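{
    // A minimal sketch: config.IsDevelopment returns true when the local
    // environment variable AzureWebJobsEnv is set to "Development".
    var config = new JobHostConfiguration();
    if (config.IsDevelopment)
    {
        config.UseDevelopmentSettings();
    }
    var host = new JobHost(config);
    host.RunAndBlock();
}
```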
In version 2.*x*, you control the number of concurrent connections to a host by
All outgoing HTTP requests that you make from a function by using `HttpClient` flow through `ServicePointManager`. After you reach the value set in `DefaultConnectionLimit`, `ServicePointManager` starts queueing requests before sending them. Suppose your `DefaultConnectionLimit` is set to 2 and your code makes 1,000 HTTP requests. Initially, only two requests are allowed through to the OS. The other 998 are queued until there's room for them. That means your `HttpClient` might time out because it appears to have made the request, but the request was never sent by the OS to the destination server. So you might see behavior that doesn't seem to make sense: your local `HttpClient` is taking 10 seconds to complete a request, but your service is returning every request in 200 ms.
-The default value for ASP.NET applications is `Int32.MaxValue`, and that's likely to work well for WebJobs running in a Basic or higher App Service Plan. WebJobs typically need the Always On setting, and that's supported only by Basic and higher App Service Plans.
+The default value for ASP.NET applications is `Int32.MaxValue`, and that's likely to work well for WebJobs running in a Basic or higher App Service Plan. WebJobs typically need the **Always On** setting, and that's supported only by Basic and higher App Service Plans.
If your WebJob is running in a Free or Shared App Service Plan, your application is restricted by the App Service sandbox, which currently has a [connection limit of 300](https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox#per-sandbox-per-appper-site-numerical-limits). With an unbound connection limit in `ServicePointManager`, it's more likely that the sandbox connection threshold will be reached and the site will shut down. In that case, setting `DefaultConnectionLimit` to something lower, like 50 or 100, can prevent this from happening and still allow for sufficient throughput.
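For example, a version 2.*x* WebJob can set the limit before the host is created. A minimal sketch; 100 is an example value, so size it to your workload:

```cs
static void Main(string[] args)
{
    // Must be set before any HTTP requests are made.
    System.Net.ServicePointManager.DefaultConnectionLimit = 100;

    var config = new JobHostConfiguration();
    var host = new JobHost(config);
    host.RunAndBlock();
}
```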
static void Main(string[] args)
## Triggers
+The WebJobs SDK supports the same set of triggers and bindings used by [Azure Functions](../azure-functions/functions-triggers-bindings.md). Note that in the WebJobs SDK, triggers are function-specific and not related to the WebJob deployment type. WebJobs with event-triggered functions created using the SDK should always be published as a _continuous_ WebJob, with _Always on_ enabled.
+ Functions must be public methods and must have one trigger attribute or the [`NoAutomaticTrigger`](#manual-triggers) attribute.

### Automatic triggers
-Automatic triggers call a function in response to an event. Consider this example of a function that's triggered by a message added to Azure Queue storage. It responds by reading a blob from Azure Blob storage:
+Automatic triggers call a function in response to an event. Consider this example of a function that's triggered by a message added to Azure Queue storage. The function responds by reading a blob from Azure Blob storage:
```cs
public static void Run(
}
```
-The `QueueTrigger` attribute tells the runtime to call the function whenever a queue message appears in the `myqueue-items` queue. The `Blob` attribute tells the runtime to use the queue message to read a blob in the *sample-workitems* container. The name of the blob item in the `samples-workitems` container is obtained directly from the queue trigger as a binding expression (`{queueTrigger}`).
+The `QueueTrigger` attribute tells the runtime to call the function whenever a queue message appears in `myqueue-items`. The `Blob` attribute tells the runtime to use the queue message to read a blob in the *samples-workitems* container. The name of the blob item in the `samples-workitems` container is obtained directly from the queue trigger as a binding expression (`{queueTrigger}`).
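Put together, the trigger and input binding from this example might look like the following sketch; the parameter names are illustrative, and `System.IO` and `Microsoft.Extensions.Logging` are assumed:

```cs
public static void Run(
    [QueueTrigger("myqueue-items")] string myQueueItem,
    [Blob("samples-workitems/{queueTrigger}", FileAccess.Read)] Stream myBlob,
    ILogger log)
{
    // The queue message supplies the blob name through {queueTrigger}.
    log.LogInformation($"Blob name: {myQueueItem}, size: {myBlob.Length} bytes");
}
```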
[!INCLUDE [webjobs-always-on-note](../../includes/webjobs-always-on-note.md)]
Input bindings provide a declarative way to make data from Azure or third-party
You can use a method return value for an output binding by applying the attribute to the method return value. See the example in [Using the Azure Function return value](../azure-functions/functions-bindings-return-value.md).
-## Binding types
+### Binding types
The process for installing and managing binding types depends on whether you're using version 3.*x* or version 2.*x* of the SDK. You can find the package to install for a particular binding type in the "Packages" section of that binding type's Azure Functions [reference article](#binding-reference-information). An exception is the Files trigger and binding (for the local file system), which isn't supported by Azure Functions.
static async Task Main()
}
```
-To use the Timer trigger or the Files binding, which are part of core services, call the `AddTimers` or `AddFiles` extension methods, respectively.
+To use the Timer trigger or the Files binding, which are part of core services, call the `AddTimers` or `AddFiles` extension methods.
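For example, a host setup that registers these core-service extensions might look like the following sketch, assuming the Microsoft.Azure.WebJobs.Extensions package is installed:

```csharp
static async Task Main()
{
    var builder = new HostBuilder();
    builder.ConfigureWebJobs(b =>
    {
        b.AddAzureStorageCoreServices();
        b.AddTimers(); // Timer trigger
        b.AddFiles();  // Files trigger and binding (local file system)
    });
    using (var host = builder.Build())
    {
        await host.RunAsync();
    }
}
```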
#### Version 2.*x*
class Program
}
```
-## Binding configuration
+### Binding configuration
You can configure the behavior of some triggers and bindings. The process for configuring them depends on the SDK version.
You can configure the following bindings:
* [SendGrid binding](#sendgrid-binding-configuration-version-3x)
* [Service Bus trigger](#service-bus-trigger-configuration-version-3x)
-### Azure CosmosDB trigger configuration (version 3.*x*)
+#### Azure CosmosDB trigger configuration (version 3.*x*)
This example shows how to configure the Azure Cosmos DB trigger:
static async Task Main()
}
```
-For more details, see the [Azure CosmosDB binding](../azure-functions/functions-bindings-cosmosdb-v2-output.md#hostjson-settings) article.
+For more information, see the [Azure CosmosDB binding](../azure-functions/functions-bindings-cosmosdb-v2-output.md#hostjson-settings) article.
-### Event Hubs trigger configuration (version 3.*x*)
+#### Event Hubs trigger configuration (version 3.*x*)
This example shows how to configure the Event Hubs trigger:
static async Task Main()
}
```
-For more details, see the [Event Hubs binding](../azure-functions/functions-bindings-event-hubs.md#host-json) article.
+For more information, see the [Event Hubs binding](../azure-functions/functions-bindings-event-hubs.md#hostjson-settings) article.
### Queue storage trigger configuration
-These examples show how to configure the Queue storage trigger:
+The following examples show how to configure the Queue storage trigger.
#### Version 3.*x*
static async Task Main()
}
```
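For example, a version 3.*x* host might apply custom queue settings like this sketch; the values are examples to tune for your workload, and the Microsoft.Azure.WebJobs.Extensions.Storage package is assumed:

```csharp
static async Task Main()
{
    var builder = new HostBuilder();
    builder.ConfigureWebJobs(b =>
    {
        b.AddAzureStorageCoreServices();
        b.AddAzureStorage(a =>
        {
            a.BatchSize = 8;
            a.NewBatchThreshold = 4;
            a.MaxDequeueCount = 4;
            a.MaxPollingInterval = TimeSpan.FromSeconds(15);
        });
    });
    using (var host = builder.Build())
    {
        await host.RunAsync();
    }
}
```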
-For more details, see the [Queue storage binding](../azure-functions/functions-bindings-storage-queue-trigger.md#hostjson-properties) article.
+For more information, see the [Queue storage binding](../azure-functions/functions-bindings-storage-queue-trigger.md#hostjson-properties) article.
#### Version 2.*x*
static void Main(string[] args)
}
```
-For more details, see the [host.json v1.x reference](../azure-functions/functions-host-json-v1.md#queues).
+For more information, see the [host.json v1.x reference](../azure-functions/functions-host-json-v1.md#queues).
### SendGrid binding configuration (version 3.*x*)
static async Task Main()
}
```
-For more details, see the [SendGrid binding](../azure-functions/functions-bindings-sendgrid.md#hostjson-settings) article.
+For more information, see the [SendGrid binding](../azure-functions/functions-bindings-sendgrid.md#hostjson-settings) article.
### Service Bus trigger configuration (version 3.*x*)
For more details, see the [Service Bus binding](../azure-functions/functions-bin
### Configuration for other bindings
-Some trigger and binding types define their own custom configuration types. For example, the File trigger lets you specify the root path to monitor, as in these examples:
+Some trigger and binding types define their own custom configuration types. For example, the File trigger lets you specify the root path to monitor, as in the following examples.
#### Version 3.*x*
static void Main()
}
```
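For example, a version 3.*x* host might set the root folder for the Files trigger like this sketch; the path is an example only:

```csharp
static async Task Main()
{
    var builder = new HostBuilder();
    builder.ConfigureWebJobs(b =>
    {
        b.AddAzureStorageCoreServices();
        // Example path; point this at the folder you want to monitor.
        b.AddFiles(a => a.RootPath = @"c:\data\import");
    });
    using (var host = builder.Build())
    {
        await host.RunAsync();
    }
}
```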
-## Binding expressions
+### Binding expressions
In attribute constructor parameters, you can use expressions that resolve to values from various sources. For example, in the following code, the path for the `BlobTrigger` attribute creates an expression named `filename`. When used for the output binding, `filename` resolves to the name of the triggering blob.
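A sketch of that pattern; the container names are examples, and `System.IO` and `Microsoft.Extensions.Logging` are assumed:

```csharp
public static void CreateThumbnail(
    [BlobTrigger("sample-images/{filename}")] Stream image,
    [Blob("sample-images-sm/{filename}", FileAccess.Write)] Stream imageSmall,
    string filename,
    ILogger logger)
{
    // {filename} from the trigger path binds to both the output path
    // and the string parameter.
    logger.LogInformation($"Creating thumbnail for {filename}");
}
```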
Pass your `NameResolver` class in to the `JobHost` object, as shown here:
Azure Functions implements `INameResolver` to get values from app settings, as shown in the example. When you use the WebJobs SDK directly, you can write a custom implementation that gets placeholder replacement values from whatever source you prefer.
-## Binding at runtime
+### Binding at runtime
If you need to do some work in your function before you use a binding attribute like `Queue`, `Blob`, or `Table`, you can use the `IBinder` interface.
public static void CreateQueueMessage(
For more information, see [Binding at runtime](../azure-functions/functions-dotnet-class-library.md#binding-at-runtime) in the Azure Functions documentation.
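For example, the following sketch binds an output queue whose name is computed at run time; the naming logic is illustrative:

```csharp
public static async Task CreateQueueMessage(
    [QueueTrigger("inputqueue")] string value,
    IBinder binder)
{
    // Decide the output queue name at run time, then bind to it.
    string outputQueueName = "output-" + DateTime.UtcNow.ToString("yyyyMMdd");
    IAsyncCollector<string> messages =
        await binder.BindAsync<IAsyncCollector<string>>(new QueueAttribute(outputQueueName));
    await messages.AddAsync(value);
}
```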
-## Binding reference information
+### Binding reference information
The Azure Functions documentation provides reference information about each binding type. You'll find the following information in each binding reference article. (This example is based on Storage queue.)
The Azure Functions documentation provides reference information about each bind
* [Attributes](../azure-functions/functions-bindings-storage-queue-trigger.md#attributes-and-annotations). The attributes to use for the binding type.
* [Configuration](../azure-functions/functions-bindings-storage-queue-trigger.md#configuration). Explanations of the attribute properties and constructor parameters.
* [Usage](../azure-functions/functions-bindings-storage-queue-trigger.md#usage). The types you can bind to and information about how the binding works. For example: polling algorithm, poison queue processing.
+
+> [!NOTE]
+> The HTTP, Webhooks, and Event Grid bindings are supported only by Azure Functions, not by the WebJobs SDK.
-For a list of binding reference articles, see "Supported bindings" in the [Triggers and bindings](../azure-functions/functions-triggers-bindings.md#supported-bindings) article for Azure Functions. In that list, the HTTP, Webhooks, and Event Grid bindings are supported only by Azure Functions, not by the WebJobs SDK.
+For a full list of bindings supported in the Azure Functions runtime, see [Supported bindings](../azure-functions/functions-triggers-bindings.md#supported-bindings).
+
+## Attributes for Disable, Timeout, and Singleton
+With these attributes, you can control function triggering, cancel functions, and ensure that only one instance of a function runs.
-## Disable attribute
+### Disable attribute
The [`Disable`](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/DisableAttribute.cs) attribute lets you control whether a function can be triggered.
When you change app setting values in the Azure portal, the WebJob restarts to p
The attribute can be declared at the parameter, method, or class level. The setting name can also contain binding expressions.
-## Timeout attribute
+### Timeout attribute
The [`Timeout`](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/TimeoutAttribute.cs) attribute causes a function to be canceled if it doesn't finish within a specified amount of time. In the following example, the function would run for one day without the Timeout attribute. Timeout causes the function to be canceled after 15 seconds.
public static async Task TimeoutJob(
You can apply the Timeout attribute at the class or method level, and you can specify a global timeout by using `JobHostConfiguration.FunctionTimeout`. Class-level or method-level timeouts override global timeouts.
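A sketch of the method-level usage described above; the queue name and body are illustrative:

```csharp
[Timeout("00:00:15")]
public static async Task TimeoutJob(
    [QueueTrigger("myqueue-items")] string message,
    CancellationToken token,
    ILogger logger)
{
    logger.LogInformation("Job starting");
    // Would run for a day; Timeout cancels the token after 15 seconds.
    await Task.Delay(TimeSpan.FromDays(1), token);
    logger.LogInformation("Job completed");
}
```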
-## Singleton attribute
+### Singleton attribute
-The [`Singleton`](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/SingletonAttribute.cs) attribute ensures that only one instance of a function runs, even when there are multiple instances of the host web app. It does this by using [distributed locking](#viewing-lease-blobs).
+The [`Singleton`](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/SingletonAttribute.cs) attribute ensures that only one instance of a function runs, even when there are multiple instances of the host web app. The `Singleton` attribute uses [distributed locking](#viewing-lease-blobs) to enforce this behavior.
In this example, only a single instance of the `ProcessImage` function runs at any given time:
public static async Task ProcessImage([BlobTrigger("images")] Stream image)
}
```
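A sketch of the attribute usage; the method body is illustrative:

```csharp
[Singleton]
public static async Task ProcessImage([BlobTrigger("images")] Stream image)
{
    // Only one invocation runs at a time, across all host instances.
    await Task.Delay(TimeSpan.FromSeconds(1)); // placeholder for your processing logic
}
```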
-### SingletonMode.Listener
+#### SingletonMode.Listener
Some triggers have built-in support for concurrency management:
You can use these settings to ensure that your function runs as a singleton on a
> [!NOTE]
> See this [GitHub repo](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/SingletonMode.cs) to learn more about how `SingletonMode.Function` works.
-### Scope values
+#### Scope values
-You can specify a *scope expression/value* on a singleton. The expression/value ensures that all executions of the function at a specific scope will be serialized. Implementing more granular locking in this way can allow for some level of parallelism for your function while serializing other invocations as dictated by your requirements. For example, in the following code, the scope expression binds to the `Region` value of the incoming message. When the queue contains three messages in regions East, East, and West respectively, the messages that have region East are run serially while the message with region West is run in parallel with those in East.
+You can specify a *scope expression/value* on a singleton. The expression/value ensures that all executions of the function at a specific scope will be serialized. Implementing more granular locking in this way can allow for some level of parallelism for your function while serializing other invocations as dictated by your requirements. For example, in the following code, the scope expression binds to the `Region` value of the incoming message. When the queue contains three messages in regions East, East, and West, the messages that have region East are run serially. The message with region West is run in parallel with those in region East.
```csharp
[Singleton("{Region}")]
public class WorkItem
}
```
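A fuller sketch of this scope example; the POCO extends the fragment above, and the function body is illustrative:

```csharp
[Singleton("{Region}")]
public static async Task ProcessWorkItem([QueueTrigger("workitems")] WorkItem workItem)
{
    // Messages that share a Region value run serially; messages with
    // different Region values can run in parallel.
    await Task.Delay(TimeSpan.FromSeconds(1));
}

public class WorkItem
{
    public int ID { get; set; }
    public string Region { get; set; }
    public string Description { get; set; }
}
```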
-### SingletonScope.Host
+#### SingletonScope.Host
The default scope for a lock is `SingletonScope.Function`, meaning the lock scope (the blob lease path) is tied to the fully qualified function name. To lock across functions, specify `SingletonScope.Host` and use a scope ID name that's the same across all functions that you don't want to run simultaneously. In the following example, only one instance of `AddItem` or `RemoveItem` runs at a time:
public static void RemoveItem([QueueTrigger("remove-item")] string message)
}
```
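A sketch of the host-scope lock described above; the scope ID "ItemsLock" is an example:

```csharp
[Singleton("ItemsLock", SingletonScope.Host)]
public static void AddItem([QueueTrigger("add-item")] string message)
{
    // Shares the "ItemsLock" lease with RemoveItem, so the two never run concurrently.
}

[Singleton("ItemsLock", SingletonScope.Host)]
public static void RemoveItem([QueueTrigger("remove-item")] string message)
{
}
```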
-### Viewing lease blobs
+## Viewing lease blobs
The WebJobs SDK uses [Azure blob leases](../storage/blobs/concurrency-manage.md#pessimistic-concurrency-for-blobs) under the covers to implement distributed locking. The lease blobs used by Singleton can be found in the `azure-webjobs-host` container in the `AzureWebJobsStorage` storage account under the path "locks". For example, the lease blob path for the first `ProcessImage` example shown earlier might be `locks/061851c758f04938a4426aa9ab3869c0/WebJobs.Functions.ProcessImage`. All paths include the JobHost ID, in this case 061851c758f04938a4426aa9ab3869c0.
config.LoggerFactory = new LoggerFactory()
### Custom telemetry for Application Insights
-The process for implementing custom telemetry for [Application Insights](../azure-monitor/app/app-insights-overview.md) depends on the SDK version. To learn how to configure Application Insights, see [Add Application Insights logging](webjobs-sdk-get-started.md#add-application-insights-logging).
+The process for implementing custom telemetry for [Application Insights](../azure-monitor/app/app-insights-overview.md) depends on the SDK version. To learn how to configure Application Insights, see [Add Application Insights logging](webjobs-sdk-get-started.md#enable-application-insights-logging).
#### Version 3.*x*
This article has provided code snippets that show how to handle common scenarios
[`ConfigureServices`]: /dotnet/api/microsoft.extensions.hosting.hostinghostbuilderextensions.configureservices
[`ITelemetryInitializer`]: /dotnet/api/microsoft.applicationinsights.extensibility.itelemetryinitializer
[`TelemetryConfiguration`]: /dotnet/api/microsoft.applicationinsights.extensibility.telemetryconfiguration
-[`JobHostConfiguration`]: https://github.com/Azure/azure-webjobs-sdk/blob/v2.x/src/Microsoft.Azure.WebJobs.Host/JobHostConfiguration.cs
+[`JobHostConfiguration`]: https://github.com/Azure/azure-webjobs-sdk/blob/v2.x/src/Microsoft.Azure.WebJobs.Host/JobHostConfiguration.cs
automation Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/how-to/private-link-security.md
To understand & configure Update Management review [About Update Management](../
If you want machines configured for Update Management to connect to the Automation account and Log Analytics workspace securely over a Private Link channel, you must enable Private Link for the Log Analytics workspace that's linked to the Automation account configured with Private Link.
-You can control how a Log Analytics workspace can be reached from outside of the Private Link scopes by following the steps described in [Configure Log Analytics](../../azure-monitor/logs/private-link-security.md#configure-log-analytics). If you set **Allow public network access for ingestion** to **No**, then machines outside of the connected scopes cannot upload data to this workspace. If you set **Allow public network access for queries** to **No**, then machines outside of the scopes cannot access data in this workspace.
+You can control how a Log Analytics workspace can be reached from outside of the Private Link scopes by following the steps described in [Configure Log Analytics](../../azure-monitor/logs/private-link-security.md#configure-access-to-your-resources). If you set **Allow public network access for ingestion** to **No**, then machines outside of the connected scopes cannot upload data to this workspace. If you set **Allow public network access for queries** to **No**, then machines outside of the scopes cannot access data in this workspace.
Use the **DSCAndHybridWorker** target sub-resource to enable Private Link for user and system hybrid workers.
azure-cache-for-redis Cache Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-configure.md
New Azure Cache for Redis instances are configured with the following default Re
* P4 (53 GB - 530 GB) - up to 64 databases
* All premium caches with Redis cluster enabled - Redis cluster only supports use of database 0 so the `databases` limit for any premium cache with Redis cluster enabled is effectively 1 and the [Select](https://redis.io/commands/select) command is not allowed. For more information, see [Do I need to make any changes to my client application to use clustering?](cache-how-to-premium-clustering.md#do-i-need-to-make-any-changes-to-my-client-application-to-use-clustering)
-For more information about databases, see [What are Redis databases?](cache-development-faq.md#what-are-redis-databases)
+For more information about databases, see [What are Redis databases?](cache-development-faq.yml#what-are-redis-databases-)
> [!NOTE]
> The `databases` setting can be configured only during cache creation and only using PowerShell, CLI, or other management clients. For an example of configuring `databases` during cache creation using PowerShell, see [New-AzRedisCache](cache-how-to-manage-redis-cache-powershell.md#databases).
For information on moving resources from one resource group to another, and from
## Next steps
-* For more information on working with Redis commands, see [How can I run Redis commands?](cache-development-faq.md#how-can-i-run-redis-commands)
+* For more information on working with Redis commands, see [How can I run Redis commands?](cache-development-faq.yml#how-can-i-run-redis-commands-)
azure-cache-for-redis Cache Development Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-development-faq.md
- Title: Azure Cache for Redis development FAQs
-description: Learn the answers to common questions that help you develop for Azure Cache for Redis
----- Previously updated : 08/06/2020-
-# Azure Cache for Redis development FAQs
-
-This article provides answers to common questions about how to develop for Azure Cache for Redis.
-
-## Common questions and answers
-
-This section covers the following FAQs:
-
-* [How can I get started with Azure Cache for Redis?](#how-can-i-get-started-with-azure-cache-for-redis)
-* [What do the StackExchange.Redis configuration options do?](#what-do-the-stackexchangeredis-configuration-options-do)
-* [What Azure Cache for Redis clients can I use?](#what-azure-cache-for-redis-clients-can-i-use)
-* [Is there a local emulator for Azure Cache for Redis?](#is-there-a-local-emulator-for-azure-cache-for-redis)
-* [How can I run Redis commands?](#how-can-i-run-redis-commands)
-* [Why doesn't Azure Cache for Redis have an MSDN class library reference?](#why-doesnt-azure-cache-for-redis-have-an-msdn-class-library-reference)
-* [Can I use Azure Cache for Redis as a PHP session cache?](#can-i-use-azure-cache-for-redis-as-a-php-session-cache)
-* [What are Redis databases?](#what-are-redis-databases)
-
-### How can I get started with Azure Cache for Redis?
-
-There are several ways you can get started with Azure Cache for Redis.
-
-* You can check out one of our tutorials available for [.NET](cache-dotnet-how-to-use-azure-redis-cache.md), [ASP.NET](cache-web-app-howto.md), [Java](cache-java-get-started.md), [Node.js](cache-nodejs-get-started.md), and [Python](cache-python-get-started.md).
-* You can watch [How to Build High-Performance Apps Using Microsoft Azure Cache for Redis](https://azure.microsoft.com/documentation/videos/how-to-build-high-performance-apps-using-microsoft-azure-cache/).
-* You can check out the client documentation for the example clients that match the development language you use in your project. There are many Redis clients that can be used with Azure Cache for Redis. For a list of Redis clients, see [https://redis.io/clients](https://redis.io/clients).
-
-If you don't already have an Azure account, you can:
-
-* [Open an Azure account for free](https://azure.microsoft.com/pricing/free-trial/?WT.mc_id=redis_cache_hero). You get credits that can be used to try out paid Azure services. Even after the credits are used up, you can keep the account and use free Azure services and features.
-* [Activate Visual Studio subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?WT.mc_id=redis_cache_hero). Your MSDN subscription gives you credits every month that you can use for paid Azure services.
-
-### What do the StackExchange.Redis configuration options do?
-
-StackExchange.Redis has many options. This section talks about some of the common settings. For more detailed information about StackExchange.Redis options, see [StackExchange.Redis configuration](https://stackexchange.github.io/StackExchange.Redis/Configuration).
-
-| ConfigurationOptions | Description | Recommendation |
-| | | |
-| AbortOnConnectFail |When set to true, the connection can't reconnect after a network failure. |Set to false and let StackExchange.Redis reconnect automatically. |
-| ConnectRetry |The number of times to repeat connection attempts during initial connect. |See the following notes for guidance. |
-| ConnectTimeout |Timeout in ms for connect operations. |See the following notes for guidance. |
-
-Usually the default values of the client are sufficient. You can fine-tune the options based on your workload.
-
-* **Retries**
- * For ConnectRetry and ConnectTimeout, the general guidance is to fail fast and retry again. This guidance is based on your workload and how much time, on average, it takes for your client to issue a Redis command and receive a response.
- * Let StackExchange.Redis automatically reconnect instead of checking connection status and reconnecting yourself. **Avoid using the ConnectionMultiplexer.IsConnected property**.
- * Snowballing - you might run into an issue where you're retrying and the retries snowball and never recover. If snowballing occurs, consider using an exponential backoff retry algorithm as described in [Retry general guidance](/azure/architecture/best-practices/transient-faults) published by the Microsoft Patterns & Practices group.
-
-* **Timeout values**
- * Consider your workload and set the values to match. If you're storing large values, set the timeout to a higher value.
- * Set `AbortOnConnectFail` to false and let StackExchange.Redis reconnect for you.
- * Use a single ConnectionMultiplexer instance for the application. You can use a LazyConnection to create a single instance that is returned by a Connection property, as shown in [Connect to the cache using the ConnectionMultiplexer class](cache-dotnet-how-to-use-azure-redis-cache.md#connect-to-the-cache).
- * Set the `ConnectionMultiplexer.ClientName` property to an app instance unique name for diagnostic purposes.
- * Use multiple `ConnectionMultiplexer` instances for custom workloads.
- * You can follow this model if you have varying load in your application. For example:
- * You can have one multiplexer for dealing with large keys.
- * You can have one multiplexer for dealing with small keys.
- * You can set different values for connection timeouts and retry logic for each ConnectionMultiplexer that you use.
- * Set the `ClientName` property on each multiplexer to help with diagnostics.
- * This guidance may lead to more streamlined latency per `ConnectionMultiplexer`.
-
-### What Azure Cache for Redis clients can I use?
-
-One of the great things about Redis is that there are many clients supporting many different development languages. For a current list of clients, see [Redis clients](https://redis.io/clients). For tutorials that cover several different languages and clients, see [How to use Azure Cache for Redis](cache-dotnet-how-to-use-azure-redis-cache.md).
--
-### Is there a local emulator for Azure Cache for Redis?
-
-There's no local emulator for Azure Cache for Redis. You can run the MSOpenTech version of redis-server.exe from the [Redis command-line tools](https://github.com/MSOpenTech/redis/releases/) on your local machine. Then, connect to it to get a similar experience to a local cache emulator, as shown in the following example:
-
-```csharp
-private static Lazy<ConnectionMultiplexer>
- lazyConnection = new Lazy<ConnectionMultiplexer> (() =>
- {
- // Connect to a locally running instance of Redis to simulate
- // a local cache emulator experience.
- return ConnectionMultiplexer.Connect("127.0.0.1:6379");
- });
-
-public static ConnectionMultiplexer Connection
-{
- get
- {
- return lazyConnection.Value;
- }
-}
-```
-
-You can optionally configure a [redis.conf](https://redis.io/topics/config) file to more closely match the [default cache settings](cache-configure.md#default-redis-server-configuration) for your online Azure Cache for Redis if you want.
-
-### How can I run Redis commands?
-
-You can use any of the commands listed at [Redis commands](https://redis.io/commands#) except for the commands listed at [Redis commands not supported in Azure Cache for Redis](cache-configure.md#redis-commands-not-supported-in-azure-cache-for-redis). You have several options to run Redis commands.
-
-* If you have a Standard or Premium cache, you can run Redis commands using the [Redis Console](cache-configure.md#redis-console). The Redis console provides a secure way to run Redis commands in the Azure portal.
-* You can also use the Redis command-line tools. To use them, do the following steps:
-* Download the [Redis command-line tools](https://github.com/MSOpenTech/redis/releases/).
-* Connect to the cache using `redis-cli.exe`. Pass in the cache endpoint using the -h switch and the key using -a as shown in the following example:
-* `redis-cli -h <Azure Cache for Redis name>.redis.cache.windows.net -a <key>`
-
-> [!NOTE]
-> The Redis command-line tools do not work with the TLS port, but you can use a utility such as `stunnel` to securely connect the tools to the TLS port by following the directions in the [How to use the Redis command-line tool with Azure Cache for Redis](./cache-how-to-redis-cli-tool.md) article.
->
->
-
-### Why doesn't Azure Cache for Redis have an MSDN class library reference?
-
-Microsoft Azure Cache for Redis is based on the popular open-source in-memory data store, Redis. It can be accessed by a wide variety of [Redis clients](https://redis.io/clients) for many programming languages. Each client has its own API that makes calls to the Azure Cache for Redis instance using [Redis commands](https://redis.io/commands).
-
-Because each client is different, you can't find one centralized class reference on MSDN. Each client maintains its own reference documentation. Besides the reference documentation, there are several tutorials showing how to get started with Azure Cache for Redis using different languages and cache clients. To access these tutorials, see [How to use Azure Cache for Redis](cache-dotnet-how-to-use-azure-redis-cache.md) and its sibling articles in the table of contents.
-
-### Can I use Azure Cache for Redis as a PHP session cache?
-
-Yes, to use Azure Cache for Redis as a PHP session cache, specify the connection string to your Azure Cache for Redis instance in `session.save_path`.
-
-> [!IMPORTANT]
-> When using Azure Cache for Redis as a PHP session cache, you must URL encode the security key used to connect to the cache, as shown in the following example:
->
-> `session.save_path = "tcp://mycache.redis.cache.windows.net:6379?auth=<url encoded primary or secondary key here>";`
->
-> If the key is not URL encoded, you may receive an exception with a message like: `Failed to parse session.save_path`
->
-
-For more information about using Azure Cache for Redis as a PHP session cache with the PhpRedis client, see [PHP Session handler](https://github.com/phpredis/phpredis#php-session-handler).
-
-### What are Redis databases?
-
-Redis Databases are just a logical separation of data within the same Redis instance. The cache memory is shared between all the databases and actual memory consumption of a given database depends on the keys/values stored in that database. For example, a C6 cache has 53 GB of memory, and a P5 has 120 GB. You can choose to put all 53 GB / 120 GB into one database or you can split it up between multiple databases.
-
-> [!NOTE]
-> When using a Premium Azure Cache for Redis with clustering enabled, only database 0 is available. This limitation is an intrinsic Redis limitation and is not specific to Azure Cache for Redis. For more information, see [Do I need to make any changes to my client application to use clustering?](cache-how-to-premium-clustering.md#do-i-need-to-make-any-changes-to-my-client-application-to-use-clustering).
->
->
-
-## Next steps
-
-Learn about other [Azure Cache for Redis FAQs](cache-faq.yml).
azure-cache-for-redis Cache Dotnet Core Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-dotnet-core-quickstart.md
static void Main(string[] args)
Save *Program.cs*.
-Azure Cache for Redis has a configurable number of databases (default of 16) that can be used to logically separate the data within an Azure Cache for Redis. The code connects to the default database, DB 0. For more information, see [What are Redis databases?](cache-development-faq.md#what-are-redis-databases) and [Default Redis server configuration](cache-configure.md#default-redis-server-configuration).
+Azure Cache for Redis has a configurable number of databases (default of 16) that can be used to logically separate the data within an Azure Cache for Redis. The code connects to the default database, DB 0. For more information, see [What are Redis databases?](cache-development-faq.yml#what-are-redis-databases-) and [Default Redis server configuration](cache-configure.md#default-redis-server-configuration).
Cache items can be stored and retrieved by using the `StringSet` and `StringGet` methods.
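For example, with StackExchange.Redis, and assuming the `Connection` property defined earlier in the quickstart; the key and value are examples:

```csharp
IDatabase cache = Connection.GetDatabase();

// Write a value, then read it back.
cache.StringSet("Message", "Hello! The cache is working from a .NET Core app!");
string message = cache.StringGet("Message");
```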
azure-cache-for-redis Cache Dotnet How To Use Azure Redis Cache https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-dotnet-how-to-use-azure-redis-cache.md
static void Main(string[] args)
} ```
-Azure Cache for Redis has a configurable number of databases (default of 16) that can be used to logically separate the data within an Azure Cache for Redis. The code connects to the default database, DB 0. For more information, see [What are Redis databases?](cache-development-faq.md#what-are-redis-databases) and [Default Redis server configuration](cache-configure.md#default-redis-server-configuration).
+Azure Cache for Redis has a configurable number of databases (default of 16) that can be used to logically separate the data within an Azure Cache for Redis. The code connects to the default database, DB 0. For more information, see [What are Redis databases?](cache-development-faq.yml#what-are-redis-databases-) and [Default Redis server configuration](cache-configure.md#default-redis-server-configuration).
Cache items can be stored and retrieved by using the `StringSet` and `StringGet` methods.
azure-cache-for-redis Cache Management Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-management-faq.md
- Title: Azure Cache for Redis management FAQs
-description: Learn the answers to common questions that help you manage Azure Cache for Redis
----- Previously updated : 08/06/2020-
-# Azure Cache for Redis management FAQs
-
-This article provides answers to common questions about how to manage Azure Cache for Redis.
-
-## Common questions and answers
-
-This section covers the following FAQs:
-
-* [When should I enable the non-TLS/SSL port for connecting to Redis?](#when-should-i-enable-the-non-tlsssl-port-for-connecting-to-redis)
-* [What are some production best practices?](#what-are-some-production-best-practices)
-* [What are some of the considerations when using common Redis commands?](#what-are-some-of-the-considerations-when-using-common-redis-commands)
-* [How can I benchmark and test the performance of my cache?](#how-can-i-benchmark-and-test-the-performance-of-my-cache)
-* [Important details about ThreadPool growth](#important-details-about-threadpool-growth)
-* [Enable server GC to get more throughput on the client when using StackExchange.Redis](#enable-server-gc-to-get-more-throughput-on-the-client-when-using-stackexchangeredis)
-* [Performance considerations around connections](#performance-considerations-around-connections)
-
-### When should I enable the non-TLS/SSL port for connecting to Redis?
-
-Redis server doesn't natively support TLS, but Azure Cache for Redis does. If you're connecting to Azure Cache for Redis and your client supports TLS, like StackExchange.Redis, then use TLS.
-
->[!NOTE]
->The non-TLS port is disabled by default for new Azure Cache for Redis instances. If your client does not support TLS, then you must enable the non-TLS port by following the directions in the [Access ports](cache-configure.md#access-ports) section of the [Configure a cache in Azure Cache for Redis](cache-configure.md) article.
->
->
-
-Redis tools such as `redis-cli` don't work with the TLS port, but you can use a utility such as `stunnel` to securely connect the tools to the TLS port by following the directions in the [Announcing ASP.NET Session State Provider for Redis Preview Release](https://devblogs.microsoft.com/aspnet/announcing-asp-net-session-state-provider-for-redis-preview-release/) blog post.
-
-For instructions on downloading the Redis tools, see the [How can I run Redis commands?](cache-development-faq.md#how-can-i-run-redis-commands) section.
-
-### What are some production best practices?
-
-* [StackExchange.Redis best practices](#stackexchangeredis-best-practices)
-* [Configuration and concepts](#configuration-and-concepts)
-* [Performance testing](#performance-testing)
-
-#### StackExchange.Redis best practices
-
-* Set `AbortConnect` to false, then let the ConnectionMultiplexer reconnect automatically. [See here for details](https://gist.github.com/JonCole/36ba6f60c274e89014dd#file-se-redis-setabortconnecttofalse-md).
-* Reuse the ConnectionMultiplexer - don't create a new one for each request. Instead use this pattern. The `Lazy<ConnectionMultiplexer>` pattern [shown here](cache-dotnet-how-to-use-azure-redis-cache.md#connect-to-the-cache).
-* Redis works best with smaller values, so consider chopping up bigger data into multiple keys. In [this Redis discussion](https://groups.google.com/forum/#!searchin/redis-db/size/redis-db/n7aa2A4DZDs/3OeEPHSQBAAJ), 100 kb is considered large. Read [this article](https://gist.github.com/JonCole/db0e90bedeb3fc4823c2#large-requestresponse-size) for an example problem that can be caused by large values.
-* Configure your [ThreadPool settings](#important-details-about-threadpool-growth) to avoid timeouts.
-* Use at least the default connectTimeout of 5 seconds. This interval gives StackExchange.Redis sufficient time to re-establish the connection if there's a network blip.
-* Be aware of the performance costs associated with different operations you're running. For instance, the `KEYS` command is an O(n) operation and should be avoided. The [redis.io site](https://redis.io/commands/) has details around the time complexity for each operation that it supports. Select each command to see the complexity for each operation.
-
-#### Configuration and concepts
-
-* Use Standard or Premium Tier for Production systems. The Basic Tier is a single node system with no data replication and no SLA. Also, use at least a C1 cache. C0 caches are typically used for simple dev/test scenarios.
-* Remember that Redis is an **In-Memory** data store. Read [this article](https://gist.github.com/JonCole/b6354d92a2d51c141490f10142884ea4#file-whathappenedtomydatainredis-md) so that you're aware of scenarios where data loss can occur.
-* Develop your system such that it can handle connection blips [caused by patching and failover](https://gist.github.com/JonCole/317fe03805d5802e31cfa37e646e419d#file-azureredis-patchingexplained-md).
-
-#### Performance testing
-
-* Start by using `redis-benchmark.exe` to get a feel for possible throughput before writing your own perf tests. Because `redis-benchmark` doesn't support TLS, you must [enable the Non-TLS port through the Azure portal](cache-configure.md#access-ports) before you run the test. For examples, see [How can I benchmark and test the performance of my cache?](#how-can-i-benchmark-and-test-the-performance-of-my-cache)
-* The client VM used for testing should be in the same region as your Azure Cache for Redis instance.
-* We recommend using Dv2 VM Series for your client as they have better hardware and should give the best results.
-* Make sure the client VM you choose has at least as much computing and bandwidth capability as the cache you're testing.
-* Enable VRSS on the client machine if you are on Windows. [See here for details](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn383582(v=ws.11)).
-* Premium tier Redis instances have better network latency and throughput because they're running on better hardware for both CPU and Network.
-
-### What are some of the considerations when using common Redis commands?
-
-* Avoid using certain Redis commands that take a long time to complete, unless you fully understand the result of these commands. For example, don't run the [KEYS](https://redis.io/commands/keys) command in production. Depending on the number of keys, it could take a long time to return. Redis is a single-threaded server and it processes commands one at a time. If you have other commands issued after KEYS, they're not be processed until Redis processes the KEYS command. The [redis.io site](https://redis.io/commands/) has details around the time complexity for each operation that it supports. Select each command to see the complexity for each operation.
-* Key sizes - should I use small key/values or large key/values? It depends on the scenario. If your scenario requires larger keys, you can adjust the ConnectionTimeout, then retry values and adjust your retry logic. From a Redis server perspective, smaller values give better performance.
-* These considerations don't mean that you can't store larger values in Redis; you must be aware of the following considerations. Latencies will be higher. If you have one set of data that is larger and one that is smaller, you can use multiple ConnectionMultiplexer instances. Configure each with a different set of timeout and retry values, as described in the previous [What do the StackExchange.Redis configuration options do](cache-development-faq.md#what-do-the-stackexchangeredis-configuration-options-do) section.
-
-### How can I benchmark and test the performance of my cache?
-
-* [Enable cache diagnostics](cache-how-to-monitor.md#enable-cache-diagnostics) so you can [monitor](cache-how-to-monitor.md) the health of your cache. You can view the metrics in the Azure portal and you can also [download and review](https://github.com/rustd/RedisSamples/tree/master/CustomMonitoring) them using the tools of your choice.
-* You can use redis-benchmark.exe to load test your Redis server.
-* Ensure that the load testing client and the Azure Cache for Redis are in the same region.
-* Use redis-cli.exe and monitor the cache using the INFO command.
-* If your load is causing high memory fragmentation, you should scale up to a larger cache size.
-* For instructions on downloading the Redis tools, see the [How can I run Redis commands?](cache-development-faq.md#how-can-i-run-redis-commands) section.
-
-Here are some examples of using redis-benchmark.exe. Run these commands from a VM in the same region as your cache for accurate results.
-
-* Test Pipelined SET requests using a 1k payload
-
- `redis-benchmark.exe -h **yourcache**.redis.cache.windows.net -a **yourAccesskey** -t SET -n 1000000 -d 1024 -P 50`
-* Test Pipelined GET requests using a 1k payload.
-
->[!NOTE]
-> Run the SET test shown above first to populate cache
->
-
- `redis-benchmark.exe -h **yourcache**.redis.cache.windows.net -a **yourAccesskey** -t GET -n 1000000 -d 1024 -P 50`
-
-### Important details about ThreadPool growth
-
-The CLR ThreadPool has two types of threads - "Worker" and "I/O Completion Port" (IOCP) threads.
-
-* Worker threads are used for things like processing the `Task.Run(…)`, or `ThreadPool.QueueUserWorkItem(…)` methods. These threads are also used by various components in the CLR when work needs to happen on a background thread.
-* IOCP threads are used when asynchronous IO happens, such as when reading from the network.
-
-The thread pool provides new worker threads or I/O completion threads on demand (without any throttling) until it reaches the "Minimum" setting for each type of thread. By default, the minimum number of threads is set to the number of processors on a system.
-
-Once the number of existing (busy) threads hits the "minimum" number of threads, the ThreadPool will throttle the rate at which it injects new threads to one thread per 500 milliseconds. Typically, if your system gets a burst of work needing an IOCP thread, it will process that work quickly. However, if the burst is more than the configured "Minimum" setting, there's some delay in processing some of the work as the ThreadPool waits for one of two possibilities:
-
-* An existing thread becomes free to process the work.
-* No existing thread becomes free within 500 ms, so a new thread is created.
-
-Basically, when the number of Busy threads is greater than Min threads, you're likely paying a 500-ms delay before network traffic is processed by the application. Also, when an existing thread stays idle for longer than 15 seconds, it's cleaned up and this cycle of growth and shrinkage can repeat.
-
-If we look at an example error message from StackExchange.Redis (build 1.0.450 or later), we see that it now prints ThreadPool statistics. See IOCP and WORKER details below.
-
-```
-System.TimeoutException: Timeout performing GET MyKey, inst: 2, mgr: Inactive,
-queue: 6, qu: 0, qs: 6, qc: 0, wr: 0, wq: 0, in: 0, ar: 0,
-IOCP: (Busy=6,Free=994,Min=4,Max=1000),
-WORKER: (Busy=3,Free=997,Min=4,Max=1000)
-```
-
-As shown in the example, the IOCP thread type has six busy threads and the system is configured to allow a minimum of four threads. In this case, the client would likely have seen two 500-ms delays, because 6 > 4.
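To check whether your own process is approaching this throttling threshold, you can compare busy thread counts with the configured minimums using the standard `System.Threading` APIs. A minimal diagnostic sketch (not part of the original article):

```csharp
using System;
using System.Threading;

public static class ThreadPoolDiagnostics
{
    public static void LogStats()
    {
        ThreadPool.GetMinThreads(out int minWorker, out int minIocp);
        ThreadPool.GetMaxThreads(out int maxWorker, out int maxIocp);
        ThreadPool.GetAvailableThreads(out int freeWorker, out int freeIocp);

        // Busy = Max - Available; thread injection is throttled once Busy exceeds Min.
        Console.WriteLine($"WORKER: Busy={maxWorker - freeWorker}, Free={freeWorker}, Min={minWorker}, Max={maxWorker}");
        Console.WriteLine($"IOCP:   Busy={maxIocp - freeIocp}, Free={freeIocp}, Min={minIocp}, Max={maxIocp}");
    }
}
```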
-
-> [!NOTE]
-> StackExchange.Redis can hit timeouts if growth of either IOCP or WORKER threads gets throttled.
-
-#### Recommendation
-
-Given this information, we strongly recommend that customers set the minimum configuration value for IOCP and WORKER threads to something larger than the default value. We can't give one-size-fits-all guidance on what this value should be because the right value for one application will likely be too high or too low for another application. This setting can also affect the performance of other parts of complicated applications, so each customer needs to fine-tune this setting to their specific needs. A good starting place is 200 or 300, then test and tweak as needed.
-
-How to configure this setting:
-
-* We recommend changing this setting programmatically by using the [ThreadPool.SetMinThreads (...)](/dotnet/api/system.threading.threadpool.setminthreads#System_Threading_ThreadPool_SetMinThreads_System_Int32_System_Int32_) method in `global.asax.cs`. For example:
-
- ```csharp
- private readonly int minThreads = 200;
- void Application_Start(object sender, EventArgs e)
- {
- // Code that runs on application startup
- AreaRegistration.RegisterAllAreas();
- RouteConfig.RegisterRoutes(RouteTable.Routes);
- BundleConfig.RegisterBundles(BundleTable.Bundles);
- ThreadPool.SetMinThreads(minThreads, minThreads);
- }
- ```
-
- > [!NOTE]
- > The value specified by this method is a global setting, affecting the whole AppDomain. For example, if you have a 4-core machine and want to set *minWorkerThreads* and *minIoThreads* to 50 per CPU during run-time, use **ThreadPool.SetMinThreads(200, 200)**.
-
-* It is also possible to specify the minimum threads setting by using the [*minIoThreads* or *minWorkerThreads* configuration setting](/previous-versions/dotnet/netframework-4.0/7w2sway1(v=vs.100)) under the `<processModel>` configuration element in `Machine.config`. `Machine.config` is typically located at `%SystemRoot%\Microsoft.NET\Framework\[versionNumber]\CONFIG\`. **Setting the number of minimum threads in this way isn't recommended because it's a System-wide setting.**
-
- > [!NOTE]
- > The value specified in this configuration element is a *per-core* setting. For example, if you have a 4-core machine and want your *minIoThreads* setting to be 200 at runtime, you would use `<processModel minIoThreads="50"/>`.
- >
-
-### Enable server GC to get more throughput on the client when using StackExchange.Redis
-
-Enabling server GC can optimize the client and provide better performance and throughput when using StackExchange.Redis. For more information on server GC and how to enable it, see the following articles (a configuration sketch follows the list):
-
-* [To enable server GC](/dotnet/framework/configure-apps/file-schema/runtime/gcserver-element)
-* [Fundamentals of Garbage Collection](/dotnet/standard/garbage-collection/fundamentals)
-* [Garbage Collection and Performance](/dotnet/standard/garbage-collection/performance)
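For .NET Framework applications, server GC is typically enabled in the application's configuration file. A minimal sketch of the `<gcServer>` element described in the first article above:

```xml
<configuration>
  <runtime>
    <!-- Use the server garbage collector, which favors throughput on multi-core machines -->
    <gcServer enabled="true"/>
  </runtime>
</configuration>
```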
-
-### Performance considerations around connections
-
-Each pricing tier has different limits for client connections, memory, and bandwidth. While each size of cache allows *up to* some number of connections, each connection to Redis has overhead associated with it. An example of such overhead would be CPU and memory usage because of TLS/SSL encryption. The maximum connection limit for a given cache size assumes a lightly loaded cache. If load from connection overhead *plus* load from client operations exceeds capacity for the system, the cache can experience capacity issues even if you haven't exceeded the connection limit for the current cache size.
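Because each connection carries this overhead, a common client-side mitigation is to share one ConnectionMultiplexer across the application instead of creating a connection per request. A minimal sketch, with a placeholder connection string:

```csharp
using System;
using StackExchange.Redis;

public static class RedisConnection
{
    // Lazy<T> gives a single, thread-safe connection shared by the whole application.
    private static readonly Lazy<ConnectionMultiplexer> lazyConnection =
        new Lazy<ConnectionMultiplexer>(() => ConnectionMultiplexer.Connect(
            "yourcache.redis.cache.windows.net:6380,password=yourAccessKey,ssl=True,abortConnect=False"));

    public static ConnectionMultiplexer Connection => lazyConnection.Value;
}
```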
-
-For more information about the different connections limits for each tier, see [Azure Cache for Redis pricing](https://azure.microsoft.com/pricing/details/cache/). For more information about connections and other default configurations, see [Default Redis server configuration](cache-configure.md#default-redis-server-configuration).
-
-## Next steps
-
-Learn about other [Azure Cache for Redis FAQs](cache-faq.yml).
azure-cache-for-redis Cache Monitor Troubleshoot Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-monitor-troubleshoot-faq.md
- Title: Azure Cache for Redis monitoring and troubleshooting FAQs
-description: Learn the answers to common questions that help you monitor and troubleshoot Azure Cache for Redis
---- Previously updated : 08/06/2020-
-# Azure Cache for Redis monitoring and troubleshooting FAQs
-
-This article provides answers to common questions about how to monitor and troubleshoot Azure Cache for Redis.
-
-## Common questions and answers
-
-This section covers the following FAQs:
-
-* [How do I monitor the health and performance of my cache?](#how-do-i-monitor-the-health-and-performance-of-my-cache)
-* [Why am I seeing timeouts?](#why-am-i-seeing-timeouts)
-* [Why was my client disconnected from the cache?](#why-was-my-client-disconnected-from-the-cache)
-
-### How do I monitor the health and performance of my cache?
-
-Microsoft Azure Cache for Redis instances can be monitored in the [Azure portal](https://portal.azure.com). You can view metrics, pin metrics charts to the Startboard, customize the date and time range of monitoring charts, add and remove metrics from the charts, and set alerts when certain conditions are met. For more information, see [Monitor Azure Cache for Redis](cache-how-to-monitor.md).
-
-The Azure Cache for Redis **Resource menu** also contains several tools for monitoring and troubleshooting your caches.
-
-* **Diagnose and solve problems** provides information about common issues and strategies for resolving them.
-* **Resource health** watches your resource and tells you if it's running as expected. For more information about the Azure Resource health service, see [Azure Resource health overview](../service-health/resource-health-overview.md).
-* **New support request** provides options to open a support request for your cache.
-
-These tools enable you to monitor the health of your Azure Cache for Redis instances. The tools also help you manage your caching applications. For more information, see the "Support & troubleshooting settings" section of [How to configure Azure Cache for Redis](cache-configure.md).
-
-### Why am I seeing timeouts?
-
-Timeouts happen in the client that you use to talk to Redis. When a command is sent to the Redis server, the command is queued up. The Redis server eventually picks up the command and executes it. However, the client can time out during this process. If it does, an exception is raised on the calling side. For more information on troubleshooting timeout issues, see [client-side troubleshooting](cache-troubleshoot-client.md) and [StackExchange.Redis timeout exceptions](cache-troubleshoot-timeouts.md#stackexchangeredis-timeout-exceptions).
-
-### Why was my client disconnected from the cache?
-
-The following are some common reasons for a cache disconnect.
-
-* Client-side causes
- * The client application was redeployed.
- * The client application did a scaling operation.
- * Cloud Services or Web Apps might cause a cache disconnect during autoscaling.
- * The networking layer on the client side changed.
- * Transient errors occurred in the client or in the network nodes between the client and the server.
- * The bandwidth threshold limits were reached.
- * CPU bound operations took too long to complete.
-* Server-side causes
- * On the standard cache offering, the Azure Cache for Redis service started a fail-over from the primary node to the replica node.
- * Azure was patching the instance where the cache was deployed during a Redis server update or general VM maintenance.
-
-## Next steps
-
-For more information about monitoring and troubleshooting your Azure Cache for Redis instances, see [How to monitor Azure Cache for Redis](cache-how-to-monitor.md) and the various troubleshoot guides.
-
-Learn about other [Azure Cache for Redis FAQs](cache-faq.yml).
azure-cache-for-redis Cache Troubleshoot Client https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-troubleshoot-client.md
In the preceding exception, there are several issues that are interesting:
- Notice that in the `IOCP` section and the `WORKER` section you have a `Busy` value that is greater than the `Min` value. This difference means your `ThreadPool` settings need adjusting. - You can also see `in: 64221`. This value indicates that 64,221 bytes have been received at the client's kernel socket layer but haven't been read by the application. This difference typically means that your application (for example, StackExchange.Redis) isn't reading data from the network as quickly as the server is sending it to you.
-You can [configure your `ThreadPool` Settings](cache-management-faq.md#important-details-about-threadpool-growth) to make sure that your thread pool scales up quickly under burst scenarios.
+You can [configure your `ThreadPool` Settings](cache-management-faq.yml#important-details-about-threadpool-growth) to make sure that your thread pool scales up quickly under burst scenarios.
## High client CPU usage
Resolutions for large response sizes are varied but include:
## Additional information - [Troubleshoot Azure Cache for Redis server-side issues](cache-troubleshoot-server.md)-- [How can I benchmark and test the performance of my cache?](cache-management-faq.md#how-can-i-benchmark-and-test-the-performance-of-my-cache)
+- [How can I benchmark and test the performance of my cache?](cache-management-faq.yml#how-can-i-benchmark-and-test-the-performance-of-my-cache-)
azure-cache-for-redis Cache Troubleshoot Data Loss https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-troubleshoot-data-loss.md
Consider using [Redis data persistence](https://redis.io/topics/persistence) and
- [Troubleshoot Azure Cache for Redis server-side issues](cache-troubleshoot-server.md) - [Choosing the right tier](cache-overview.md#choosing-the-right-tier) - [How to monitor Azure Cache for Redis](cache-how-to-monitor.md)-- [How can I run Redis commands?](cache-development-faq.md#how-can-i-run-redis-commands)
+- [How can I run Redis commands?](cache-development-faq.yml#how-can-i-run-redis-commands-)
azure-cache-for-redis Cache Troubleshoot Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-troubleshoot-server.md
To mitigate situations where network bandwidth usage is close to maximum capacit
- [Troubleshoot Azure Cache for Redis client-side issues](cache-troubleshoot-client.md) - [Choosing the right tier](cache-overview.md#choosing-the-right-tier)-- [How can I benchmark and test the performance of my cache?](cache-management-faq.md#how-can-i-benchmark-and-test-the-performance-of-my-cache)
+- [How can I benchmark and test the performance of my cache?](cache-management-faq.yml#how-can-i-benchmark-and-test-the-performance-of-my-cache-)
- [How to monitor Azure Cache for Redis](cache-how-to-monitor.md)-- [How can I run Redis commands?](cache-development-faq.md#how-can-i-run-redis-commands)
+- [How can I run Redis commands?](cache-development-faq.yml#how-can-i-run-redis-commands-)
azure-cache-for-redis Cache Troubleshoot Timeouts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-troubleshoot-timeouts.md
This error message contains metrics that can help point you to the cause and pos
| wr |There's an active writer (meaning the 6 unsent requests aren't being ignored) bytes/activewriters | | in |There are no active readers and zero bytes are available to be read on the NIC bytes/activereaders |
-In the preceding exception example, the `IOCP` and `WORKER` sections each include a `Busy` value that is greater than the `Min` value. The difference means that you should adjust your `ThreadPool` settings. You can [configure your ThreadPool settings](cache-management-faq.md#important-details-about-threadpool-growth) to ensure that your thread pool scales up quickly under burst scenarios.
+In the preceding exception example, the `IOCP` and `WORKER` sections each include a `Busy` value that is greater than the `Min` value. The difference means that you should adjust your `ThreadPool` settings. You can [configure your ThreadPool settings](cache-management-faq.yml#important-details-about-threadpool-growth) to ensure that your thread pool scales up quickly under burst scenarios.
You can use the following steps to investigate possible root causes.
You can use the following steps to investigate possible root causes.
- [Troubleshoot Azure Cache for Redis client-side issues](cache-troubleshoot-client.md) - [Troubleshoot Azure Cache for Redis server-side issues](cache-troubleshoot-server.md)-- [How can I benchmark and test the performance of my cache?](cache-management-faq.md#how-can-i-benchmark-and-test-the-performance-of-my-cache)
+- [How can I benchmark and test the performance of my cache?](cache-management-faq.yml#how-can-i-benchmark-and-test-the-performance-of-my-cache-)
- [How to monitor Azure Cache for Redis](cache-how-to-monitor.md)
azure-functions Functions Bindings Event Grid Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-event-grid-trigger.md
For information on setup and configuration details, see the [overview](./functio
For an HTTP trigger example, see [Receive events to an HTTP endpoint](../event-grid/receive-events.md).
-### C# (2.x and higher)
+### Version 3.x
-The following example shows a [C# function](functions-dotnet-class-library.md) that binds to `EventGridEvent`:
+The following example shows a Functions 3.x [C# function](functions-dotnet-class-library.md) that binds to a `CloudEvent`:
```cs
-using Microsoft.Azure.EventGrid.Models;
+using Azure.Messaging;
using Microsoft.Azure.WebJobs; using Microsoft.Azure.WebJobs.Extensions.EventGrid;
-using Microsoft.Azure.WebJobs.Host;
using Microsoft.Extensions.Logging; namespace Company.Function {
- public static class EventGridTriggerCSharp
+ public static class CloudEventTriggerFunction
{
- [FunctionName("EventGridTest")]
- public static void EventGridTest([EventGridTrigger]EventGridEvent eventGridEvent, ILogger log)
+ [FunctionName("CloudEventTriggerFunction")]
+ public static void Run(
+ ILogger logger,
+ [EventGridTrigger] CloudEvent e)
{
- log.LogInformation(eventGridEvent.Data.ToString());
+ logger.LogInformation("Event received {type} {subject}", e.Type, e.Subject);
} } } ```
-For more information, see Packages, [Attributes](#attributes-and-annotations), [Configuration](#configuration), and [Usage](#usage).
-
-### Version 1.x
-
-The following example shows a Functions 1.x [C# function](functions-dotnet-class-library.md) that binds to `JObject`:
+The following example shows a Functions 3.x [C# function](functions-dotnet-class-library.md) that binds to an `EventGridEvent`:
```cs using Microsoft.Azure.WebJobs; using Microsoft.Azure.WebJobs.Extensions.EventGrid;
-using Microsoft.Azure.WebJobs.Host;
-using Newtonsoft.Json;
-using Newtonsoft.Json.Linq;
+using Azure.Messaging.EventGrid;
using Microsoft.Extensions.Logging; namespace Company.Function {
- public static class EventGridTriggerCSharp
+ public static class EventGridEventTriggerFunction
{
- [FunctionName("EventGridTriggerCSharp")]
- public static void Run([EventGridTrigger]JObject eventGridEvent, ILogger log)
+ [FunctionName("EventGridEventTriggerFunction")]
+ public static void Run(
+ ILogger logger,
+ [EventGridTrigger] EventGridEvent e)
{
- log.LogInformation(eventGridEvent.ToString(Formatting.Indented));
+ logger.LogInformation("Event received {type} {subject}", e.EventType, e.Subject);
} } } ```
-### Version 3.x (preview)
+### C# (2.x and higher)
-The following example shows a Functions 3.x [C# function](functions-dotnet-class-library.md) that binds to a `CloudEvent`:
+The following example shows a [C# function](functions-dotnet-class-library.md) that binds to `EventGridEvent`:
```cs
-using Azure.Messaging;
+using System;
using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Host;
+using Microsoft.Azure.EventGrid.Models;
using Microsoft.Azure.WebJobs.Extensions.EventGrid; using Microsoft.Extensions.Logging;
-namespace Azure.Extensions.WebJobs.Sample
+namespace Company.Function
{
- public static class CloudEventTriggerFunction
+ public static class EventGridTriggerDemo
{
- [FunctionName("CloudEventTriggerFunction")]
- public static void Run(
- ILogger logger,
- [EventGridTrigger] CloudEvent e)
+ [FunctionName("EventGridTriggerDemo")]
+ public static void Run([EventGridTrigger]EventGridEvent eventGridEvent, ILogger log)
{
- logger.LogInformation("Event received {type} {subject}", e.Type, e.Subject);
+ log.LogInformation(eventGridEvent.Data.ToString());
} } } ```
-The following example shows a Functions 3.x [C# function](functions-dotnet-class-library.md) that binds to an `EventGridEvent`:
+For more information, see Packages, [Attributes](#attributes-and-annotations), [Configuration](#configuration), and [Usage](#usage).
+
+### Version 1.x
+
+The following example shows a Functions 1.x [C# function](functions-dotnet-class-library.md) that binds to `JObject`:
```cs using Microsoft.Azure.WebJobs; using Microsoft.Azure.WebJobs.Extensions.EventGrid;
-using Azure.Messaging.EventGrid;
+using Microsoft.Azure.WebJobs.Host;
+using Newtonsoft.Json;
+using Newtonsoft.Json.Linq;
using Microsoft.Extensions.Logging;
-namespace Azure.Extensions.WebJobs.Sample
+namespace Company.Function
{
- public static class EventGridEventTriggerFunction
+ public static class EventGridTriggerCSharp
{
- [FunctionName("EventGridEventTriggerFunction")]
- public static void Run(
- ILogger logger,
- [EventGridTrigger] EventGridEvent e)
+ [FunctionName("EventGridTriggerCSharp")]
+ public static void Run([EventGridTrigger]JObject eventGridEvent, ILogger log)
{
- logger.LogInformation("Event received {type} {subject}", e.EventType, e.Subject);
+ log.LogInformation(eventGridEvent.ToString(Formatting.Indented));
} } }
azure-functions Functions Bindings Http Webhook Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-http-webhook-trigger.md
In non-C# functions, requests sent with the content-type `image/jpeg` results in
The HTTP request length is limited to 100 MB (104,857,600 bytes), and the URL length is limited to 4 KB (4,096 bytes). These limits are specified by the `httpRuntime` element of the runtime's [Web.config file](https://github.com/Azure/azure-functions-host/blob/v3.x/src/WebJobs.Script.WebHost/web.config).
-If a function that uses the HTTP trigger doesn't complete within 230 seconds, the [Azure Load Balancer](../app-service/faq-availability-performance-application-issues.md#why-does-my-request-time-out-after-230-seconds) will time out and return an HTTP 502 error. The function will continue running but will be unable to return an HTTP response. For long-running functions, we recommend that you follow async patterns and return a location where you can ping the status of the request. For information about how long a function can run, see [Scale and hosting - Consumption plan](functions-scale.md#timeout).
+If a function that uses the HTTP trigger doesn't complete within 230 seconds, the [Azure Load Balancer](../app-service/faq-availability-performance-application-issues.yml#why-does-my-request-time-out-after-230-seconds-) will time out and return an HTTP 502 error. The function will continue running but will be unable to return an HTTP response. For long-running functions, we recommend that you follow async patterns and return a location where you can ping the status of the request. For information about how long a function can run, see [Scale and hosting - Consumption plan](functions-scale.md#timeout).
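For example, a hedged sketch of that async pattern in C# (the queue name and status route below are hypothetical): the HTTP-triggered function hands the work to a queue-triggered function and immediately returns `202 Accepted` with a status URL the caller can poll. Durable Functions also provides this pattern out of the box.

```csharp
using System;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class StartLongRunningWork
{
    [FunctionName("StartLongRunningWork")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        [Queue("work-items")] out string workItem) // a queue-triggered function does the real work
    {
        string id = Guid.NewGuid().ToString();
        workItem = id;

        // Respond immediately with 202 and a (hypothetical) status endpoint to poll.
        return new AcceptedResult($"/api/status/{id}", new { id });
    }
}
```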
## Next steps
azure-functions Functions Create Cosmos Db Triggered Function https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-cosmos-db-triggered-function.md
Learn how to create a function triggered when data is added to or changed in Azure Cosmos DB. To learn more about Azure Cosmos DB, see [Azure Cosmos DB: Serverless database computing using Azure Functions](../cosmos-db/serverless-computing-database.md). - ## Prerequisites To complete this tutorial:
azure-functions Functions Monitor Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-monitor-log-analytics.md
Azure Monitor Logs gives you the ability to consolidate logs from different reso
Azure Monitor uses a version of the [Kusto query language](/azure/kusto/query/) used by Azure Data Explorer that is suitable for simple log queries but also includes advanced functionality such as aggregations, joins, and smart analytics. You can quickly learn the query language using [multiple lessons](../azure-monitor/logs/get-started-queries.md). > [!NOTE]
-> Integration with Azure Monitor Logs is currently in public preview for v2 and v3 function apps running on Windows Consumption, Premium, and Dedicated hosting plans.
+> Integration with Azure Monitor Logs is currently in public preview. Not supported for function apps running on [version 1.x](functions-versions.md).
## Setting up
azure-functions Functions Reference Node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference-node.md
The `tagOverrides` parameter sets the `operation_Id` to the function's invocatio
## HTTP triggers and bindings
-HTTP and webhook triggers and HTTP output bindings use request and response objects to represent the HTTP messaging.
+HTTP and webhook triggers and HTTP output bindings use request and response objects to represent the HTTP messaging.
### Request object
When you work with HTTP triggers, you can access the HTTP request and response o
context.done(null, res); ```
+Note that request and response keys are in lowercase.
+ ## Scaling and concurrency By default, Azure Functions automatically monitors the load on your application and creates additional host instances for Node.js as needed. Functions uses built-in (not user configurable) thresholds for different trigger types to decide when to add instances, such as the age of messages and queue size for QueueTrigger. For more information, see [How the Consumption and Premium plans work](event-driven-scaling.md).
azure-functions Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference.md
Identity-based connections are supported by the following trigger and binding ex
The storage connections used by the Functions runtime (`AzureWebJobsStorage`) may also be configured using an identity-based connection. See [Connecting to host storage with an identity](#connecting-to-host-storage-with-an-identity) below.
-When hosted in the Azure Functions service, identity-based connections use a [managed identity](../app-service/overview-managed-identity.md?toc=%2fazure%2fazure-functions%2ftoc.json). The system-assigned identity is used by default. When run in other contexts, such as local development, your developer identity is used instead, although this can be customized using alternative connection parameters.
+When hosted in the Azure Functions service, identity-based connections use a [managed identity](../app-service/overview-managed-identity.md?toc=%2fazure%2fazure-functions%2ftoc.json). The system-assigned identity is used by default, although a user-assigned identity can be specified with the `credential` and `clientID` properties. When run in other contexts, such as local development, your developer identity is used instead, although this can be customized using alternative connection parameters.
#### Grant permission to the identity
An identity-based connection for an Azure service accepts the following properti
||||| | Service URI | Azure Blob<sup>1</sup>, Azure Queue | `<CONNECTION_NAME_PREFIX>__serviceUri` | The data plane URI of the service to which you are connecting. | | Fully Qualified Namespace | Event Hubs, Service Bus | `<CONNECTION_NAME_PREFIX>__fullyQualifiedNamespace` | The fully qualified Event Hubs and Service Bus namespace. |
+| Token Credential | (Optional) | `<CONNECTION_NAME_PREFIX>__credential` | Defines how a token should be obtained for the connection. Recommended only when specifying a user-assigned identity, in which case it should be set to "managedidentity". This is only valid when hosted in the Azure Functions service. |
+| Client ID | (Optional) | `<CONNECTION_NAME_PREFIX>__clientId` | When `credential` is set to "managedidentity", this property specifies the user-assigned identity to be used when obtaining a token. The property accepts a client ID corresponding to a user-assigned identity assigned to the application. If not specified, the system-assigned identity will be used. This property is used differently in [local development scenarios](#local-development-with-identity-based-connections), where `credential` should not be set. |
<sup>1</sup> Both blob and queue service URIs are required for Azure Blob. Additional options may be supported for a given connection type. Please refer to the documentation for the component making the connection.
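For example, to point a hypothetical connection named `MyServiceBusConnection` at a user-assigned identity when hosted in Azure, the application settings might look like the following sketch (the namespace and client ID are placeholders):

```
MyServiceBusConnection__fullyQualifiedNamespace = mynamespace.servicebus.windows.net
MyServiceBusConnection__credential = managedidentity
MyServiceBusConnection__clientId = <client ID of the user-assigned identity>
```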
-##### Local development
+##### Local development with identity-based connections
When running locally, the above configuration tells the runtime to use your local developer identity. The connection will attempt to get a token from the following locations, in order:
azure-functions Run Functions From Deployment Package https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/run-functions-from-deployment-package.md
To enable your function app to run from a package, you just add a `WEBSITE_RUN_F
| Value | Description | ||| | **`1`** | Recommended for function apps running on Windows. Run from a package file in the `d:\home\data\SitePackages` folder of your function app. If not [deploying with zip deploy](#integration-with-zip-deployment), this option requires the folder to also have a file named `packagename.txt`. This file contains only the name of the package file in the folder, without any whitespace. |
-|**`<URL>`** | Location of a specific package file you want to run. When using Blob storage, you should use a private container with a [Shared Access Signature (SAS)](../vs-azure-tools-storage-manage-with-storage-explorer.md#generate-a-sas-in-storage-explorer) to enable the Functions runtime to access to the package. You can use the [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) to upload package files to your Blob storage account. When you specify a URL, you must also [sync triggers](functions-deployment-technologies.md#trigger-syncing) after you publish an updated package. |
+|**`<URL>`** | Location of a specific package file you want to run. When you specify a URL, you must also [sync triggers](functions-deployment-technologies.md#trigger-syncing) after you publish an updated package. <br/>When using Blob storage, you typically should not use a public blob. Instead, use a private container with a [Shared Access Signature (SAS)](../vs-azure-tools-storage-manage-with-storage-explorer.md#generate-a-sas-in-storage-explorer) or [use a managed identity](#fetch-a-package-from-azure-blob-storage-using-a-managed-identity) to enable the Functions runtime to access the package. You can use the [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) to upload package files to your Blob storage account. |
> [!CAUTION] > When running a function app on Windows, the external URL option yields worse cold-start performance. When deploying your function app to Windows, you should set `WEBSITE_RUN_FROM_PACKAGE` to `1` and publish with zip deployment.
The following shows a function app configured to run from a .zip file hosted in
> [!NOTE] > Currently, only .zip package files are supported.
+### Fetch a package from Azure Blob Storage using a managed identity
++ ## Integration with zip deployment [Zip deployment][Zip deployment for Azure Functions] is a feature of Azure App Service that lets you deploy your function app project to the `wwwroot` directory. The project is packaged as a .zip deployment file. The same APIs can be used to deploy your package to the `d:\home\data\SitePackages` folder. With the `WEBSITE_RUN_FROM_PACKAGE` app setting value of `1`, the zip deployment APIs copy your package to the `d:\home\data\SitePackages` folder instead of extracting the files to `d:\home\site\wwwroot`. It also creates the `packagename.txt` file. After a restart, the package is mounted to `wwwroot` as a read-only filesystem. For more information about zip deployment, see [Zip deployment for Azure Functions](deployment-zip-push.md).
azure-monitor Log Analytics Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/log-analytics-overview.md
Start Log Analytics from **Logs** in the **Azure Monitor** menu in the Azure por
[![Start Log Analytics](media/log-analytics-overview/start-log-analytics.png)](media/log-analytics-overview/start-log-analytics.png#lightbox)
-When you start Log Analytics, the first thing you'll see is a dialog box with [example queries](../logs/queries.md). These are categorized by solution, and you can browse or search for queries that match your particular requirements. You may be able to find a that does exactly what you need, or load one to the editor and modify it as required. Browsing through example queries is actually a great way to learn how to write your own queries.
+When you start Log Analytics, the first thing you'll see is a dialog box with [example queries](../logs/queries.md). These are categorized by solution, and you can browse or search for queries that match your particular requirements. You may be able to find one that does exactly what you need, or load one to the editor and modify it as required. Browsing through example queries is actually a great way to learn how to write your own queries.
Of course if you want to start with an empty script and write it yourself, you can close the example queries. Just click the **Queries** at the top of the screen if you want to get them back.
azure-monitor Log Analytics Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/log-analytics-tutorial.md
description: Learn from this tutorial how to use features of Log Analytics in Az
Previously updated : 10/07/2020 Last updated : 06/28/2021
Open the [Log Analytics demo environment](https://ms.portal.azure.com/#blade/Mic
You can view the scope in the top left corner of the screen. If you're using your own environment, you'll see an option to select a different scope, but this option isn't available in the demo environment.
-[![Query scope](media/log-analytics-tutorial/scope.png)](media/log-analytics-tutorial/scope.png#lightbox)
## Table schema
-The left side of the screen includes the **Tables** tab which allows you to inspect the tables that are available in the current scope. These are grouped by **Solution** by default, but you change their grouping or filter them.
+The left side of the screen includes the **Tables** tab which allows you to inspect the tables that are available in the current scope. These are grouped by **Solution** by default, but you can change their grouping or filter them.
-Expand the **Log Management** solution and locate the **AzureActivity** table. You can expand the table to view its schema, or hover over its name to show additional information about it.
+Expand the **Log Management** solution and locate the **AppRequests** table. You can expand the table to view its schema, or hover over its name to show additional information about it.
-[![Tables view](media/log-analytics-tutorial/table-details.png)](media/log-analytics-tutorial/table-details.png#lightbox)
Click **Learn more** to go to the table reference that documents each table and its columns. Click **Preview data** to have a quick look at a few recent records in the table. This can be useful to ensure that this is the data that you're expecting before you actually run a query with it.
-[![Sample data](media/log-analytics-tutorial/sample-data.png)](media/log-analytics-tutorial/sample-data.png#lightbox)
## Write a query
-Let's go ahead and write a query using the **AzureActivity** table. Double-click its name to add it to the query window. You can also type directly in the window and even get intellisense that will help complete the names of tables in the current scope and KQL commands.
+Let's go ahead and write a query using the **AppRequests** table. Double-click its name to add it to the query window. You can also type directly in the window and even get intellisense that will help complete the names of tables in the current scope and KQL commands.
This is the simplest query that we can write. It just returns all the records in a table. Run it by clicking the **Run** button or by pressing Shift+Enter with the cursor positioned anywhere in the query text.
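With the **AppRequests** table added, the query at this point is just the table name:

```kusto
AppRequests
```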
-[![Query results](media/log-analytics-tutorial/query-results.png)](media/log-analytics-tutorial/query-results.png#lightbox)
You can see that we do have results. The number of records returned by the query is displayed in the bottom right corner. ## Filter
-Let's add a filter to the query to reduce the number of records that are returned. Select the **Filter** tab in the left pane. This shows different columns in the query results that you can use to filter the results. The top values in those columns are displayed with the number of records with that value. Click on **Administrative** under **CategoryValue** and then **Apply & Run**.
+Let's add a filter to the query to reduce the number of records that are returned. Select the **Filter** tab in the left pane. This shows different columns in the query results that you can use to filter the results. The top values in those columns are displayed with the number of records with that value. Click on **200** under **ResultCode** and then **Apply & Run**.
-[![Query pane](media/log-analytics-tutorial/query-pane.png)](media/log-analytics-tutorial/query-pane.png#lightbox)
A **where** statement is added to the query with the value you selected. The results now include only those records with that value so you can see that the record count is reduced.
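Assuming the filter selected above, the generated query would look something like this sketch:

```kusto
AppRequests
| where ResultCode == "200"
```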
-[![Query results filtered](media/log-analytics-tutorial/query-results-filter-01.png)](media/log-analytics-tutorial/query-results-filter-01.png#lightbox)
## Time range All tables in a Log Analytics workspace have a column called **TimeGenerated** which is the time that the record was created. All queries have a time range that limits the results to records with a **TimeGenerated** value within that range. The time range can either be set in the query or with the selector at the top of the screen.
-By default, the query will return records form the last 24 hours. Select the **Time range** dropdown and change it to **7 days**. Click **Run** again to return the results. You can see that results are returned, but we have a message here that we're not seeing all of the results. This is because Log Analytics can return a maximum of 30,000 records, and our query returned more records than that.
+By default, the query will return records from the last 24 hours. You should see a message here that we're not seeing all of the results. This is because Log Analytics can return a maximum of 30,000 records, and our query returned more records than that. Select the **Time range** dropdown and change it to **12 hours**. Click **Run** again to return the results.
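The selector applies the range server-side, but you can also express it in the query itself with a **where** clause on **TimeGenerated** - for example, a sketch equivalent to the 12-hour selection:

```kusto
AppRequests
| where TimeGenerated > ago(12h)
| where ResultCode == "200"
```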
-[![Time range](media/log-analytics-tutorial/query-results-max.png)](media/log-analytics-tutorial/query-results-max.png#lightbox)
## Multiple query conditions
-Let's reduce our results further by adding another filter condition. A query can include any number of filters to target exactly the set of records that you want. Select **Success** under **ActivityStatusValue** and click **Apply & Run**.
+Let's reduce our results further by adding another filter condition. A query can include any number of filters to target exactly the set of records that you want. Select **Get Home/Index** under **Name** and click **Apply & Run**.
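With both filter conditions applied, the query would now resemble the following sketch:

```kusto
AppRequests
| where ResultCode == "200"
| where Name == "Get Home/Index"
```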
-[![Query results multiple filters](media/log-analytics-tutorial/query-results-filter-02.png)](media/log-analytics-tutorial/query-results-filter-02.png#lightbox)
## Analyze results In addition to helping you write and run queries, Log Analytics provides features for working with the results. Start by expanding a record to view the values for all of its columns.
-[![Expand record](media/log-analytics-tutorial/expand-record.png)](media/log-analytics-tutorial/expand-record.png#lightbox)
Click on the name of any column to sort the results by that column. Click on the filter icon next to it to provide a filter condition. This is similar to adding a filter condition to the query itself except that this filter is cleared if the query is run again. Use this method if you want to quickly analyze a set of records as part of interactive analysis.
-For example, set a filter on the **CallerIpAddress** column to limit the records to a single caller.
+For example, set a filter on the **DurationMs** column to limit the records to those that took over **100** milliseconds.
-[![Query results filter](media/log-analytics-tutorial/query-results-filter.png)](media/log-analytics-tutorial/query-results-filter.png#lightbox)
Instead of filtering the results, you can group records by a particular column. Clear the filter that you just created and then turn on the **Group columns** slider.
-[![Group columns](media/log-analytics-tutorial/query-results-group-columns.png)](media/log-analytics-tutorial/query-results-group-columns.png#lightbox)
-Now drag the **CallerIpAddress** column into the grouping row. Results are now organized by that column, and you can collapse each group to help you with your analysis.
+Now drag the **Url** column into the grouping row. Results are now organized by that column, and you can collapse each group to help you with your analysis.
-[![Query results grouped](media/log-analytics-tutorial/query-results-grouped.png)](media/log-analytics-tutorial/query-results-grouped.png#lightbox)
## Work with charts Let's have a look at a query that uses numerical data that we can view in a chart. Instead of building a query, we'll select an example query. Click on **Queries** in the left pane. This pane includes example queries that you can add to the query window. If you're using your own workspace, you should have a variety of queries in multiple categories, but if you're using the demo environment, you may only see a single **Log Analytics workspaces** category. Expand that to view the queries in the category.
-Click on the query called **Request Count by ResponseCode**. This will add the query to the query window. Notice that the new query is separated from the other by a blank line. A query in KQL ends when it encounters a blank line, so these are seen as separate queries.
+Click on the query called **Function Error rate** in the **Applications** category. This will add the query to the query window. Notice that the new query is separated from the other by a blank line. A query in KQL ends when it encounters a blank line, so these are seen as separate queries.
-[![New query](media/log-analytics-tutorial/example-query.png)](media/log-analytics-tutorial/example-query.png#lightbox)
The current query is the one that the cursor is positioned on. You can see that the first query is highlighted indicating it's the current query. Click anywhere in the new query to select it and then click the **Run** button to run it.
-[![Query results chart](media/log-analytics-tutorial/example-query-output-chart.png)](media/log-analytics-tutorial/example-query-output-chart.png#lightbox)
-Notice that this output is a chart instead of a table like the last query. That's because the example query uses a [render](/azure/data-explorer/kusto/query/renderoperator?pivots=azuremonitor) command at the end. Notice that there are various options for working with the chart such as changing it to another type.
+To view the results in a graph, select **Chart** in the results pane. Notice that there are various options for working with the chart such as changing it to another type.
-Try selecting **Results** to view the output of the query as a table.
-
-[![Query results table](media/log-analytics-tutorial/example-query-output-table.png)](media/log-analytics-tutorial/example-query-output-table.png#lightbox)
## Next steps
azure-monitor Log Analytics Workspace Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/log-analytics-workspace-insights-overview.md
In our demo workspace, you can clearly see that 3 Kubernetes clusters send far m
### Health tab
-This tab shows the workspace health state and when it was last reported, as well as operational [errors and warnings](./monitor-workspace.md) (retrieved from the _LogOperation table).
+This tab shows the workspace health state and when it was last reported, as well as operational [errors and warnings](../logs/monitor-workspace.md) (retrieved from the _LogOperation table).
:::image type="content" source="media/log-analytics-workspace-insights-overview/workspace-health.png" alt-text="Screenshot of the workspace health tab" lightbox="media/log-analytics-workspace-insights-overview/workspace-health.png":::
This tab provides information on the agents sending logs to this workspace.
:::image type="content" source="media/log-analytics-workspace-insights-overview/workspace-agents.png" alt-text="Screenshot of the workspace agents tab" lightbox="media/log-analytics-workspace-insights-overview/workspace-agents.png"::: * Operation errors and warnings - these are errors and warnings related specifically to agents. They are grouped by the error/warning title to help you get a clearer view of different issues that may occur, but can be expanded to show the exact times and resources they refer to. Also note you can click 'Run query in Logs' to query the _LogOperation table through the Logs experience, see the raw data and analyze it further.
-* Workspace agents - these are the agents that sent logs to the workspace during the selected time range. You can see the agents' types (Direct, Gateway, SCOM or SCOM management servers) and health state. Agents marked healthy aren't necessarily working well - it only indicated they sent a heartbeat during the last hour. A more detailed health state is detailed in the below grid.
+* Workspace agents - these are the agents that sent logs to the workspace during the selected time range. You can see the agents' types and health state. Agents marked healthy aren't necessarily working well - it only indicates they sent a heartbeat during the last hour. A more detailed health state is shown in the grid below.
* Agents activity - this grid shows information on all agents, or on only healthy or unhealthy agents. Here too "Healthy" only indicates the agent sent a heartbeat during the last hour. To understand its state better, review the trend shown in the grid - it shows how many heartbeats this agent sent over time. The true health state can only be inferred if you know how the monitored resource operates, for example - if a computer is intentionally shut down at particular times, you can expect the agent's heartbeats to appear intermittently, in a matching pattern.
azure-monitor Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/private-link-security.md
Azure Monitor Private Link Scope (AMPLS) connects private endpoints (and the VNe
> [!NOTE] > A single Azure Monitor resource can belong to multiple AMPLSs, but you cannot connect a single VNet to more than one AMPLS.
-## Planning your Private Link setup
-
-Before setting up your Azure Monitor Private Link setup, consider your network topology, and specifically your DNS routing topology.
-
-### The issue of DNS overrides
-Some Azure Monitor services use global endpoints, meaning they serve requests targeting any workspace/component. A couple of examples are the Application Insights ingestion endpoint, and the query endpoint of both Application Insights and Log Analytics.
+### Azure Monitor Private Links and your DNS: It's All or Nothing
+Some Azure Monitor services use global endpoints, meaning they serve requests targeting any workspace/component. When you set up a Private Link connection, your DNS is updated to map Azure Monitor endpoints to private IPs, in order to send traffic through the Private Link. When it comes to global endpoints, setting up a Private Link (even to a single resource) affects traffic to all resources. In other words, it's impossible to create a Private Link connection only for a specific component or workspace.
-When you set up a Private Link connection, your DNS is updated to map Azure Monitor endpoints to private IP addresses from your VNet's IP range. This change overrides any previous mapping of these endpoints, which can have meaningful implications, reviewed below.
+#### Global endpoints
+Most importantly, traffic to the below global endpoints will be sent through the Private Link:
+* All Application Insights endpoints - the endpoints handling ingestion, live metrics, the profiler, the debugger, and so on, are global.
+* The Query endpoint - the endpoint handling queries to both Application Insights and Log Analytics resources is global.
-### Azure Monitor Private Link applies to all Azure Monitor resources - it's All or Nothing
-Since some Azure Monitor endpoints are global, it's impossible to create a Private Link connection for a specific component or workspace. Instead, when you set up a Private Link to a single Application Insights component or Log Analytics workspace, your DNS records are updated for **all** Application Insights components. Any attempt to ingest or query a component will go through the Private Link, and possibly fail. With regard to Log Analytics, ingestion and configuration endpoints are workspace-specific, meaning the Private-link setup will only apply for the specified workspaces. Ingestion and configuration of other workspaces will be directed to the default public Log Analytics endpoints.
+That effectively means that all Application Insights traffic will be sent through the Private Link, and that all queries - to both Application Insights and Log Analytics resources - will be sent through the Private Link.
-![Diagram of DNS overrides in a single VNet](./media/private-link-security/dns-overrides-single-vnet.png)
+Traffic to Application Insights resources not added to your AMPLS won't pass the Private Link validation, and will fail.
-That's true not only for a specific VNet, but for all VNets that share the same DNS server (see [The issue of DNS overrides](#the-issue-of-dns-overrides)). So, for example, request to ingest logs to any Application Insights component will always be sent through the Private Link route. Components that aren't linked to the AMPLS will fail the Private Link validation and not go through.
+![Diagram of All or Nothing behavior](./media/private-link-security/all-or-nothing.png)
-> [!NOTE]
-> To conclude:
-> Once your setup a Private Link connection to a single resource, it applies to Azure Monitor resources across your network. For Application Insights resources, that's 'All or Nothing'. That effectively means you should add all Application Insights resources in your network to your AMPLS, or none of them.
->
-> To handle data exfiltration risks, our recommendation is to add all Application Insights and Log Analytics resources to your AMPLS, and block your networks egress traffic as much as possible.
+#### Resource-specific endpoints
+All Log Analytics endpoints, except the Query endpoint, are workspace-specific. So, creating a Private Link to a specific Log Analytics workspace won't affect ingestion (or other) traffic to other workspaces, which will continue to use the public Log Analytics endpoints. All queries, however, will be sent through the Private Link.
-### Azure Monitor Private Link applies to your entire network
-Some networks are composed of multiple VNets. If the VNets use the same DNS server, they will override each other's DNS mappings and possibly break each other's communication with Azure Monitor (see [The issue of DNS overrides](#the-issue-of-dns-overrides)). Ultimately, only the last VNet will be able to communicate with Azure Monitor, since the DNS will map Azure Monitor endpoints to private IPs from this VNet's range (which may not be reachable from other VNets).
+### Azure Monitor Private Link applies to all networks that share the same DNS
+Some networks are composed of multiple VNets or other connected networks. If these networks share the same DNS, setting up a Private Link on any of them would update the DNS and affect traffic across all networks. That's especially important to note due to the "All or Nothing" behavior described above.
![Diagram of DNS overrides in multiple VNets](./media/private-link-security/dns-overrides-multiple-vnets.png) In the above diagram, VNet 10.0.1.x first connects to AMPLS1 and maps the Azure Monitor global endpoints to IPs from its range. Later, VNet 10.0.2.x connects to AMPLS2, and overrides the DNS mapping of the *same global endpoints* with IPs from its range. Since these VNets aren't peered, the first VNet now fails to reach these endpoints. +
+## Planning your Private Link setup
+
+Before setting up your Azure Monitor Private Link setup, consider your network topology, and specifically your DNS routing topology.
+
+As discussed above, setting up a Private Link affects - in many ways - traffic to all Azure Monitor resources. That's especially true for Application Insights resources. Additionally, it affects not only the network connected to the Private Endpoint (and through it to the AMPLS resources) but also all other networks that share the same DNS.
+ > [!NOTE]
-> To conclude:
-> AMPLS setup affect all networks that share the same DNS zones. To avoid overriding each other's DNS endpoint mappings, it is best to setup a single Private Endpoint on a peered network (such as a Hub VNet), or separate the networks at the DNS level (foe example by using DNS forwarders or separate DNS servers entirely).
+> Given all that, the simplest and most secure approach would be:
+> 1. Create a single Private Link connection, with a single Private Endpoint and a single AMPLS. If your networks are peered, create the Private Link connection on the shared (or hub) VNet.
+> 2. Add *all* Azure Monitor resources (Application Insights components and Log Analytics workspaces) to that AMPLS.
+> 3. Block network egress traffic as much as possible.
+
+If for some reason you can't use a single Private Link and a single AMPLS, the next best thing would be to create isolated Private Link connections for isolated networks. If you use (or can align with) spoke VNets, follow the guidance in [Hub-spoke network topology in Azure](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke). Then, set up separate Private Link settings in the relevant spoke VNets. **Make sure to separate DNS zones as well**, since sharing DNS zones with other spoke networks will cause DNS overrides.
+ ### Hub-spoke networks Hub-spoke topologies can avoid the issue of DNS overrides by setting the Private Link on the hub (main) VNet, and not on each spoke VNet. This setup makes sense especially if the Azure Monitor resources used by the spoke VNets are shared.
Hub-spoke topologies can avoid the issue of DNS overrides by setting the Private
![Hub-and-spoke-single-PE](./media/private-link-security/hub-and-spoke-with-single-private-endpoint.png) > [!NOTE]
-> You may intentionally prefer to create separate Private Links for your spoke VNets, for example to allow each VNet to access a limited set of monitoring resources. In such cases, you can create a dedicated Private Endpoint and AMPLS for each VNet, but must also verify they don't share the same DNS zones in order to avoid DNS overrides.
+> You may intentionally prefer to create separate Private Links for your spoke VNets, for example to allow each VNet to access a limited set of monitoring resources. In such cases, you can create a dedicated Private Endpoint and AMPLS for each VNet, but **must also verify they don't share the same DNS zones in order to avoid DNS overrides**.
++
+### Peered networks
+Network peering is used in various topologies, other than hub-spoke. Peered networks can reach each other's IP addresses, and most likely share the same DNS. In such cases, our recommendation is similar to hub-spoke - select a single network that is reached by all other (relevant) networks and set the Private Link connection on that network. Avoid creating multiple Private Endpoints and AMPLS objects, since ultimately only the last one set in the DNS will apply.
++
+### Isolated networks
+If your networks aren't peered, **you must also separate their DNS in order to use Private Links**. Once that's done, you can create a Private Link for one network (or many), without affecting the traffic of other networks. That means creating a separate Private Endpoint for each network, and a separate AMPLS object. Your AMPLS objects can link to the same workspaces/components, or to different ones.
++
+### Test with a local bypass: Edit your machine's hosts file instead of the DNS
+As a local bypass to the All or Nothing behavior, you can select not to update your DNS with the Private Link records, and instead edit the hosts files on select machines so only these machines would send requests to the Private Link endpoints.
+* Set up a Private Link as shown below, but when [connecting to a Private Endpoint](#connect-to-a-private-endpoint) choose **not** to auto-integrate with the DNS (step 5b).
+* Configure the relevant endpoints on your machines' hosts files. To review the Azure Monitor endpoints that need mapping, see [Reviewing your Endpoint's DNS settings](#reviewing-your-endpoints-dns-settings).
+
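For example, a machine's hosts file might contain entries along these lines. The IP addresses and endpoints below are purely illustrative; take the actual FQDNs and private IPs from your Private Endpoint's DNS settings:

```
# Illustrative entries only - use the FQDNs and private IPs from your Private Endpoint's DNS configuration
10.0.1.5  <your-workspace-id>.ods.opinsights.azure.com
10.0.1.5  api.loganalytics.io
```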
+That approach isn't recommended for production environments.
### Consider limits
-As listed in [Restrictions and limitations](#restrictions-and-limitations), the AMPLS object has a number of limits, shown in the below topology:
-* Each VNet connects to only **1** AMPLS object.
-* AMPLS B is connected to Private Endpoints of two VNets (VNet2 and VNet3), using 2 of the 10 possible Private Endpoint connections.
+The AMPLS object has the following limits:
+* A VNet can only connect to **one** AMPLS object. That means the AMPLS object must provide access to all the Azure Monitor resources the VNet should have access to.
+* An AMPLS object can connect to 50 Azure Monitor resources at most.
+* An Azure Monitor resource (Workspace or Application Insights component) can connect to 5 AMPLSs at most.
+* An AMPLS object can connect to 10 Private Endpoints at most.
+
+In the below diagram:
+* Each VNet connects to only **one** AMPLS object.
* AMPLS A connects to two workspaces and one Application Insight component, using 3 of the 50 possible Azure Monitor resources connections.
-* Workspace2 connects to AMPLS A and AMPLS B, using 2 of the 5 possible AMPLS connections.
+* Workspace2 connects to AMPLS A and AMPLS B, using 2 of the 5 possible AMPLS connections.
+* AMPLS B is connected to Private Endpoints of two VNets (VNet2 and VNet3), using 2 of the 10 possible Private Endpoint connections.
![Diagram of AMPLS limits](./media/private-link-security/ampls-limits.png) +
+### Application Insights considerations
+* You'll need to add resources hosting the monitored workloads to a private link. For example, see [Using Private Endpoints for Azure Web App](../../app-service/networking/private-endpoint.md).
+* Non-portal consumption experiences must also run on the private-linked VNet that includes the monitored workloads.
+* To support Private Links for the Profiler and Debugger, you'll need to [provide your own storage account](../app/profiler-bring-your-own-storage.md).
+ > [!NOTE]
-> If you use Log Analytics solutions that require an Automation account, such as Update Management, Change Tracking or Inventory, you should also setup a separare Private Link for your Automation account. For more information, see [Use Azure Private Link to securely connect networks to Azure Automation](../../automation/how-to/private-link-security.md).
+> To fully secure workspace-based Application Insights, you need to lock down access to both the Application Insights resource and the underlying Log Analytics workspace.
+
+### Log Analytics considerations
+#### Automation
+If you use Log Analytics solutions that require an Automation account, such as Update Management, Change Tracking, or Inventory, you should also set up a separate Private Link for your Automation account. For more information, see [Use Azure Private Link to securely connect networks to Azure Automation](../../automation/how-to/private-link-security.md).
+
+#### Log Analytics solution packs download
+Log Analytics agents need to access a global storage account to download solution packs. Private Link setups created at or after April 19, 2021 (or starting June 2021 on Azure Sovereign clouds) can reach the agents' solution packs storage over the private link. This capability is made possible through the new DNS zone created for [blob.core.windows.net](#privatelink-blob-core-windows-net).
+
+If your Private Link setup was created before April 19, 2021, it won't reach the solution packs storage over a private link. To handle that, you can either:
+* Re-create your AMPLS and the Private Endpoint connected to it
+* Allow your agents to reach the storage account through its public endpoint, by adding the following rules to your firewall allowlist:
+
+ | Cloud environment | Agent Resource | Ports | Direction |
+ |:--|:--|:--|:--|
+ |Azure Public | scadvisorcontent.blob.core.windows.net | 443 | Outbound
+ |Azure Government | usbn1oicore.blob.core.usgovcloudapi.net | 443 | Outbound
+ |Azure China 21Vianet | mceast2oicore.blob.core.chinacloudapi.cn| 443 | Outbound
-## Example connection
+## Private Link connection setup
Start by creating an Azure Monitor Private Link Scope resource.
Now that you have resources connected to your AMPLS, create a private endpoint t
You've now created a new private endpoint that is connected to this AMPLS. +
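
A rough Azure CLI sketch of the same flow, with placeholder names. The `az monitor private-link-scope` commands and the `azuremonitor` group ID are assumptions based on the CLI at the time of writing; flag names may vary by CLI version.

```bash
# Create the AMPLS, connect a workspace to it, then create the private endpoint.
az monitor private-link-scope create --name my-scope --resource-group my-rg

az monitor private-link-scope scoped-resource create \
  --name my-workspace-connection --resource-group my-rg --scope-name my-scope \
  --linked-resource "$(az monitor log-analytics workspace show \
      --resource-group my-rg --workspace-name my-workspace --query id -o tsv)"

az network private-endpoint create \
  --name my-endpoint --resource-group my-rg \
  --vnet-name my-vnet --subnet my-subnet \
  --private-connection-resource-id "$(az monitor private-link-scope show \
      --name my-scope --resource-group my-rg --query id -o tsv)" \
  --group-id azuremonitor --connection-name my-ampls-connection
```
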
+## Configure access to your resources
+So far, we've covered the configuration of your network, but you should also consider how to configure network access to your monitored resources: Log Analytics workspaces and Application Insights components.
+
+Go to the Azure portal. In your resource's menu, there's a menu item called **Network Isolation** on the left-hand side. This page controls both which networks can reach the resource through a Private Link, and whether other networks can reach it or not.
+
+![LA Network Isolation](./media/private-link-security/ampls-log-analytics-lan-network-isolation-6.png)
+
+### Connected Azure Monitor Private Link scopes
+Here you can review and configure the resource's connections to Azure Monitor Private Link scopes. Connecting to scopes (AMPLSs) allows traffic from the virtual network connected to each AMPLS to reach this resource, and has the same effect as connecting it from the scope as we did in [Connecting Azure Monitor resources](#connect-azure-monitor-resources). To add a new connection, select **Add** and select the Azure Monitor Private Link Scope. Select **Apply** to connect it. Your resource can connect to 5 AMPLS objects, as mentioned in [Restrictions and limitations](#restrictions-and-limitations).
+
+### Virtual network access configuration - Managing access from outside of Private Link scopes
+The settings on the bottom part of this page control access from public networks, meaning networks not connected to the listed scopes (AMPLSs).
+
+If you set **Allow public network access for ingestion** to **No**, then clients (machines, SDKs, etc.) outside of the connected scopes can't upload data or send logs to this resource.
+
+If you set **Allow public network access for queries** to **No**, then clients (machines, SDKs, etc.) outside of the connected scopes can't query data in this resource. That data includes access to logs, metrics, and the live metrics stream, as well as experiences built on top of that data, such as workbooks, dashboards, query API-based client experiences, insights in the Azure portal, and more. Experiences that run outside the Azure portal and query Log Analytics data also have to run within the private-linked VNet.
++
+### Exceptions
+
+#### Diagnostic logs
+Logs and metrics uploaded to a workspace via [Diagnostic Settings](../essentials/diagnostic-settings.md) go over a secure private Microsoft channel, and are not controlled by these settings.
+
+#### Azure Resource Manager
+Restricting access as explained above applies to data in the resource. However, configuration changes, including turning these access settings on or off, are managed by Azure Resource Manager. To control these settings, restrict access to these resources using the appropriate roles, permissions, network controls, and auditing. For more information, see [Azure Monitor Roles, Permissions, and Security](../roles-permissions-security.md).
+
+Additionally, specific experiences (such as the LogicApp connector) go through Azure Resource Manager, and therefore won't be able to query data unless Private Link settings are applied to Resource Manager as well.
++ ## Review and validate your Private Link setup ### Reviewing your Endpoint's DNS settings
This zone configures connectivity to the global agents' solution packs storage a
Note: Some browsers may use other DNS settings (see [Browser DNS settings](#browser-dns-settings)). Make sure your DNS settings apply.
-* To make sure your workspace or component aren't receiving requests from public networks (not connected through AMPLS), set the resource's public ingestion and query flags to *No* as explained in [Manage access from outside of private links scopes](#manage-access-from-outside-of-private-links-scopes).
+* To make sure your workspace or component aren't receiving requests from public networks (not connected through AMPLS), set the resource's public ingestion and query flags to *No* as explained in [Configure access to your resources](#configure-access-to-your-resources).
* From a client on your protected network, use `nslookup` to any of the endpoints listed in your DNS zones. It should be resolved by your DNS server to the mapped private IPs instead of the public IPs used by default.
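
For example, a quick check from such a client (the endpoint is illustrative; use any endpoint listed in your DNS zones):

```bash
# Expect a private IP (for example, 10.x.x.x) resolved through the privatelink
# zone, rather than the service's default public IP.
nslookup api.monitor.azure.com
```
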
-## Configure Log Analytics
-
-Go to the Azure portal. In your Log Analytics workspace resource menu, there's an item called **Network Isolation** on the left-hand side. You can control two different states from this menu.
-
-![LA Network Isolation](./media/private-link-security/ampls-log-analytics-lan-network-isolation-6.png)
-
-### Connected Azure Monitor Private Link scopes
-All scopes connected to the workspace show up in this screen. Connecting to scopes (AMPLSs) allows network traffic from the virtual network connected to each AMPLS to reach this workspace. Creating a connection through here has the same effect as setting it up on the scope, as we did in [Connecting Azure Monitor resources](#connect-azure-monitor-resources). To add a new connection, select **Add** and select the Azure Monitor Private Link Scope. Select **Apply** to connect it. Note that a workspace can connect to 5 AMPLS objects, as mentioned in [Restrictions and limitations](#restrictions-and-limitations).
-
-### Manage access from outside of private links scopes
-The settings on the bottom part of this page control access from public networks, meaning networks not connected to the listed scopes (AMPLSs). Setting **Allow public network access for ingestion** to **No** blocks ingestion of logs from machines outside of the connected scopes. Setting **Allow public network access for queries** to **No** blocks queries coming from machines outside of the scopes. That includes queries run via workbooks, dashboards, API-based client experiences, insights in the Azure portal, and more. Experiences running outside the Azure portal, and that query Log Analytics data also have to be running within the private-linked VNET.
-
-### Exceptions
-Restricting access as explained above doesn't apply to the Azure Resource Manager and therefore has the following limitations:
-* Access to data - while blocking/allowing queries from public networks applies to most Log Analytics experiences, some experiences query data through Azure Resource Manager and therefore won't be able to query data unless Private Link settings are applied to the Resource Manager as well (feature coming up soon). Examples are Azure Monitor solutions, Workbooks and Insights, and the LogicApp connector.
-* Workspace management - Workspace setting and configuration changes (including turning these access settings on or off) are managed by Azure Resource Manager. Restrict access to workspace management using the appropriate roles, permissions, network controls, and auditing. For more information, see [Azure Monitor Roles, Permissions, and Security](../roles-permissions-security.md).
-
-> [!NOTE]
-> Logs and metrics uploaded to a workspace via [Diagnostic Settings](../essentials/diagnostic-settings.md) go over a secure private Microsoft channel, and are not controlled by these settings.
-
-### Log Analytics solution packs download
-Log Analytics agents need to access a global storage account to download solution packs. Private Link setups created at or after April 19, 2021 (or starting June, 2021 on Azure Sovereign clouds) can reach the agents' solution packs storage over the private link. This is made possible through the new DNS zone created for [blob.core.windows.net](#privatelink-blob-core-windows-net).
-
-If your Private Link setup was created before April 19, 2021, it won't reach the solution packs storage over a private link. To handle that you can do one of the following:
-* Re-create your AMPLS and the Private Endpoint connected to it
-* Allow your agents to reach the storage account through its public endpoint, by adding the following rules to your firewall allowlist:
-
- | Cloud environment | Agent Resource | Ports | Direction |
- |:--|:--|:--|:--|
- |Azure Public | scadvisorcontent.blob.core.windows.net | 443 | Outbound
- |Azure Government | usbn1oicore.blob.core.usgovcloudapi.net | 443 | Outbound
- |Azure China 21Vianet | mceast2oicore.blob.core.chinacloudapi.cn| 443 | Outbound
--
-## Configure Application Insights
-
-Go to the Azure portal. In your Azure Monitor Application Insights component resource, is a menu item **Network Isolation** on the left-hand side. You can control two different states from this menu.
-
-![AI Network Isolation](./media/private-link-security/ampls-application-insights-lan-network-isolation-6.png)
-
-First, you can connect this Application Insights resource to Azure Monitor Private Link scopes that you have access to. Select **Add** and select the **Azure Monitor Private Link Scope**. Select Apply to connect it. All connected scopes show up in this screen. Making this connection allows network traffic in the connected virtual networks to reach this component, and has the same effect as connecting it from the scope as we did in [Connecting Azure Monitor resources](#connect-azure-monitor-resources).
-
-Then, you can control how this resource can be reached from outside of the private link scopes (AMPLS) listed previously. If you set **Allow public network access for ingestion** to **No**, then machines or SDKs outside of the connected scopes can't upload data to this component. If you set **Allow public network access for queries** to **No**, then machines outside of the scopes can't access data in this Application Insights resource. That data includes access to APM logs, metrics, and the live metrics stream, as well as experiences built on top such as workbooks, dashboards, query API-based client experiences, insights in the Azure portal, and more.
-
-> [!NOTE]
-> Non-portal consumption experiences must also run on the private-linked VNET that includes the monitored workloads.
-
-You'll need to add resources hosting the monitored workloads to the private link. For example, see [Using Private Endpoints for Azure Web App](../../app-service/networking/private-endpoint.md).
-
-Restricting access in this manner only applies to data in the Application Insights resource. However, configuration changes, including turning these access settings on or off, are managed by Azure Resource Manager. So, you should restrict access to Resource Manager using the appropriate roles, permissions, network controls, and auditing. For more information, see [Azure Monitor Roles, Permissions, and Security](../roles-permissions-security.md).
-
-> [!NOTE]
-> To fully secure workspace-based Application Insights, you need to lock down both access to Application Insights resource as well as the underlying Log Analytics workspace.
->
-> Code-level diagnostics (profiler/debugger) need you to [provide your own storage account](../app/profiler-bring-your-own-storage.md) to support private link.
-
-### Handling the All-or-Nothing nature of Private Links
-As explained in [Planning your Private Link setup](#planning-your-private-link-setup), setting up a Private Link even for a single resource affects all Azure Monitor resources in that networks, and in other networks that share the same DNS. This behavior can make your onboarding process challenging. Consider the following options:
-
-* All in - the simplest and most secure approach is to add all of your Application Insights components to the AMPLS. For components that you wish to still access from other networks as well, leave the "Allow public internet access for ingestion/query" flags set to Yes (the default).
-* Isolate networks - if you are (or can align with) using spoke vnets, follow the guidance in [Hub-spoke network topology in Azure](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke). Then, setup separate private link settings in the relevant spoke VNets. Make sure to separate DNS zones as well, since sharing DNS zones with other spoke networks will cause [DNS overrides](#the-issue-of-dns-overrides).
-* Use custom DNS zones for specific apps - this solution allows you to access select Application Insights components over a Private Link, while keeping all other traffic over the public routes.
- - Set up a [custom private DNS zone](../../private-link/private-endpoint-dns.md), called in.applicationinsights.azure.com
- - Create an AMPLS and a Private Endpoint, and choose **not** to auto-integrate with private DNS
- - Go to Private Endpoint -> DNS Configuration and review the suggested mapping of FQDNs.
- - Choose to Add Configuration and pick the in.applicationinsights.azure.com zone you just created
- - Add records for the above
- ![Screenshot of configured DNS zone](./media/private-link-security/private-endpoint-global-dns-zone.png)
- - Go to your Application Insights component and copy its [Connection String](../app/sdk-connection-string.md).
- - Apps or scripts that wish to call this component over a Private Link should use the connection string
-* Map endpoints through hosts files instead of DNS - to have a Private Link access only from a specific machine/VM in your network:
- - Set up an AMPLS and a Private Endpoint, and choose **not** to auto-integrate with private DNS
- - Configure the above A records on a machine that runs the app in the hosts file
-- ## Use APIs and command line You can automate the process described earlier using Azure Resource Manager templates, REST, and command-line interfaces.
To create and manage private link scopes, use the [REST API](/rest/api/monitor/p
To manage the network access flag on your workspace or component, use the flags `[--ingestion-access {Disabled, Enabled}]` and `[--query-access {Disabled, Enabled}]` on [Log Analytics workspaces](/cli/azure/monitor/log-analytics/workspace) or [Application Insights components](/cli/azure/ext/application-insights/monitor/app-insights/component).
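
For example, a minimal sketch that blocks public access to a Log Analytics workspace (the resource names are placeholders):

```bash
# Only networks connected through an AMPLS can then ingest to or query the workspace.
az monitor log-analytics workspace update \
  --resource-group my-rg --workspace-name my-workspace \
  --ingestion-access Disabled --query-access Disabled
```
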
-### Example ARM template
-The below ARM template creates:
+### Example Azure Resource Manager (ARM) template
+The below Azure Resource Manager template creates:
* A private link scope (AMPLS) named "my-scope"
* A Log Analytics workspace named "my-workspace"
* A scoped resource named "my-workspace-connection" that adds the workspace to the "my-scope" AMPLS
For more information on bringing your own storage account, see [Customer-owned s
## Restrictions and limitations ### AMPLS
-The AMPLS object has a number of limits you should consider when planning your Private Link setup:
-
-* A VNet can only connect to 1 AMPLS object. That means the AMPLS object must provide access to all the Azure Monitor resources the VNet should have access to.
-* An Azure Monitor resource (Workspace or Application Insights component) can connect to 5 AMPLSs at most.
-* An AMPLS object can connect to 50 Azure Monitor resources at most.
-* An AMPLS object can connect to 10 Private Endpoints at most.
-
-See [Consider limits](#consider-limits) for a deeper review of these limits.
+The AMPLS object has a number of limits you should consider when planning your Private Link setup. See [Consider limits](#consider-limits) for a deeper review of these limits.
### Agents
azure-monitor Monitor Virtual Machine Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/monitor-virtual-machine-alerts.md
If you're not familiar with the process for creating alert rules in Azure Monito
### Machine unavailable
-The most basic requirement is to send an alert when a machine is unavailable. It could be stopped, the guest operating system could be hung, or the agent could be unresponsive. There are a variety of ways to configure this alerting, but the most common is to use the heartbeat sent from the Log Analytics agent.
+The most basic requirement is to send an alert when a machine is unavailable. It could be stopped, the guest operating system could be unresponsive, or the agent could have stopped responding. There are a variety of ways to configure this alerting, but the most common is to use the heartbeat sent from the Log Analytics agent.
#### Log query alert rules Log query alerts use the [Heartbeat table](/azure/azure-monitor/reference/tables/heartbeat), which should have a heartbeat record every minute from each machine.
azure-monitor Monitor Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/monitor-virtual-machine.md
There are fundamentally four layers to a virtual machine that require monitoring
:::image type="content" source="media/monitor-virtual-machines/monitoring-layers.png" alt-text="Monitoring layers" lightbox="media/monitor-virtual-machines/monitoring-layers.png"::: ## VM insights
-This scenario will focus on [VM insights](../vm/vminsights-overview.md), which is the primary feature in Azure Monitor for monitoring virtual machines, providing the following features.
+This scenario focuses on [VM insights](../vm/vminsights-overview.md), which is the primary feature in Azure Monitor for monitoring virtual machines, providing the following features:
- Simplified onboarding of agents to enable monitoring of a virtual machine guest operating system and workloads. - Pre-defined trending performance charts and workbooks that allow you to analyze core performance metrics from the virtual machine's guest operating system.
This scenario will focus on [VM insights](../vm/vminsights-overview.md), which i
Any monitoring tool such as Azure Monitor requires an agent installed on a machine to collect data from its guest operating system. Azure Monitor currently has multiple agents that collect different data, send data to different locations, and support different features. VM insights manages the deployment and configuration of the agents that most customers will use, but you should be aware of the different agents that are described in the following table in case you require the particular scenarios that they support. See [Overview of Azure Monitor agents](../agents/agents-overview.md) for a detailed description and comparison of the different agents. > [!NOTE]
-> When the Azure Monitor agent fully supported VM insights, Azure Security Center, and Azure Sentinel, then it will completely replace the Log Analytics agent, diagnostic extension, and Telegraf agent.
+> When the Azure Monitor agent fully supports VM insights, Azure Security Center, and Azure Sentinel, then it will completely replace the Log Analytics agent, diagnostic extension, and Telegraf agent.
- [Azure Monitor agent](../agents/agents-overview.md#log-analytics-agent) - Supports virtual machines in Azure, other cloud environments, and on-premises. Sends data to Azure Monitor Metrics and Logs. When it fully supports VM insights, Azure Security Center, and Azure Sentinel, then it will completely replace the Log Analytics agent and diagnostic extension. - [Log Analytics agent](../agents/agents-overview.md#log-analytics-agent) - Supports virtual machines in Azure, other cloud environments, and on-premises. Sends data to Azure Monitor Logs. Supports VM insights and monitoring solutions. This is the same agent used for System Center Operations Manager.
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
na ms.devlang: na Previously updated : 06/28/2021 Last updated : 06/30/2021 # Solution architectures using Azure NetApp Files
This section provides references to SAP on Azure solutions.
This section provides references for Virtual Desktop infrastructure solutions.
-### Windows Virtual Desktop
+### <a name="windows-virtual-desktop"></a>Azure Virtual Desktop
* [Benefits of using Azure NetApp Files with Windows Virtual Desktop](solutions-windows-virtual-desktop.md) * [Storage options for FSLogix profile containers in Windows Virtual Desktop](../virtual-desktop/store-fslogix-profile.md#azure-platform-details)
This section provides references for Virtual Desktop infrastructure solutions.
* [Microsoft FSLogix for the enterprise - Azure NetApp Files best practices](/azure/architecture/example-scenario/wvd/windows-virtual-desktop-fslogix#azure-netapp-files-best-practices) * [Setting up Azure NetApp Files for MSIX App Attach](https://techcommunity.microsoft.com/t5/windows-virtual-desktop/setting-up-azure-netapp-files-for-msix-app-attach-step-by-step/m-p/1990021)
+### Citrix
+
+* [Citrix Profile Management with Azure NetApp Files Best Practices Guide](https://www.netapp.com/pdf.html?item=/media/55973-tr-4901.pdf)
++ ## HPC solutions This section provides references for High Performance Computing (HPC) solutions.
azure-netapp-files Create Volumes Dual Protocol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/create-volumes-dual-protocol.md
na ms.devlang: na Previously updated : 06/29/2021 Last updated : 06/30/2021 # Create a dual-protocol (NFSv3 and SMB) volume for Azure NetApp Files
You need to set the following attributes for LDAP users and LDAP groups:
`objectClass: posixGroup`, `gidNumber: 555` * All users and groups must have unique `uidNumber` and `gidNumber`, respectively.
-Azure Active Directory Domain Services (AADDS) doesn't allow you to modify POSIX attributes on users and groups created in the organizational ADDC Users OU. As a workaround, you can create a custom OU and create users and groups in the custom OU.
+Azure Active Directory Domain Services (AADDS) doesn't allow you to modify POSIX attributes on users and groups created in the organizational AADDC Users OU. As a workaround, you can create a custom OU and create users and groups in the custom OU.
If you are synchronizing the users and groups in your Azure AD tenancy to users and groups in the AADDC Users OU, you cannot move users and groups into a custom OU. Users and groups created in the custom OU will not be synchronized to your AD tenancy. For more information, see the [AADDS Custom OU Considerations and Limitations](../active-directory-domain-services/create-ou.md#custom-ou-considerations-and-limitations).
azure-percept How To Select Update Package https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-select-update-package.md
Using the **model** and **swVersion** identified in the previous section, check
|model |swVersion |Update method |Download links |Note | ||||||
-|PE-101 |2020.108.101.105, <br>2020.108.114.120, <br>2020.109.101.122, <br>2020.109.116.120, <br>2021.101.106.118 |**USB only** |[2021.105.111.112 USB update package](https://go.microsoft.com/fwlink/?linkid=2155734) |May release (2105) |
-|PE-101 |2021.102.108.112, <br> |OTA or USB |[2021.105.111.112 OTA manifest (PE-101)](https://go.microsoft.com/fwlink/?linkid=2155625)<br>[2021.105.111.112 OTA update package](https://go.microsoft.com/fwlink/?linkid=2161538)<br>[2021.105.111.112 USB update package](https://go.microsoft.com/fwlink/?linkid=2155734) |May release (2105) |
-|APDK-101 |All swVersions |OTA or USB | [2021.105.111.112 OTA manifest (APDK-101)](https://go.microsoft.com/fwlink/?linkid=2163554)<br>[2021.105.111.112 OTA update package](https://go.microsoft.com/fwlink/?linkid=2163456)<br>[2021.105.111.112 USB update package](https://go.microsoft.com/fwlink/?linkid=2163555) |May release (2105) |
+|PE-101 |2020.108.101.105, <br>2020.108.114.120, <br>2020.109.101.122, <br>2020.109.116.120, <br>2021.101.106.118 |**USB only** |[2021.106.111.115 USB update package](https://go.microsoft.com/fwlink/?linkid=2167236) |June release (2106) |
+|PE-101 |2021.102.108.112, <br> |OTA or USB |[2021.106.111.115 OTA manifest (PE-101)](https://go.microsoft.com/fwlink/?linkid=2167127)<br>[2021.106.111.115 OTA update package](https://go.microsoft.com/fwlink/?linkid=2167128)<br>[2021.106.111.115 USB update package](https://go.microsoft.com/fwlink/?linkid=2167236) |June release (2106) |
+|APDK-101 |All swVersions |OTA or USB | [2021.106.111.115 OTA manifest (APDK-101)](https://go.microsoft.com/fwlink/?linkid=2167235)<br>[2021.106.111.115 OTA update package](https://go.microsoft.com/fwlink/?linkid=2167128)<br>[2021.106.111.115 USB update package](https://go.microsoft.com/fwlink/?linkid=2167236) |June release (2106) |
## Next steps Update your dev kits via the methods and update packages determined in the previous section. - [Update your Azure Percept DK over-the-air](./how-to-update-over-the-air.md)-- [Update your Azure Percept DK via USB](./how-to-update-via-usb.md)
+- [Update your Azure Percept DK via USB](./how-to-update-via-usb.md)
azure-percept Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/known-issues.md
Here are issues with the Azure Percept DK, Azure Percept Audio, or Azure Percept
|-|||| | Azure Percept DK | Unable to deploy the sample and demo models in Azure Percept Studio | Sometimes the azureeyemodule or azureearspeechmodule modules stop running. edgeAgent logs show "too many levels of symbolic links" error. | Reset your device by [updating it over USB](./how-to-update-via-usb.md) | | Localization | Non-English speaking users may see parts of the Azure Percept DK setup experience display English text. | The Azure Percept DK setup experience isn't fully localized. | Fix is scheduled for July 2021 |
-| Azure Percept DK | When going through the setup experience on a Mac, the setup experience my abruptly close after connecting to Wi-Fi. | When going through the setup experience on a Mac, it initially opens in a window rather than a web browser. The window isn't persisted once the connection switches from the device's access point to Wi-Fi. | Open a web browser and go to https://10.1.1.1, which will allow you to complete the setup experience. |
+| Azure Percept DK | When going through the setup experience on a Mac, the setup experience may abruptly close after connecting to Wi-Fi. | When going through the setup experience on a Mac, it initially opens in a window rather than a web browser. The window isn't persisted once the connection switches from the device's access point to Wi-Fi. | Open a web browser and go to https://10.1.1.1, which will allow you to complete the setup experience. |
+| Azure Percept DK | The dev kit is running a custom model, and after a reboot it runs the default sample model. | The module twin container for the custom model doesn't persist across device reboots. After the reboot, the module twin for the custom module must be rebuilt, which can take 5 minutes or longer. The dev kit runs the default model until that process is completed. | After a reboot, you must wait until the custom module twin is recreated. |
azure-percept Vision Solution Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/vision-solution-troubleshooting.md
This article provides information on troubleshooting no-code vision solutions in
:::image type="content" source="./media/vision-solution-troubleshooting/vision-delete-device.png" alt-text="Screenshot that shows the Delete button highlighted on the IoT Edge home page.":::
-## Eye module troubleshooting tips
-
-The following troubleshooting tips help with some of the more common issues found in the vision AI prototyping experiences.
-
-### Check the runtime status of azureeyemodule
+## Check the runtime status of azureeyemodule
If there's a problem with **WebStreamModule**, ensure that **azureeyemodule**, which handles the vision model inferencing, is running. To check the runtime status:
If there's a problem with **WebStreamModule**, ensure that **azureeyemodule**, w
:::image type="content" source="./media/vision-solution-troubleshooting/firmware-desired-status-stopped.png" alt-text="Screenshot that shows the Module Settings configuration screen.":::
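
As a quick alternative check, if you have an SSH session open to the dev kit, you can list the IoT Edge modules directly (this assumes SSH access is set up on the device):

```bash
# azureeyemodule should appear with a "running" status.
sudo iotedge list
```
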
-### Update TelemetryIntervalNeuralNetworkMs
+## Change how often messages are sent from the azureeyemodule
-If you see the following count limitation error, you need to update the TelemetryIntervalNeuralNetworkMs value in the azureeyemodule module twin settings.
+Your subscription tier may cap the number of messages that can be sent from your device to IoT Hub. For instance, the Free Tier will limit the number of messages to 8,000 per day. Once that limit is reached, your azureeyemodule will stop functioning and you may receive this error:
|Error message| ||
-|Total number of messages on IotHub 'xxxxxxxxx' exceeded the allocated quota. Max allowed message count: '8000', current message count: 'xxxx'. Send and Receive operations are blocked for this hub until the next UTC day. Consider increasing the units for this hub to increase the quota.|
+|*Total number of messages on IotHub 'xxxxxxxxx' exceeded the allocated quota. Max allowed message count: '8000', current message count: 'xxxx'. Send and Receive operations are blocked for this hub until the next UTC day. Consider increasing the units for this hub to increase the quota.*|
-TelemetryIntervalNeuralNetworkMs determines how often to send messages from the neural network. Messages are sent in milliseconds. Azure subscriptions have a limited number of messages per day.
+Using the azureeyemodule module twin, it's possible to change the interval rate for how often messages are sent. The value entered for the interval rate indicates the frequency that each message gets sent, in milliseconds. The larger the number, the more time there is between each message. For example, if you set the interval rate to 12,000, one message is sent every 12 seconds. For a model that runs the entire day, this rate works out to 7,200 messages per day, which is under the Free Tier limit. The value that you choose depends on how responsive you need your vision model to be.
-The message amount is based on your subscription tier. If you find yourself locked out because you've sent too many messages, increase the amount to a higher number. An amount of 12,000 is one message every 12 seconds. This amount gives you 7,200 messages per day, which is under the 8,000-message limit for the free subscription.
+> [!NOTE]
+> Changing the message interval rate does not impact the size of each message. The message size depends on a few different factors such as the model type and the number of objects being detected in each message. As such, it is difficult to determine message size.
-To update your TelemetryIntervalNeuralNetworkMs value:
+Follow these steps to update the message interval:
1. Sign in to the [Azure portal](https://ms.portal.azure.com/?feature.canmodifystamps=true&Microsoft_Azure_Iothub=aduprod#home), and open **All resources**.
To update your TelemetryIntervalNeuralNetworkMs value:
:::image type="content" source="./media/vision-solution-troubleshooting/module-page-inline.png" alt-text="Screenshot of a module page." lightbox= "./media/vision-solution-troubleshooting/module-page.png":::
-1. Scroll down to **properties**. The **Running** and **Logging** properties aren't active at this time.
+1. Scroll down to **properties**.
+1. Find **TelemetryInterval** and replace it with **TelemetryIntervalNeuralNetworkMs**.
+
+ :::image type="content" source="./media/vision-solution-troubleshooting/module-identity-twin-inline-02.png" alt-text="Screenshot of Module Identity Twin properties." lightbox= "./media/vision-solution-troubleshooting/module-identity-twin.png":::
- :::image type="content" source="./media/vision-solution-troubleshooting/module-identity-twin-inline.png" alt-text="Screenshot of Module Identity Twin properties." lightbox= "./media/vision-solution-troubleshooting/module-identity-twin.png":::
+1. Update the **TelemetryIntervalNeuralNetworkMs** value to the desired value.
-1. Update the **TelemetryIntervalNeuralNetworkMs** value as you want it, and select the **Save** icon.
+1. Select the **Save** icon.
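+
+As an alternative to the portal steps above, here's a sketch that sets the same module twin property with the Azure CLI. It assumes the azure-iot CLI extension is installed; the hub and device names are placeholders, and the exact flags may vary by extension version.
+
+```bash
+# Send one message every 12 seconds (12,000 ms) from the azureeyemodule.
+az iot hub module-twin update \
+  --hub-name my-hub --device-id my-percept-dk --module-id azureeyemodule \
+  --set properties.desired.TelemetryIntervalNeuralNetworkMs=12000
+```
+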
## View device RTSP video stream
azure-resource-manager Linter https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/linter.md
+
+ Title: Use Bicep linter
+description: Learn how to use Bicep linter.
+ Last updated : 07/01/2021++
+# Use Bicep linter
+
+The Bicep linter can be used to analyze Bicep files. It checks for syntax errors and catches violations of a customizable set of authoring best practices before you build or deploy your Bicep files. The linter makes it easier to enforce coding standards by providing guidance during development.
+
+## Install linter
+
+The linter can be used with Visual Studio Code and the Bicep CLI. It requires:
+
+- Bicep CLI version 0.4 or later.
+- Bicep extension for Visual Studio Code version 0.4 or later.
+
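+To check which versions you have installed, assuming you use either the Azure CLI integration or the standalone CLI:
+
+```bash
+# Bicep CLI installed through the Azure CLI
+az bicep version
+
+# Standalone Bicep CLI
+bicep --version
+```
+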
+## Customize linter
+
+Using bicepconfig.json, you can enable or disable the linter, supply rule-specific values, and set the level of each rule. The following is the default bicepconfig.json:
+
+```json
+{
+ "analyzers": {
+ "core": {
+ "verbose": false,
+ "enabled": true,
+ "rules": {
+ "no-hardcoded-env-urls": {
+ "level": "warning",
+ "disallowedhosts": [
+ "management.core.windows.net",
+ "gallery.azure.com",
+ "management.core.windows.net",
+ "management.azure.com",
+ "database.windows.net",
+ "core.windows.net",
+ "login.microsoftonline.com",
+ "graph.windows.net",
+ "trafficmanager.net",
+ "vault.azure.net",
+ "datalake.azure.net",
+ "azuredatalakestore.net",
+ "azuredatalakeanalytics.net",
+ "vault.azure.net",
+ "api.loganalytics.io",
+ "api.loganalytics.iov1",
+ "asazure.windows.net",
+ "region.asazure.windows.net",
+ "api.loganalytics.iov1",
+ "api.loganalytics.io",
+ "asazure.windows.net",
+ "region.asazure.windows.net",
+ "batch.core.windows.net"
+ ],
+ "excludedhosts": [
+ "schema.management.azure.com"
+ ]
+ }
+ }
+ }
+ }
+}
+```
+
+A customized bicepconfig.json file can be placed alongside your templates in the same directory. The closest configuration file found up the folder tree is used.
+
+The following JSON is a sample bicepconfig.json:
+
+```json
+{
+ "analyzers": {
+ "core": {
+ "enabled": true,
+ "verbose": true,
+ "rules": {
+ "no-hardcoded-env-urls": {
+ "level": "warning"
+ },
+ "no-unused-params": {
+ "level": "error"
+ },
+ "no-unused-vars": {
+ "level": "error"
+ },
+ "prefer-interpolation": {
+ "level": "warning"
+ },
+ "secure-parameter-default": {
+ "level": "error"
+ },
+ "simplify-interpolation": {
+ "level": "warning"
+ }
+ }
+ }
+ }
+}
+```
+
+- **enabled**: enter **true** to enable the linter, or **false** to disable it.
+- **verbose**: enter **true** to show the bicepconfig.json file used by Visual Studio Code.
+- **rules**: enter rule-specific values. Each rule has at least one property, **level**, which controls the behavior of Bicep when the rule's condition is found in the Bicep file.
+
+You can use several values for rule level:
+
+| **level** | **Build-time behavior** | **Editor behavior** |
+|--|--|--|
+| `Error` | Violations appear as Errors in command-line build output, and cause builds to fail. | Offending code is underlined with a red squiggle and appears in Problems tab. |
+| `Warning` | Violations appear as Warnings in command-line build output, but do not cause builds to fail. | Offending code is underlined with a yellow squiggle and appears in Problems tab. |
+| `Info` | Violations do not appear in command-line build output. | Offending code is underlined with a blue squiggle and appears in Problems tab. |
+| `Off` | Suppressed completely. | Suppressed completely. |
+
+The current set of linter rules is minimal and taken from [arm-ttk test cases](../templates/test-cases.md). Both the Visual Studio Code extension and the Bicep CLI check all available rules by default, and all rules are set to the warning level. Based on the level of a rule, you see errors, warnings, or informational messages within the editor.
+
+- [no-hardcoded-env-urls](https://github.com/Azure/bicep/blob/main/docs/linter-rules/no-hardcoded-env-urls.md)
+- [no-unused-params](https://github.com/Azure/bicep/blob/main/docs/linter-rules/no-unused-params.md)
+- [no-unused-vars](https://github.com/Azure/bicep/blob/main/docs/linter-rules/no-unused-vars.md)
+- [prefer-interpolation](https://github.com/Azure/bicep/blob/main/docs/linter-rules/prefer-interpolation.md)
+- [secure-parameter-default](https://github.com/Azure/bicep/blob/main/docs/linter-rules/secure-parameter-default.md)
+- [simplify-interpolation](https://github.com/Azure/bicep/blob/main/docs/linter-rules/simplify-interpolation.md)
+
+The Bicep extension of Visual Studio Code provides intellisense for editing Bicep configuration files:
++
+## Use in Visual Studio Code
+
+Install the Bicep extension 0.4 or later to use the linter. The following screenshot shows the linter in action:
++
+In the **PROBLEMS** pane, there are four errors, one warning, and one info message shown in the screenshot. The info message shows the bicep configuration file that is used. It only shows this piece of information when you set **verbose** to **true** in the configuration file.
+
+Hover your mouse cursor over one of the problem areas. The linter gives details about the error or warning. When you click the area, a blue light bulb also appears:
++
+Select either the light bulb or the **Quick fix** link to see the solution:
++
+Select the solution to fix the issue automatically.
+
+## Use in Bicep CLI
+
+Install the Bicep CLI 0.4 or later to use the linter. The following screenshot shows the linter in action. The Bicep file is the same as the one used in [Use in Visual Studio Code](#use-in-visual-studio-code).
++
+You can integrate these checks as part of your CI/CD pipelines. For example, you can use a GitHub action to attempt a Bicep build; errors fail the pipeline.
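+
+For example, a minimal sketch of such a check (assuming a `main.bicep` file in the current directory):
+
+```bash
+# Linter diagnostics print with the build output; rules set to "error"
+# produce a non-zero exit code, which fails the pipeline step.
+az bicep build --file main.bicep
+```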
+
+## Next steps
+
+For more information about using Visual Studio Code and the Bicep extension, see [Quickstart: Create Bicep files with Visual Studio Code](./quickstart-create-bicep-use-visual-studio-code.md).
azure-resource-manager Scope Extension Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/scope-extension-resources.md
description: Describes how to use the scope property when deploying extension re
Previously updated : 06/01/2021 Last updated : 07/01/2021 # Set scope for extension resources in Bicep
resource roleAssignSub 'Microsoft.Authorization/roleAssignments@2020-04-01-previ
## Apply to resource
-To apply an extension resource to a resource, use the `scope` property. Set the scope property to the name of the resource you're adding the extension to. The scope property is a root property for the extension resource type.
+To apply an extension resource to a resource, use the `scope` property. In the scope property, reference the resource you're adding the extension to by providing its symbolic name. The scope property is a root property for the extension resource type.
The following example creates a storage account and applies a role to it.
var role = {
} var uniqueStorageName = 'storage${uniqueString(resourceGroup().id)}'
-resource storageName 'Microsoft.Storage/storageAccounts@2019-04-01' = {
+resource demoStorageAcct 'Microsoft.Storage/storageAccounts@2019-04-01' = {
name: uniqueStorageName location: location sku: {
resource roleAssignStorage 'Microsoft.Authorization/roleAssignments@2020-04-01-p
roleDefinitionId: role[builtInRoleType] principalId: principalId }
- scope: storageName
+ scope: demoStorageAcct
dependsOn: [
- storageName
+ demoStorageAcct
] } ```
azure-resource-manager Lock Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/lock-resources.md
Title: Lock resources to prevent changes description: Prevent users from updating or deleting Azure resources by applying a lock for all users and roles. Previously updated : 06/24/2021 Last updated : 07/01/2021
Applying locks can lead to unexpected results because some operations that don't
- A read-only lock on a **subscription** prevents **Azure Advisor** from working correctly. Advisor is unable to store the results of its queries.
+- A read-only lock on an **Application Gateway** prevents you from getting the backend health of the application gateway. That [operation uses POST](/rest/api/application-gateway/application-gateways/backend-health), which is blocked by the read-only lock.
+ ## Who can create or delete locks To create or delete management locks, you must have access to `Microsoft.Authorization/*` or `Microsoft.Authorization/locks/*` actions. Of the built-in roles, only **Owner** and **User Access Administrator** are granted those actions.
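
For example, an **Owner** could apply a read-only lock with the Azure CLI (the names are placeholders):

```bash
# All resources in the resource group become read-only until the lock is deleted.
az lock create --name my-readonly-lock --lock-type ReadOnly --resource-group my-rg
```
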
azure-resource-manager Test Cases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/test-cases.md
Title: Test cases for test toolkit
-description: Describes the tests that are run by the ARM template test toolkit.
+description: Describes the tests that are run by the Azure Resource Manager template test toolkit.
Previously updated : 06/25/2021 Last updated : 06/30/2021
Test name: **DeploymentTemplate Schema Is Correct**
In your template, you must specify a valid schema value.
-The following example **passes** this test.
+This example **fails** because the schema is invalid:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-01-01/deploymentTemplate.json#",
+}
+```
+
+This example displays a **warning** because schema version `2015-01-01` is deprecated and isn't maintained.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+}
+```
+
+The following example **passes** using a valid schema.
```json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {},
- "resources": []
} ```
-The schema property in the template must be set to one of the following schemas:
+The template's `$schema` property must be set to one of the following schemas:
* `https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#` * `https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#`
The schema property in the template must be set to one of the following schemas:
Test name: **Parameters Must Be Referenced**
-To reduce confusion in your template, delete any parameters that are defined but not used. This test finds any parameters that aren't used anywhere in the template. Eliminating unused parameters also makes it easier to deploy your template because you don't have to provide unnecessary values.
+This test finds parameters that aren't used in the template or parameters that aren't used in a valid expression.
+
+To reduce confusion in your template, delete any parameters that are defined but not used. Eliminating unused parameters simplifies template deployments because you don't have to provide unnecessary values.
+
+This example **fails** because the expression that references a parameter is missing the leading square bracket (`[`).
+
+```json
+"resources": [
+ {
+ "location": " parameters('location')]"
+ }
+]
+```
+
+This example **passes** because the expression is valid:
+
+```json
+"resources": [
+ {
+ "location": "[parameters('location')]"
+ }
+]
+```
-## Secure parameters can't have hardcoded default
+## Secure parameters can't have hard-coded default
Test name: **Secure String Parameters Cannot Have Default**
-Don't provide a hard-coded default value for a secure parameter in your template. An empty string is fine for the default value.
+Don't provide a hard-coded default value for a secure parameter in your template. A secure parameter can have an empty string as a default value or use the [newGuid](template-functions-string.md#newguid) function in an expression.
You use the types `secureString` or `secureObject` on parameters that contain sensitive values, like passwords. When a parameter uses a secure type, the value of the parameter isn't logged or stored in the deployment history. This action prevents a malicious user from discovering the sensitive value.
-However, when you provide a default value, that value is discoverable by anyone who can access the template or the deployment history.
+When you provide a default value, that value is discoverable by anyone who can access the template or the deployment history.
The following example **fails** this test:
The next example **passes** this test:
} ```
-## Environment URLs can't be hardcoded
+This example **passes** because the `newGuid` function is used:
+
+```json
+"parameters": {
+ "secureParameter": {
+ "type": "secureString",
+ "defaultValue": "[newGuid()]"
+ }
+}
+```
+
+## Environment URLs can't be hard-coded
Test name: **DeploymentTemplate Must Not Contain Hardcoded Uri**
-Don't hardcode environment URLs in your template. Instead, use the [environment function](template-functions-deployment.md#environment) to dynamically get these URLs during deployment. For a list of the URL hosts that are blocked, see the [test case](https://github.com/Azure/arm-ttk/blob/master/arm-ttk/testcases/deploymentTemplate/DeploymentTemplate-Must-Not-Contain-Hardcoded-Uri.test.ps1).
+Don't hard-code environment URLs in your template. Instead, use the [environment](template-functions-deployment.md#environment) function to dynamically get these URLs during deployment. For a list of the URL hosts that are blocked, see the [test case](https://github.com/Azure/arm-ttk/blob/master/arm-ttk/testcases/deploymentTemplate/DeploymentTemplate-Must-Not-Contain-Hardcoded-Uri.test.ps1).
-The following example **fails** this test because the URL is hardcoded.
+The following example **fails** this test because the URL is hard-coded.
```json "variables":{
The following example **passes** this test.
Test name: **Location Should Not Be Hardcoded**
-Your templates should have a parameter named location. Use this parameter for setting the location of resources in your template. In the main template (named _azuredeploy.json_ or _mainTemplate.json_), this parameter can default to the resource group location. In linked or nested templates, the location parameter shouldn't have a default location.
+To set a resource's location, your templates should have a parameter named `location` with the type set to `string`. In the main template, _azuredeploy.json_ or _mainTemplate.json_, this parameter can default to the resource group location. In linked or nested templates, the location parameter shouldn't have a default location.
-Users of your template may have limited regions available to them. When you hardcode the resource location, users may be blocked from creating a resource in that region. Users could be blocked even if you set the resource location to `"[resourceGroup().location]"`. The resource group may have been created in a region that other users can't access. Those users are blocked from using the template.
+Template users may have limited access to regions where they can create resources. A hard-coded resource location might block users from creating a resource. The `"[resourceGroup().location]"` expression could block users if the resource group was created in a region the user can't access. Users who are blocked are unable to use the template.
-By providing a location parameter that defaults to the resource group location, users can use the default value when convenient but also specify a different location.
+By providing a `location` parameter that defaults to the resource group location, users can use the default value when convenient but also specify a different location.
-The following example **fails** this test because location on the resource is set to `resourceGroup().location`.
+The following example **fails** because the resource's `location` is set to `resourceGroup().location`.
```json {
The following example **fails** this test because location on the resource is se
} ```
-The next example uses a location parameter but **fails** this test because the location parameter defaults to a hardcoded location.
+The next example uses a `location` parameter but **fails** because the parameter defaults to a hard-coded location.
```json {
The next example uses a location parameter but **fails** this test because the l
} ```
-Instead, create a parameter that defaults to the resource group location but allows users to provide a different value. The following example **passes** this test when the template is used as the main template.
+The following example **passes** when the template is used as the main template. Create a parameter that defaults to the resource group location but allows users to provide a different value.
```json {
Instead, create a parameter that defaults to the resource group location but all
} ```
-However, if the preceding example is used as a linked template, the test **fails**. When used as a linked template, remove the default value.
+> [!NOTE]
+> If the preceding example is used as a linked template, the test **fails**. When used as a linked template, remove the default value.
## Resources should have location Test name: **Resources Should Have Location**
-The location for a resource should be set to a [template expression](template-expressions.md) or `global`. The template expression would typically use the location parameter described in the previous test.
+The location for a resource should be set to a [template expression](template-expressions.md) or `global`. The template expression would typically use the `location` parameter described in [Location uses parameter](#location-uses-parameter).
-The following example **fails** this test because the location isn't an expression or `global`.
+The following example **fails** this test because the `location` isn't an expression or `global`.
```json {
The following example **fails** this test because the location isn't an expressi
} ```
-The following example **passes** this test.
+The following example **passes** because the resource `location` is set to `global`.
```json {
The following example **passes** this test.
"variables": {}, "resources": [ {
- "type": "Microsoft.Maps/accounts",
+ "type": "Microsoft.Storage/storageAccounts",
"apiVersion": "2021-02-01",
- "name": "demoMap",
+ "name": "storageaccount1",
"location": "global",
+ "kind": "StorageV2",
"sku": {
- "name": "S0"
+ "name": "Premium_LRS",
+ "tier": "Premium"
} } ],
- "outputs": {
- }
+ "outputs": {}
}+ ```
-The next example also **passes** this test.
+The next example also **passes** because the `location` parameter uses an expression. The resource `location` uses the expression's value.
```json {
The next example also **passes** this test.
Test name: **VM Size Should Be A Parameter**
-Don't hardcode the virtual machine (VM) size. Provide a parameter so users of your template can modify the size of the deployed virtual machine.
+Don't hard-code the `hardwareProfile` object's `vmSize`. The test fails when the `hardwareProfile` is omitted or contains a hard-coded value. Provide a parameter so users of your template can modify the size of the deployed virtual machine. For more information, see [Microsoft.Compute virtualMachines](/azure/templates/microsoft.compute/virtualmachines).
-The following example **fails** this test.
+The following example **fails** because the `hardwareProfile` object's `vmSize` is a hard-coded value.
```json
-"hardwareProfile": {
- "vmSize": "Standard_D2_v3"
-}
+"resources": [
+ {
+ "type": "Microsoft.Compute/virtualMachines",
+ "apiVersion": "2020-12-01",
+ "name": "demoVM",
+ "location": "[parameters('location')]",
+ "properties": {
+ "hardwareProfile": {
+ "vmSize": "Standard_D2_v3"
+ }
+ }
+ }
+]
```
-Instead, provide a parameter.
+The example **passes** when a parameter specifies a value for `vmSize`:
```json
-"vmSize": {
- "type": "string",
- "defaultValue": "Standard_A2_v2",
- "metadata": {
- "description": "Size for the Virtual Machine."
+"parameters": {
+ "vmSizeParameter": {
+ "type": "string",
+ "defaultValue": "Standard_D2_v3",
+ "metadata": {
+ "description": "Size for the virtual machine."
+ }
} } ```
-Then, set the VM size to that parameter.
+Then, `hardwareProfile` uses an expression for `vmSize` to reference the parameter's value:
```json
-"hardwareProfile": {
- "vmSize": "[parameters('vmSize')]"
-}
+"resources": [
+ {
+ "type": "Microsoft.Compute/virtualMachines",
+ "apiVersion": "2020-12-01",
+ "name": "demoVM",
+ "location": "[parameters('location')]",
+ "properties": {
+ "hardwareProfile": {
+ "vmSize": "[parameters('vmSizeParameter')]"
+ }
+ }
+ }
+]
``` ## Min and max values are numbers Test name: **Min And Max Value Are Numbers**
-If you define min and max values for a parameter, specify them as numbers.
+When you define a parameter with `minValue` and `maxValue`, specify them as numbers. You must use `minValue` and `maxValue` as a pair or the test fails.
-The following example **fails** this test:
+The following example **fails** because `minValue` and `maxValue` are strings:
```json "exampleParameter": {
The following example **fails** this test:
} ```
-Instead, provide the values as numbers. The following example **passes** this test:
+The following example **fails** because only `minValue` is used:
+
+```json
+"exampleParameter": {
+ "type": "int",
+ "minValue": 0
+}
+```
+
+The following example **passes** because `minValue` and `maxValue` are numbers:
```json "exampleParameter": {
Instead, provide the values as numbers. The following example **passes** this te
} ```
-You also get this warning if you provide a min or max value, but not the other.
- ## Artifacts parameter defined correctly Test name: **artifacts parameter** When you include parameters for `_artifactsLocation` and `_artifactsLocationSasToken`, use the correct defaults and types. The following conditions must be met to pass this test:
-* if you provide one parameter, you must provide the other
-* `_artifactsLocation` must be a `string`
-* `_artifactsLocation` must have a default value in the main template
-* `_artifactsLocation` can't have a default value in a nested template
-* `_artifactsLocation` must have either `"[deployment().properties.templateLink.uri]"` or the raw repo URL for its default value
-* `_artifactsLocationSasToken` must be a `secureString`
-* `_artifactsLocationSasToken` can only have an empty string for its default value
-* `_artifactsLocationSasToken` can't have a default value in a nested template
+* If you provide one parameter, you must provide the other.
+* `_artifactsLocation` must be a `string`.
+* `_artifactsLocation` must have a default value in the main template.
+* `_artifactsLocation` can't have a default value in a nested template.
+* `_artifactsLocation` must have either `"[deployment().properties.templateLink.uri]"` or the raw repo URL for its default value.
+* `_artifactsLocationSasToken` must be a `secureString`.
+* `_artifactsLocationSasToken` can only have an empty string for its default value.
+* `_artifactsLocationSasToken` can't have a default value in a nested template.
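+
+A minimal sketch of parameter declarations that satisfy these conditions (the description text is illustrative):
+
+```json
+"parameters": {
+  "_artifactsLocation": {
+    "type": "string",
+    "defaultValue": "[deployment().properties.templateLink.uri]",
+    "metadata": {
+      "description": "The base URI where artifacts required by this template are located."
+    }
+  },
+  "_artifactsLocationSasToken": {
+    "type": "secureString",
+    "defaultValue": "",
+    "metadata": {
+      "description": "The SAS token required to access _artifactsLocation."
+    }
+  }
+}
+```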
## Declared variables must be used Test name: **Variables Must Be Referenced**
-To reduce confusion in your template, delete any variables that are defined but not used. This test finds any variables that aren't used anywhere in the template.
+This test finds variables that aren't used in the template or aren't used in a valid expression. To reduce confusion in your template, delete any variables that are defined but not used.
+
+This example **fails** because the expression that references a variable is missing the leading square bracket (`[`).
+
+```json
+"outputs": {
+ "outputVariable": {
+ "type": "string",
+ "value": " variables('varExample')]"
+ }
+}
+```
+
+This example **passes** because the expression is valid:
+
+```json
+"outputs": {
+ "outputVariable": {
+ "type": "string",
+ "value": "[variables('varExample')]"
+ }
+}
+```
## Dynamic variable should not use concat
Test name: **Dynamic Variable References Should Not Use Concat**
Sometimes you need to dynamically construct a variable based on the value of another variable or parameter. Don't use the [concat](template-functions-string.md#concat) function when setting the value. Instead, use an object that includes the available options and dynamically get one of the properties from the object during deployment.
-The following example **passes** this test. The **currentImage** variable is dynamically set during deployment.
+The following example **passes** this test. The `currentImage` variable is dynamically set during deployment.
```json {
The following example **passes** this test. The **currentImage** variable is dyn
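+
+A minimal sketch of the object-lookup pattern, assuming a hypothetical `osType` parameter and image names:
+
+```json
+"parameters": {
+  "osType": {
+    "type": "string",
+    "allowedValues": [ "Windows", "Linux" ]
+  }
+},
+"variables": {
+  "imageOS": {
+    "Windows": { "image": "windowsImage" },
+    "Linux": { "image": "linuxImage" }
+  },
+  "currentImage": "[variables('imageOS')[parameters('osType')].image]"
+}
+```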
Test name: **apiVersions Should Be Recent**
-The API version for each resource should use a recent version. The test evaluates the version you use against the versions available for that resource type.
+The API version for each resource should use a recent version that's hard-coded as a string. The test evaluates the version you use against the versions available for that resource type. An API version that's less than two years old from the date the test was run is considered recent. Don't use a preview version when a more recent version is available.
+
+The following example **fails** because the API version is more than two years old:
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2019-06-01",
+ "name": "storageaccount1",
+ "location": "[parameters('location')]"
+ }
+]
+```
+
+The following example **fails** because a preview version is used when a newer version is available:
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2020-08-01-preview",
+ "name": "storageaccount1",
+ "location": "[parameters('location')]"
+ }
+]
+```
+
+The following example **passes** because it's a recent version that's not a preview version:
-## Use hardcoded API version
+```json
+"resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-02-01",
+ "name": "storageaccount1",
+ "location": "[parameters('location')]"
+ }
+]
+```
+
+## Use hard-coded API version
Test name: **Providers apiVersions Is Not Permitted**
-The API version for a resource type determines which properties are available. Provide a hard-coded API version in your template. Don't retrieve an API version that is determined during deployment. You won't know which properties are available.
+The API version for a resource type determines which properties are available. Provide a hard-coded API version in your template. Don't retrieve an API version that's determined during deployment because you won't know which properties are available.
The following example **fails** this test.
The following example **passes** this test.
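+
+For instance, retrieving a version at deployment time with the `providers` function fails the test; a minimal sketch, with an illustrative storage account resource:
+
+```json
+"apiVersion": "[providers('Microsoft.Storage', 'storageAccounts').apiVersions[0]]",
+```
+
+Hard-coding a recent version passes:
+
+```json
+"apiVersion": "2021-02-01",
+```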
Test name: **Template Should Not Contain Blanks**
-Don't hardcode properties to an empty value. Empty values include null and empty strings, objects, or arrays. If you've set a property to an empty value, remove that property from your template. However, it's okay to set a property to an empty value during deployment, such as through a parameter.
+Don't hard-code properties to an empty value. Empty values include null and empty strings, objects, or arrays. If a property is set to an empty value, remove that property from your template. You can set a property to an empty value during deployment, such as through a parameter.
+
+The following example **fails** because there are empty properties:
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-01-01",
+ "name": "storageaccount1",
+ "location": "[parameters('location')]",
+ "sku": {},
+ "kind": ""
+ }
+]
+```
+
+The following example **passes**:
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-01-01",
+ "name": "storageaccount1",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Standard_LRS",
+ "tier": "Standard"
+ },
+ "kind": "Storage"
+ }
+]
+```
## Use Resource ID functions
For `reference` and `list*`, the test **fails** when you use `concat` to constru
Test name: **DependsOn Best Practices**
-When setting the deployment dependencies, don't use the [if](template-functions-logical.md#if) function to test a condition. If one resource depends on a resource that is [conditionally deployed](conditional-resource-deployment.md), set the dependency as you would with any resource. When a conditional resource isn't deployed, Azure Resource Manager automatically removes it from the required dependencies.
+When setting the deployment dependencies, don't use the [if](template-functions-logical.md#if) function to test a condition. If one resource depends on a resource that's [conditionally deployed](conditional-resource-deployment.md), set the dependency as you would with any resource. When a conditional resource isn't deployed, Azure Resource Manager automatically removes it from the required dependencies.
-The following example **fails** this test.
+A value in the `dependsOn` array can't begin with a [concat](template-functions-array.md#concat) function.
+
+The following example **fails** because it contains an `if` function:
```json "dependsOn": [
The following example **fails** this test.
] ```
-The next example **passes** this test.
+This example **fails** because it begins with `concat`:
+
+```json
+"dependsOn": [
+ "[concat(variables('storageAccountName'))]"
+]
+```
+
+The following example **passes**:
```json "dependsOn": [
The following example **passes** this test:
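+
+For a dependency on a conditionally deployed resource, list it as you would any resource; a minimal sketch, assuming a hypothetical `deployStorage` parameter:
+
+```json
+"resources": [
+  {
+    "condition": "[parameters('deployStorage')]",
+    "type": "Microsoft.Storage/storageAccounts",
+    "apiVersion": "2021-02-01",
+    "name": "[variables('storageAccountName')]",
+    "location": "[parameters('location')]",
+    "sku": {
+      "name": "Standard_LRS"
+    },
+    "kind": "StorageV2"
+  },
+  {
+    "type": "Microsoft.Compute/virtualMachines",
+    "apiVersion": "2020-12-01",
+    "name": "demoVM",
+    "location": "[parameters('location')]",
+    "dependsOn": [
+      "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
+    ]
+  }
+]
+```
+
+When `deployStorage` is false, Resource Manager removes the storage account from the virtual machine's required dependencies automatically.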
Test name: **adminUsername Should Not Be A Literal**
-When setting an admin user name, don't use a literal value.
+When setting an `adminUsername`, don't use a literal value. Create a parameter for the user name and use an expression to reference the parameter's value.
-The following example **fails** this test:
+The following example **fails** with a literal value:
```json "osProfile": {
The following example **fails** this test:
} ```
-Instead, use a parameter. The following example **passes** this test:
+The following example **passes** with an expression:
```json "osProfile": {
The following example **passes** this test.
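+
+A minimal sketch of the passing pattern, assuming hypothetical `vmName` and `adminUsername` parameters:
+
+```json
+"osProfile": {
+  "computerName": "[parameters('vmName')]",
+  "adminUsername": "[parameters('adminUsername')]"
+}
+```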
Test name: **ManagedIdentityExtension must not be used**
-Don't apply the ManagedIdentity extension to a virtual machine. The extension was deprecated in 2019 and should no longer be used.
+Don't apply the `ManagedIdentity` extension to a virtual machine. The extension was deprecated in 2019 and should no longer be used.
## Outputs can't include secrets Test name: **Outputs Must Not Contain Secrets**
-Don't include any values in the outputs section that potentially expose secrets. The output from a template is stored in the deployment history, so a malicious user could find that information.
+Don't include any values in the `outputs` section that could potentially expose secrets, for example, secure parameters of type `secureString` or `secureObject`, or [list*](template-functions-resource.md#list) functions such as `listKeys`.
+
+The output from a template is stored in the deployment history, so a malicious user could find that information.
The following example **fails** the test because it includes a secure parameter in an output value.
The following example **fails** because it uses a [list*](template-functions-res
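+
+A minimal sketch of a failing output that returns a storage account key with `listKeys` (the resource name is illustrative):
+
+```json
+"outputs": {
+  "storageAccountKey": {
+    "type": "string",
+    "value": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', 'storageaccount1'), '2021-02-01').keys[0].value]"
+  }
+}
+```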
Test name: **CommandToExecute Must Use ProtectedSettings For Secrets**
-For resources with type `CustomScript`, use the encrypted `protectedSettings` when `commandToExecute` includes secret data such as a password. For example, secret data can be used in secure parameters of type `secureString` or `secureObject`, [list() functions](template-functions-resource.md#list) such as `listKeys()`, or custom scripts.
+For resources with type `CustomScript`, use the encrypted `protectedSettings` when `commandToExecute` includes secret data such as a password. For example, secret data can be used in secure parameters of type `secureString` or `secureObject`, [list*](template-functions-resource.md#list) functions such as `listKeys`, or custom scripts.
Don't use secret data in the `settings` object because it uses clear text. For more information, see [Microsoft.Compute virtualMachines/extensions](/azure/templates/microsoft.compute/virtualmachines/extensions), [Windows](/azure/virtual-machines/extensions/custom-script-windows), or [Linux](../../virtual-machines/extensions/custom-script-linux.md).
This example **fails** because `settings` uses `commandToExecute` with a secure
} ```
-This example **fails** because `settings` uses `commandToExecute` with a `listKeys()` function:
+This example **fails** because `settings` uses `commandToExecute` with a `listKeys` function:
```json "properties": {
This example **passes** because `protectedSettings` uses `commandToExecute` with
} ```
-This example **passes** because `protectedSettings` uses `commandToExecute` with a `listKeys()` function:
+This example **passes** because `protectedSettings` uses `commandToExecute` with a `listKeys` function:
```json "properties": {
This example **passes** because `protectedSettings` uses `commandToExecute` with
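+
+A minimal sketch of the passing pattern, with illustrative extension and parameter names; the secret stays inside the encrypted `protectedSettings` object:
+
+```json
+"properties": {
+  "publisher": "Microsoft.Compute",
+  "type": "CustomScriptExtension",
+  "typeHandlerVersion": "1.10",
+  "protectedSettings": {
+    "commandToExecute": "[concat('powershell.exe -Command ', parameters('secureCommand'))]"
+  }
+}
+```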
Test name: **apiVersions Should Be Recent In Reference Functions**
-Ensures the `apiVersions` used in [reference functions](template-functions-resource.md#reference) are recent and aren't preview versions. The test evaluates API versions against the resource providers available versions. An API version that's less than two years old from the date the test was run is considered recent.
+Ensures the `apiVersions` used in [reference](template-functions-resource.md#reference) functions are recent and aren't preview versions. The test evaluates API versions against the resource provider's available versions. An API version that's less than two years old from the date the test was run is considered recent.
This example **fails** because the API version is more than two years old:
For example, a `resourceId` function is considered ambiguous:
Test name: **Secure Params In Nested Deployments**
-Use the nested template's `expressionEvaluationOptions` object with `inner` scope to evaluate expressions that contain secure parameters of type `secureString` or `secureObject` or [list() functions](template-functions-resource.md#list) such as `listKeys()`. If the `outer` scope is used, expressions are evaluated in clear text within the parent template's scope. The secure value is then visible to anyone with access to the deployment history. The default value of `expressionEvaluationOptions` is `outer`.
+Use the nested template's `expressionEvaluationOptions` object with `inner` scope to evaluate expressions that contain secure parameters of type `secureString` or `secureObject` or [list*](template-functions-resource.md#list) functions such as `listKeys`. If the `outer` scope is used, expressions are evaluated in clear text within the parent template's scope. The secure value is then visible to anyone with access to the deployment history. The default value of `expressionEvaluationOptions` is `outer`.
For more information about nested templates, see [Microsoft.Resources/deployments](/azure/templates/microsoft.resources/deployments) and [Expression evaluation scope in nested templates](linked-templates.md#expression-evaluation-scope-in-nested-templates).
-This example **fails** because `expressionEvaluationOptions` uses `outer` scope to evaluate secure parameters or `list()` functions:
+This example **fails** because `expressionEvaluationOptions` uses `outer` scope to evaluate secure parameters or `list*` functions:
```json "resources": [
-{
- "type": "Microsoft.Resources/deployments",
- "apiVersion": "2021-04-01",
- "name": "nestedTemplate",
- "properties": {
- "expressionEvaluationOptions": {
- "scope": "outer"
+ {
+ "type": "Microsoft.Resources/deployments",
+ "apiVersion": "2021-04-01",
+ "name": "nestedTemplate",
+ "properties": {
+ "expressionEvaluationOptions": {
+ "scope": "outer"
+ }
}
+ }
+]
```
-This example **passes** because `expressionEvaluationOptions` uses `inner` scope to evaluate secure parameters or `list()` functions:
+This example **passes** because `expressionEvaluationOptions` uses `inner` scope to evaluate secure parameters or `list*` functions:
```json "resources": [
-{
- "type": "Microsoft.Resources/deployments",
- "apiVersion": "2021-04-01",
- "name": "nestedTemplate",
- "properties": {
- "expressionEvaluationOptions": {
- "scope": "inner"
+ {
+ "type": "Microsoft.Resources/deployments",
+ "apiVersion": "2021-04-01",
+ "name": "nestedTemplate",
+ "properties": {
+ "expressionEvaluationOptions": {
+ "scope": "inner"
+ }
}
+ }
+]
``` ## Next steps
azure-resource-manager Test Toolkit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/test-toolkit.md
Title: ARM template test toolkit description: Describes how to run the Azure Resource Manager template (ARM template) test toolkit on your template. The toolkit lets you see if you have implemented recommended practices. Previously updated : 09/02/2020 Last updated : 06/30/2021
The toolkit is a set of PowerShell scripts that can be run from a command in Pow
1. Start PowerShell.
-1. Navigate to the folder where you extracted the test toolkit. Within that folder, navigate to **arm-ttk** folder.
+1. Navigate to the folder where you extracted the test toolkit. Within that folder, navigate to the _arm-ttk_ folder.
-1. If your [execution policy](/powershell/module/microsoft.powershell.core/about/about_execution_policies) blocks scripts from the Internet, you need to unblock the script files. Make sure you're in the **arm-ttk** folder.
+1. If your [execution policy](/powershell/module/microsoft.powershell.core/about/about_execution_policies) blocks scripts from the Internet, you need to unblock the script files. Make sure you're in the _arm-ttk_ folder.
```powershell Get-ChildItem *.ps1, *.psd1, *.ps1xml, *.psm1 -Recurse | Unblock-File
The toolkit is a set of PowerShell scripts that can be run from a command in Pow
pwsh ```
+1. Navigate to the folder where you extracted the test toolkit. Within that folder, navigate to the _arm-ttk_ folder.
+1. Navigate to the folder where you extracted the test toolkit. Within that folder, navigate to _arm-ttk_ folder.
-1. If your [execution policy](/powershell/module/microsoft.powershell.core/about/about_execution_policies) blocks scripts from the Internet, you need to unblock the script files. Make sure you're in the **arm-ttk** folder.
+1. If your [execution policy](/powershell/module/microsoft.powershell.core/about/about_execution_policies) blocks scripts from the Internet, you need to unblock the script files. Make sure you're in the _arm-ttk_ folder.
```powershell Get-ChildItem *.ps1, *.psd1, *.ps1xml, *.psm1 -Recurse | Unblock-File
The toolkit is a set of PowerShell scripts that can be run from a command in Pow
pwsh ```
+1. Navigate to the folder where you extracted the test toolkit. Within that folder, navigate to the _arm-ttk_ folder.
+1. Navigate to the folder where you extracted the test toolkit. Within that folder, navigate to _arm-ttk_ folder.
-1. If your [execution policy](/powershell/module/microsoft.powershell.core/about/about_execution_policies) blocks scripts from the Internet, you need to unblock the script files. Make sure you're in the **arm-ttk** folder.
+1. If your [execution policy](/powershell/module/microsoft.powershell.core/about/about_execution_policies) blocks scripts from the Internet, you need to unblock the script files. Make sure you're in the _arm-ttk_ folder.
```powershell Get-ChildItem *.ps1, *.psd1, *.ps1xml, *.psm1 -Recurse | Unblock-File
The toolkit is a set of PowerShell scripts that can be run from a command in Pow
## Result format
-Tests that pass are displayed in **green** and prefaced with **[+]**.
+Tests that pass are displayed in **green** and prefaced with `[+]`.
-Tests that fail are displayed in **red** and prefaced with **[-]**.
+Tests that fail are displayed in **red** and prefaced with `[-]`.
+Tests with a warning are displayed in **yellow** and prefaced with `[?]`.
+ The text results are: ```powershell
-[+] adminUsername Should Not Be A Literal (24 ms)
-[+] apiVersions Should Be Recent (18 ms)
-[+] artifacts parameter (16 ms)
-[+] DeploymentTemplate Schema Is Correct (17 ms)
-[+] IDs Should Be Derived From ResourceIDs (15 ms)
-[-] Location Should Not Be Hardcoded (41 ms)
- azuredeploy.json must use the location parameter, not resourceGroup().location (except when used as a default value in the main template)
+deploymentTemplate
+[+] adminUsername Should Not Be A Literal (6 ms)
+[+] apiVersions Should Be Recent In Reference Functions (9 ms)
+[-] apiVersions Should Be Recent (6 ms)
+ Api versions must be the latest or under 2 years old (730 days) - API version 2019-06-01 of
+ Microsoft.Storage/storageAccounts is 760 days old
+ Valid Api Versions:
+ 2021-04-01
+ 2021-02-01
+ 2021-01-01
+ 2020-08-01-preview
+
+[+] artifacts parameter (4 ms)
+[+] CommandToExecute Must Use ProtectedSettings For Secrets (9 ms)
+[+] DependsOn Best Practices (5 ms)
+[+] Deployment Resources Must Not Be Debug (6 ms)
+[+] DeploymentTemplate Must Not Contain Hardcoded Uri (4 ms)
+[?] DeploymentTemplate Schema Is Correct (6 ms)
+ Template is using schema version '2015-01-01' which has been deprecated and is no longer
+ maintained.
``` ## Test parameters
-When you provide the **-TemplatePath** parameter, the toolkit looks in that folder for a template named azuredeploy.json or maintemplate.json. It tests this template first and then tests all other templates in the folder and its subfolders. The other templates are tested as linked templates. If your path includes a file named [CreateUiDefinition.json](../managed-applications/create-uidefinition-overview.md), it runs tests that are relevant to UI definition.
+When you provide the `-TemplatePath` parameter, the toolkit looks in that folder for a template named _azuredeploy.json_ or _maintemplate.json_. It tests this template first and then tests all other templates in the folder and its subfolders. The other templates are tested as linked templates. If your path includes a file named [CreateUiDefinition.json](../managed-applications/create-uidefinition-overview.md), it runs tests that are relevant to UI definition.
```powershell Test-AzTemplate -TemplatePath $TemplateFolder ```
-To test one file in that folder, add the **-File** parameter. However, the folder must still have a main template named azuredeploy.json or maintemplate.json.
+To test one file in that folder, add the `-File` parameter. However, the folder must still have a main template named _azuredeploy.json_ or _maintemplate.json_.
```powershell Test-AzTemplate -TemplatePath $TemplateFolder -File cdn.json ```
-By default, all tests are run. To specify individual tests to run, use the **-Test** parameter. Provide the name of the test. For the names, see [Test cases for toolkit](test-cases.md).
+By default, all tests are run. To specify individual tests to run, use the `-Test` parameter. Provide the name of the test. For the names, see [Test cases for toolkit](test-cases.md).
```powershell Test-AzTemplate -TemplatePath $TemplateFolder -Test "Resources Should Have Location"
Test-AzTemplate -TemplatePath $TemplateFolder -Test "Resources Should Have Locat
## Customize tests
-For ARM templates, the toolkit runs all of the tests in the folder **\arm-ttk\testcases\deploymentTemplate**. If you want to permanently remove a test, delete that file from the folder.
+For ARM templates, the toolkit runs all of the tests in the folder _\arm-ttk\testcases\deploymentTemplate_. If you want to permanently remove a test, delete that file from the folder.
-For [CreateUiDefinition](../managed-applications/create-uidefinition-overview.md) files, it runs all of the tests in the folder **\arm-ttk\testcases\CreateUiDefinition**.
+For [CreateUiDefinition](../managed-applications/create-uidefinition-overview.md) files, it runs all of the tests in the folder _\arm-ttk\testcases\CreateUiDefinition_.
-To add your own test, create a file with the naming convention: **Your-Custom-Test-Name.test.ps1**.
+To add your own test, create a file with the naming convention: _Your-Custom-Test-Name.test.ps1_.
The test can get the template as an object parameter or a string parameter. Typically, you use one or the other, but you can use both.
Use the object parameter when you need to get a section of the template and iter
```powershell param(
- [Parameter(Mandatory=$true,Position=0)]
- [PSObject]
- $TemplateObject
+ [Parameter(Mandatory=$true,Position=0)]
+ [PSObject]
+ $TemplateObject
) # Implement test logic that evaluates parts of the template.
Use the string parameter when you need to do a string operation on the whole tem
```powershell param(
- [Parameter(Mandatory)]
- [string]
- $TemplateText
+ [Parameter(Mandatory)]
+ [string]
+ $TemplateText
) # Implement test logic that performs string operations.
Or, you can implement your own tasks. The following example shows how to downloa
```json {
- "environment": {},
- "enabled": true,
- "continueOnError": false,
- "alwaysRun": false,
- "displayName": "Download TTK",
- "timeoutInMinutes": 0,
- "condition": "succeeded()",
- "task": {
- "id": "e213ff0f-5d5c-4791-802d-52ea3e7be1f1",
- "versionSpec": "2.*",
- "definitionType": "task"
- },
- "inputs": {
- "targetType": "inline",
- "filePath": "",
- "arguments": "",
- "script": "New-Item '$(ttk.folder)' -ItemType Directory\nInvoke-WebRequest -uri '$(ttk.uri)' -OutFile \"$(ttk.folder)/$(ttk.asset.filename)\" -Verbose\nGet-ChildItem '$(ttk.folder)' -Recurse\n\nWrite-Host \"Expanding files...\"\nExpand-Archive -Path '$(ttk.folder)/*.zip' -DestinationPath '$(ttk.folder)' -Verbose\n\nWrite-Host \"Expanded files found:\"\nGet-ChildItem '$(ttk.folder)' -Recurse",
- "errorActionPreference": "stop",
- "failOnStderr": "false",
- "ignoreLASTEXITCODE": "false",
- "pwsh": "true",
- "workingDirectory": ""
- }
+ "environment": {},
+ "enabled": true,
+ "continueOnError": false,
+ "alwaysRun": false,
+ "displayName": "Download TTK",
+ "timeoutInMinutes": 0,
+ "condition": "succeeded()",
+ "task": {
+ "id": "e213ff0f-5d5c-4791-802d-52ea3e7be1f1",
+ "versionSpec": "2.*",
+ "definitionType": "task"
+ },
+ "inputs": {
+ "targetType": "inline",
+ "filePath": "",
+ "arguments": "",
+ "script": "New-Item '$(ttk.folder)' -ItemType Directory\nInvoke-WebRequest -uri '$(ttk.uri)' -OutFile \"$(ttk.folder)/$(ttk.asset.filename)\" -Verbose\nGet-ChildItem '$(ttk.folder)' -Recurse\n\nWrite-Host \"Expanding files...\"\nExpand-Archive -Path '$(ttk.folder)/*.zip' -DestinationPath '$(ttk.folder)' -Verbose\n\nWrite-Host \"Expanded files found:\"\nGet-ChildItem '$(ttk.folder)' -Recurse",
+ "errorActionPreference": "stop",
+ "failOnStderr": "false",
+ "ignoreLASTEXITCODE": "false",
+ "pwsh": "true",
+ "workingDirectory": ""
+ }
} ```
The next example shows how to run the tests.
```json {
- "environment": {},
- "enabled": true,
- "continueOnError": true,
- "alwaysRun": false,
- "displayName": "Run Best Practices Tests",
- "timeoutInMinutes": 0,
- "condition": "succeeded()",
- "task": {
- "id": "e213ff0f-5d5c-4791-802d-52ea3e7be1f1",
- "versionSpec": "2.*",
- "definitionType": "task"
- },
- "inputs": {
- "targetType": "inline",
- "filePath": "",
- "arguments": "",
- "script": "Import-Module $(ttk.folder)/arm-ttk/arm-ttk.psd1 -Verbose\n$testOutput = @(Test-AzTemplate -TemplatePath \"$(sample.folder)\")\n$testOutput\n\nif ($testOutput | ? {$_.Errors }) {\n exit 1 \n} else {\n Write-Host \"##vso[task.setvariable variable=result.best.practice]$true\"\n exit 0\n} \n",
- "errorActionPreference": "continue",
- "failOnStderr": "true",
- "ignoreLASTEXITCODE": "false",
- "pwsh": "true",
- "workingDirectory": ""
- }
+ "environment": {},
+ "enabled": true,
+ "continueOnError": true,
+ "alwaysRun": false,
+ "displayName": "Run Best Practices Tests",
+ "timeoutInMinutes": 0,
+ "condition": "succeeded()",
+ "task": {
+ "id": "e213ff0f-5d5c-4791-802d-52ea3e7be1f1",
+ "versionSpec": "2.*",
+ "definitionType": "task"
+ },
+ "inputs": {
+ "targetType": "inline",
+ "filePath": "",
+ "arguments": "",
+ "script": "Import-Module $(ttk.folder)/arm-ttk/arm-ttk.psd1 -Verbose\n$testOutput = @(Test-AzTemplate -TemplatePath \"$(sample.folder)\")\n$testOutput\n\nif ($testOutput | ? {$_.Errors }) {\n exit 1 \n} else {\n Write-Host \"##vso[task.setvariable variable=result.best.practice]$true\"\n exit 0\n} \n",
+ "errorActionPreference": "continue",
+ "failOnStderr": "true",
+ "ignoreLASTEXITCODE": "false",
+ "pwsh": "true",
+ "workingDirectory": ""
+ }
} ``` ## Next steps
-- To learn about the default tests, see [Default test cases for ARM template test toolkit](test-cases.md).
-- For a Microsoft Learn module that covers using the test toolkit, see [Preview changes and validate Azure resources by using what-if and the ARM template test toolkit](/learn/modules/arm-template-test/).
+* To learn about the default tests, see [Default test cases for ARM template test toolkit](test-cases.md).
+* For a Microsoft Learn module that covers using the test toolkit, see [Validate Azure resources by using the ARM Template Test Toolkit](/learn/modules/arm-template-test/).
azure-signalr Signalr Howto Troubleshoot Live Trace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/signalr-howto-troubleshoot-live-trace.md
+
+ Title: How to use live trace tool for Azure SignalR service
+description: Learn how to use live trace tool for Azure SignalR service
++++ Last updated : 06/30/2021++
+# How to use live trace tool for Azure SignalR service
+
+The live trace tool is a single web application for capturing and displaying live traces in Azure SignalR service. The live traces can be collected in real time without any dependency on other services.
+You can enable and disable the live trace feature with a single click, and choose any log category that you're interested in.
+
+> [!NOTE]
+> The live traces are counted as outbound messages.
+
+## Launch the live trace tool
+
+1. Go to the Azure portal.
+2. Check **Enable Live Trace**.
+3. Click the **Save** button in the toolbar and wait for the changes to take effect.
+4. On the **Diagnostic Settings** page of your Azure SignalR service instance, select **Open Live Trace Tool**.
+
+ :::image type="content" source="media/signalr-howto-troubleshoot-live-trace/live-traces-with-live-trace-tool.png" alt-text="Screenshot of launching the live trace tool.":::
+
+## Capture live traces
+
+The live trace tool provides fundamental functionality to help you capture live traces for troubleshooting.
+
+* **Capture**: Begin capturing real-time live traces from the Azure SignalR service instance with the live trace tool.
+* **Clear**: Clear the captured real-time live traces.
+* **Export**: Export live traces to a file. The currently supported file format is CSV.
+* **Log filter**: The live trace tool lets you filter the captured real-time live traces with one specific keyword. Common separators (for example, space, comma, and semicolon) are treated as part of the keyword.
+* **Status**: The status shows whether the live trace tool is connected to or disconnected from the specific instance.
++
+The real-time live traces captured by the live trace tool contain detailed information for troubleshooting.
+
+| Name | Description |
+| | |
+| Time | Log event time |
+| Log Level | Log event level (Trace/Debug/Informational/Warning/Error) |
+| Event Name | Operation name of the event |
+| Message | Detailed message of log event |
+| Exception | The run-time exception of Azure SignalR service |
+| Hub | User-defined Hub Name |
+| Connection ID | Identity of the connection |
+| Connection Type | Type of the connection. Allowed values are `Server` (connections between server and service) and `Client` (connections between client and service)|
+| User ID | Identity of the user |
+| IP | The IP address of client |
+| Server Sticky | Routing mode of client. Allowed values are `Disabled`, `Preferred` and `Required`. For more information, see [ServerStickyMode](https://github.com/Azure/azure-signalr/blob/master/docs/run-asp-net-core.md#serverstickymode) |
+| Transport | The transport that the client can use to send HTTP requests. Allowed values are `WebSockets`, `ServerSentEvents` and `LongPolling`. For more information, see [HttpTransportType](https://docs.microsoft.com/dotnet/api/microsoft.aspnetcore.http.connections.httptransporttype) |
+
+## Next steps
+
+In this guide, you learned how to use the live trace tool. You can also learn how to handle common issues:
+* Troubleshooting guides: To troubleshoot typical issues based on live traces, see our [troubleshooting guide](./signalr-howto-troubleshoot-guide.md).
+* Troubleshooting methods: For self-diagnosis to find the root cause directly or narrow down the issue, see our [troubleshooting methods introduction](./signalr-howto-troubleshoot-method.md).
azure-signalr Signalr Howto Troubleshoot Method https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/signalr-howto-troubleshoot-method.md
First, you need to check from the Azure portal which [ServiceMode](./concept-ser
* For `Classic` mode, refer to [classic mode troubleshooting](#classic_mode_tsg)
-<a name="default_mode_tsg"></a>
+Second, you need to capture service traces to troubleshoot. For how to capture traces, refer to [How to capture service traces](#how-to-capture-service-traces).
[Having issues or feedback about the troubleshooting? Let us know.](https://aka.ms/asrs/survey/troubleshooting)
+## How to capture service traces
+
+To simplify the troubleshooting process, Azure SignalR service provides a **live trace tool** that exposes service traces in the **connectivity** and **messaging** categories. The traces include, but aren't limited to, connection connected/disconnected events and message received/left events. With the **live trace tool**, you can capture, view, sort, filter, and export live traces. For more details, refer to [How to use live trace tool](./signalr-howto-troubleshoot-live-trace.md).
+
+[Having issues or feedback about the troubleshooting? Let us know.](https://aka.ms/asrs/survey/troubleshooting)
+
+<a name="default_mode_tsg"></a>
+ ## Default mode troubleshooting When **ASRS** is in *Default* mode, there are **three** roles: *Client*, *Server*, and *Service*:
azure-sql Arm Templates Content Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/arm-templates-content-guide.md
Previously updated : 05/24/2021 Last updated : 06/30/2021 # Azure Resource Manager templates for Azure SQL Database & SQL Managed Instance
The following table includes links to Azure Resource Manager templates for Azure
| [Import data from Blob storage using ADF V2](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.datafactory/data-factory-v2-blob-to-sql-copy) | This Azure Resource Manager template creates an instance of Azure Data Factory V2 that copies data from Azure Blob storage to SQL Database.| | [HDInsight cluster with a database](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.hdinsight/hdinsight-linux-with-sql-database) | This template allows you to create an HDInsight cluster, a logical SQL server, a database, and two tables. This template is used by the [Use Sqoop with Hadoop in HDInsight article](../../hdinsight/hadoop/hdinsight-use-sqoop.md). | | [Azure Logic App that runs a SQL Stored Procedure on a schedule](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.logic/logic-app-sql-proc) | This template allows you to create a logic app that will run a SQL stored procedure on schedule. Any arguments for the procedure can be put into the body section of the template.|
+| [Provision server with Azure AD-only authentication enabled](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.sql/sql-logical-server-aad-only-auth) | This template creates a SQL logical server with an Azure AD admin set for the server and Azure AD-only authentication enabled. |
## [Azure SQL Managed Instance](#tab/managed-instance)
azure-sql Authentication Azure Ad Only Authentication Create Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-azure-ad-only-authentication-create-server.md
+
+ Title: Create server with Azure Active Directory only authentication enabled in Azure SQL
+description: This article guides you through creating an Azure SQL logical server or managed instance with Azure Active Directory (Azure AD) only authentication enabled, which disables connectivity using SQL Authentication
++++++ Last updated : 06/30/2021++
+# Create server with Azure AD-only authentication enabled in Azure SQL
++
+> [!NOTE]
+> The **Azure AD-only authentication** feature discussed in this article is in **public preview**. For detailed information about this feature, see [Azure AD-only authentication with Azure SQL](authentication-azure-ad-only-authentication.md). Azure AD-only authentication is currently not available for Azure Synapse Analytics.
+
+This how-to guide outlines the steps to create an [Azure SQL logical server](logical-servers.md) or [Azure SQL Managed Instance](../managed-instance/sql-managed-instance-paas-overview.md) with [Azure AD-only authentication](authentication-azure-ad-only-authentication.md) enabled during provisioning. The Azure AD-only authentication feature prevents users from connecting to the server or managed instance using SQL authentication, and only allows connection using Azure AD authentication.
+
+## Prerequisites
+
+- [Az 6.1.0](https://www.powershellgallery.com/packages/Az/6.1.0) module or higher is needed when using PowerShell.
+- If you're provisioning a managed instance using PowerShell or the REST API, a virtual network and subnet need to be created before you begin. For more information, see [Create a virtual network for Azure SQL Managed Instance](../managed-instance/virtual-network-subnet-create-arm-template.md).
+
+## Permissions
+
+To provision an Azure SQL logical server or managed instance, you'll need to have the appropriate permissions to create these resources. Azure users with higher permissions, such as subscription [Owners](../../role-based-access-control/built-in-roles.md#owner), [Contributors](../../role-based-access-control/built-in-roles.md#contributor), [Service Administrators](/azure/role-based-access-control/rbac-and-directory-admin-roles#classic-subscription-administrator-roles), and [Co-Administrators](/azure/role-based-access-control/rbac-and-directory-admin-roles#classic-subscription-administrator-roles) have the privilege to create a SQL server or managed instance. To create these resources with the least privileged Azure RBAC role, use the [SQL Server Contributor](../../role-based-access-control/built-in-roles.md#sql-server-contributor) role for SQL Database and [SQL Managed Instance Contributor](../../role-based-access-control/built-in-roles.md#sql-managed-instance-contributor) role for Managed Instance.
+
+The [SQL Security Manager](../../role-based-access-control/built-in-roles.md#sql-security-manager) Azure RBAC role doesn't have enough permissions to create a server or instance with Azure AD-only authentication enabled. The [SQL Security Manager](../../role-based-access-control/built-in-roles.md#sql-security-manager) role will be required to manage the Azure AD-only authentication feature after server or instance creation.
+
+## Provision with Azure AD-only authentication enabled
+
+The following section provides you with examples and scripts on how to create a SQL logical server or managed instance with an Azure AD admin set for the server or instance, and have Azure AD-only authentication enabled during server creation. For more information on the feature, see [Azure AD-only authentication](authentication-azure-ad-only-authentication.md).
+
+In our examples, we're enabling Azure AD-only authentication during server or managed instance creation, with a system-assigned server admin and password. This prevents server admin access when Azure AD-only authentication is enabled, and only allows the Azure AD admin to access the resource. Optionally, you can add parameters to the APIs to include your own server admin and password during server creation. However, the password can't be reset until you disable Azure AD-only authentication.
+
+To change the existing properties after server or managed instance creation, other existing APIs should be used. See [Managing Azure AD-only authentication using APIs](authentication-azure-ad-only-authentication.md#managing-azure-ad-only-authentication-using-apis) and [Configure and manage Azure AD authentication with Azure SQL](authentication-aad-configure.md) for more information.
+
+> [!NOTE]
+> If Azure AD-only authentication is set to false, which it is by default, a server admin and password will need to be included in all APIs during server or managed instance creation.
+
+## Azure SQL Database
+
+# [PowerShell](#tab/azure-powershell)
+
+The PowerShell command `New-AzSqlServer` is used to provision a new Azure SQL logical server. The following command provisions a new logical server with Azure AD-only authentication enabled.
+
+The server SQL Administrator login will be automatically created and the password will be set to a random password. Since SQL Authentication connectivity is disabled with this server creation, the SQL Administrator login won't be used.
+
+The server Azure AD admin will be the account you set for `<AzureADAccount>`, and can be used to manage the server.
+
+Replace the following values in the example:
+
+- `<ResourceGroupName>`: Name of the resource group for your Azure SQL logical server
+- `<Location>`: Location of the server, such as `West US`, or `Central US`
+- `<ServerName>`: Use a unique Azure SQL logical server name
+- `<AzureADAccount>`: Can be an Azure AD user or group. For example, `DummyLogin`
+
+```powershell
+New-AzSqlServer -ResourceGroupName "<ResourceGroupName>" -Location "<Location>" -ServerName "<ServerName>" -ServerVersion "12.0" -ExternalAdminName "<AzureADAccount>" -EnableActiveDirectoryOnlyAuthentication
+```
+
+For more information, see [New-AzSqlServer](/powershell/module/az.sql/new-azsqlserver).
+
+# [Rest API](#tab/rest-api)
+
+The [Servers - Create Or Update](/rest/api/sql/2020-11-01-preview/servers/create-or-update) Rest API can be used to create an Azure SQL logical server with Azure AD-only authentication enabled during provisioning.
+
+The script below will provision an Azure SQL logical server, set the Azure AD admin as `<AzureADAccount>`, and enable Azure AD-only authentication. The server SQL Administrator login will also be created automatically and the password will be set to a random password. Since SQL Authentication connectivity is disabled with this provisioning, the SQL Administrator login won't be used.
+
+The Azure AD admin, `<AzureADAccount>`, can be used to manage the server when the provisioning is complete.
+
+Replace the following values in the example:
+
+- `<tenantId>`: Can be found by going to the [Azure portal](https://portal.azure.com), and going to your **Azure Active Directory** resource. In the **Overview** pane, you should see your **Tenant ID**
+- `<subscriptionId>`: Your subscription ID can be found in the Azure portal
+- `<ServerName>`: Use a unique Azure SQL logical server name
+- `<ResourceGroupName>`: Name of the resource group for your Azure SQL logical server
+- `<AzureADAccount>`: Can be an Azure AD user or group. For example, `DummyLogin`
+- `<Location>`: Location of the server, such as `westus2`, or `centralus`
+- `<objectId>`: Can be found by going to the [Azure portal](https://portal.azure.com), and going to your **Azure Active Directory** resource. In the **User** pane, search for the Azure AD user and find their **Object ID**
+
+```rest
+Import-Module Azure
+Import-Module MSAL.PS
+
+$tenantId = '<tenantId>'
+$clientId = '1950a258-227b-4e31-a9cf-717495945fc2' # Static Microsoft client ID used for getting a token
+$subscriptionId = '<subscriptionId>'
+$uri = "urn:ietf:wg:oauth:2.0:oob"
+$authUrl = "https://login.windows.net/$tenantId"
+$serverName = "<ServerName>"
+$resourceGroupName = "<ResourceGroupName>"
+
+Login-AzAccount -tenantId $tenantId
+
+# login as a user with SQL Server Contributor role or higher
+
+# Get a token
+
+$result = Get-MsalToken -RedirectUri $uri -ClientId $clientId -TenantId $tenantId -Scopes "https://management.core.windows.net/.default"
+
+# Authentication header
+$authHeader = @{
+'Content-Type'='application/json'
+'Authorization'=$result.CreateAuthorizationHeader()
+}
+
+# Enable Azure AD-only auth
+# No server admin is specified, and only Azure AD admin and Azure AD-only authentication is set to true
+# Server admin (login and password) is generated by the system
+
+# Authentication body
+# The sid is the Azure AD Object ID for the user
+
+$body = '{
+"location": "<Location>",
+"properties": { "administrators":{ "login":"<AzureADAccount>", "sid":"<objectId>", "tenantId":"<tenantId>", "principalType":"User", "azureADOnlyAuthentication":true }
+ }
+}'
+
+# Provision the server
+
+Invoke-RestMethod -Uri https://management.azure.com/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Sql/servers/$serverName/?api-version=2020-11-01-preview -Method PUT -Headers $authHeader -Body $body -ContentType "application/json"
+```
+
+To check the server status, you can use the following script:
+
+```rest
+$uri = 'https://management.azure.com/subscriptions/'+$subscriptionId+'/resourceGroups/'+$resourceGroupName+'/providers/Microsoft.Sql/servers/'+$serverName+'?api-version=2020-11-01-preview&$expand=administrators/activedirectory'
+
+# Use GET to retrieve the server's current state, including the Azure AD admin settings
+$response = Invoke-WebRequest -Uri $uri -Method GET -Headers $authHeader
+
+$response.StatusCode
+
+$response.Content
+```
+
+# [ARM Template](#tab/arm-template)
+
+For more information and ARM templates, see [Azure Resource Manager templates for Azure SQL Database & SQL Managed Instance](arm-templates-content-guide.md).
+
+To provision a SQL logical server with an Azure AD admin set for the server and Azure AD-only authentication enabled using an ARM Template, see our [Azure SQL logical server with Azure AD-only authentication](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.sql/sql-logical-server-aad-only-auth) quickstart template.
+
+You can also use the following template. Use a [Custom deployment in the Azure portal](https://portal.azure.com/#create/Microsoft.Template), and **Build your own template in the editor**. Next, **Save** the configuration once you've pasted in the example.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.1",
+ "parameters": {
+ "server": {
+ "type": "string",
+ "defaultValue": "[uniqueString('sql', resourceGroup().id)]",
+ "metadata": {
+ "description": "The name of the SQL logical server."
+ }
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "metadata": {
+ "description": "Location for all resources."
+ }
+ },
+ "aad_admin_name": {
+ "type": "String",
+ "metadata": {
+ "description": "The name of the Azure AD admin for the SQL server."
+ }
+ },
+ "aad_admin_objectid": {
+ "type": "String",
+ "metadata": {
+ "description": "The Object ID of the Azure AD admin."
+ }
+ },
+ "aad_admin_tenantid": {
+ "type": "String",
+ "defaultValue": "[subscription().tenantId]",
+ "metadata": {
+ "description": "The Tenant ID of the Azure Active Directory"
+ }
+ },
+ "aad_admin_type": {
+ "defaultValue": "User",
+ "allowedValues": [
+ "User",
+ "Group",
+ "Application"
+ ],
+ "type": "String"
+ },
+ "aad_only_auth": {
+ "defaultValue": true,
+ "type": "Bool"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Sql/servers",
+ "apiVersion": "2020-11-01-preview",
+ "name": "[parameters('server')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "administrators": {
+ "login": "[parameters('aad_admin_name')]",
+ "sid": "[parameters('aad_admin_objectid')]",
+ "tenantId": "[parameters('aad_admin_tenantid')]",
+ "principalType": "[parameters('aad_admin_type')]",
+ "azureADOnlyAuthentication": "[parameters('aad_only_auth')]"
+ }
+ }
+ }
+ ]
+}
+```
+++
+## Azure SQL Managed Instance
+
+# [PowerShell](#tab/azure-powershell)
+
+The PowerShell command `New-AzSqlInstance` is used to provision a new Azure SQL Managed Instance. The following command provisions a new managed instance with Azure AD-only authentication enabled.
+
+> [!NOTE]
+> The script requires a virtual network and subnet be created as a prerequisite.
+
+The managed instance SQL Administrator login will be automatically created and the password will be set to a random password. Since SQL Authentication connectivity is disabled with this provisioning, the SQL Administrator login won't be used.
+
+The Azure AD admin will be the account you set for `<AzureADAccount>`, and can be used to manage the instance when the provisioning is complete.
+
+Replace the following values in the example:
+
+- `<managedinstancename>`: Name the managed instance you want to create
+- `<ResourceGroupName>`: Name of the resource group for your managed instance. The resource group should also include the virtual network and subnet created
+- `<Location>`: Location of the server, such as `West US`, or `Central US`
+- `<AzureADAccount>`: Can be an Azure AD user or group. For example, `DummyLogin`
+- The `SubnetId` parameter needs to be updated with the `<ResourceGroupName>`, the `Subscription ID`, `<VNetName>`, and `<SubnetName>`. Your subscription ID can be found in the Azure portal
++
+```powershell
+New-AzSqlInstance -Name "<managedinstancename>" -ResourceGroupName "<ResourceGroupName>" -ExternalAdminName "<AzureADAccount>" -EnableActiveDirectoryOnlyAuthentication -Location "<Location>" -SubnetId "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/<ResourceGroupName>/providers/Microsoft.Network/virtualNetworks/<VNetName>/subnets/<SubnetName>" -LicenseType LicenseIncluded -StorageSizeInGB 1024 -VCore 16 -Edition "GeneralPurpose" -ComputeGeneration Gen4
+```
+
+For more information, see [New-AzSqlInstance](/powershell/module/az.sql/new-azsqlinstance).
+
+# [Rest API](#tab/rest-api)
+
+The [Managed Instances - Create Or Update](/rest/api/sql/2020-11-01-preview/managed-instances/create-or-update) Rest API can be used to create a managed instance with Azure AD-only authentication enabled during provisioning.
+
+> [!NOTE]
+> The script requires a virtual network and subnet be created as a prerequisite.
+
+The script below will provision a managed instance, set the Azure AD admin as `<AzureADAccount>`, and enable Azure AD-only authentication. The instance SQL Administrator login will also be created automatically and the password will be set to a random password. Since SQL Authentication connectivity is disabled with this provisioning, the SQL Administrator login won't be used.
+
+The Azure AD admin, `<AzureADAccount>`, can be used to manage the instance when the provisioning is complete.
+
+Replace the following values in the example:
+
+- `<tenantId>`: Can be found by going to the [Azure portal](https://portal.azure.com), and going to your **Azure Active Directory** resource. In the **Overview** pane, you should see your **Tenant ID**
+- `<subscriptionId>`: Your subscription ID can be found in the Azure portal
+- `<instanceName>`: Use a unique managed instance name
+- `<ResourceGroupName>`: Name of the resource group for your managed instance
+- `<AzureADAccount>`: Can be an Azure AD user or group. For example, `DummyLogin`
+- `<Location>`: Location of the server, such as `westus2`, or `centralus`
+- `<objectId>`: Can be found by going to the [Azure portal](https://portal.azure.com), and going to your **Azure Active Directory** resource. In the **User** pane, search for the Azure AD user and find their **Object ID**
+- The `subnetId` parameter needs to be updated with the `<ResourceGroupName>`, the `Subscription ID`, `<VNetName>`, and `<SubnetName>`
++
+```rest
+Import-Module Azure
+Import-Module MSAL.PS
+
+$tenantId = '<tenantId>'
+$clientId = '1950a258-227b-4e31-a9cf-717495945fc2' # Static Microsoft client ID used for getting a token
+$subscriptionId = '<subscriptionId>'
+$uri = "urn:ietf:wg:oauth:2.0:oob"
+$instanceName = "<instanceName>"
+$resourceGroupName = "<ResourceGroupName>"
+$scopes ="https://management.core.windows.net/.default"
+
+Login-AzAccount -tenantId $tenantId
+
+# Login as an Azure AD user with permission to provision a managed instance
+
+$result = Get-MsalToken -RedirectUri $uri -ClientId $clientId -TenantId $tenantId -Scopes $scopes
+
+$authHeader = @{
+'Content-Type'='application/json'
+'Authorization'=$result.CreateAuthorizationHeader()
+}
+
+$body = '{
+"name": "<instanceName>", "type": "Microsoft.Sql/managedInstances", "identity": { "type": "SystemAssigned"},"location": "<Location>", "sku": {"name": "GP_Gen5", "tier": "GeneralPurpose", "family":"Gen5","capacity": 8},
+"properties": {"administrators":{ "login":"<AzureADAccount>", "sid":"<objectId>", "tenantId":"<tenantId>", "principalType":"User", "azureADOnlyAuthentication":true },
+"subnetId": "/subscriptions/<subscriptionId>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Network/virtualNetworks/<VNetName>/subnets/<SubnetName>",
+"licenseType": "LicenseIncluded", "vCores": 8, "storageSizeInGB": 2048, "collation": "SQL_Latin1_General_CP1_CI_AS", "proxyOverride": "Proxy", "timezoneId": "UTC", "privateEndpointConnections": [], "storageAccountType": "GRS", "zoneRedundant": false
+ }
+}'
+
+# To provision the instance, execute the `PUT` command
+
+Invoke-RestMethod -Uri https://management.azure.com/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Sql/managedInstances/$instanceName/?api-version=2020-11-01-preview -Method PUT -Headers $authHeader -Body $body -ContentType "application/json"
+
+```
+
+To check the results, execute the `GET` command:
+
+```rest
+Invoke-RestMethod -Uri https://management.azure.com/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Sql/managedInstances/$instanceName/?api-version=2020-11-01-preview -Method GET -Headers $authHeader | Format-List
+```
+
+# [ARM Template](#tab/arm-template)
+
+To provision a new managed instance, virtual network and subnet, with an Azure AD admin set for the instance and Azure AD-only authentication enabled, use the following template.
+
+Use a [Custom deployment in the Azure portal](https://portal.azure.com/#create/Microsoft.Template), and **Build your own template in the editor**. Next, **Save** the configuration once you've pasted in the example.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.1",
+ "parameters": {
+ "managedInstanceName": {
+ "type": "String",
+ "metadata": {
+ "description": "Enter managed instance name."
+ }
+ },
+ "aad_admin_name": {
+ "type": "String",
+ "metadata": {
+ "description": "The name of the Azure AD admin for the SQL managed instance."
+ }
+ },
+ "aad_admin_objectid": {
+ "type": "String",
+ "metadata": {
+ "description": "The Object ID of the Azure AD admin."
+ }
+ },
+ "aad_admin_tenantid": {
+ "type": "String",
+ "defaultValue": "[subscription().tenantId]",
+ "metadata": {
+ "description": "The Tenant ID of the Azure Active Directory"
+ }
+ },
+ "aad_admin_type": {
+ "defaultValue": "User",
+ "allowedValues": [
+ "User",
+ "Group",
+ "Application"
+ ],
+ "type": "String"
+ },
+ "aad_only_auth": {
+ "defaultValue": true,
+ "type": "Bool"
+ },
+ "location": {
+ "defaultValue": "[resourceGroup().location]",
+ "type": "String",
+ "metadata": {
+ "description": "Enter location. If you leave this field blank resource group location would be used."
+ }
+ },
+ "virtualNetworkName": {
+ "type": "String",
+ "defaultValue": "SQLMI-VNET",
+ "metadata": {
+ "description": "Enter virtual network name. If you leave this field blank name will be created by the template."
+ }
+ },
+ "addressPrefix": {
+ "defaultValue": "10.0.0.0/16",
+ "type": "String",
+ "metadata": {
+ "description": "Enter virtual network address prefix."
+ }
+ },
+ "subnetName": {
+ "type": "String",
+ "defaultValue": "ManagedInstances",
+ "metadata": {
+ "description": "Enter subnet name. If you leave this field blank name will be created by the template."
+ }
+ },
+ "subnetPrefix": {
+ "defaultValue": "10.0.0.0/24",
+ "type": "String",
+ "metadata": {
+ "description": "Enter subnet address prefix."
+ }
+ },
+ "skuName": {
+ "defaultValue": "GP_Gen5",
+ "allowedValues": [
+ "GP_Gen5",
+ "BC_Gen5"
+ ],
+ "type": "String",
+ "metadata": {
+ "description": "Enter sku name."
+ }
+ },
+ "vCores": {
+ "defaultValue": 16,
+ "allowedValues": [
+ 8,
+ 16,
+ 24,
+ 32,
+ 40,
+ 64,
+ 80
+ ],
+ "type": "Int",
+ "metadata": {
+ "description": "Enter number of vCores."
+ }
+ },
+ "storageSizeInGB": {
+ "defaultValue": 256,
+ "minValue": 32,
+ "maxValue": 8192,
+ "type": "Int",
+ "metadata": {
+ "description": "Enter storage size."
+ }
+ },
+ "licenseType": {
+ "defaultValue": "LicenseIncluded",
+ "allowedValues": [
+ "BasePrice",
+ "LicenseIncluded"
+ ],
+ "type": "String",
+ "metadata": {
+ "description": "Enter license type."
+ }
+ }
+ },
+ "variables": {
+ "networkSecurityGroupName": "[concat('SQLMI-', parameters('managedInstanceName'), '-NSG')]",
+ "routeTableName": "[concat('SQLMI-', parameters('managedInstanceName'), '-Route-Table')]"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Network/networkSecurityGroups",
+ "apiVersion": "2020-06-01",
+ "name": "[variables('networkSecurityGroupName')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "securityRules": [
+ {
+ "name": "allow_tds_inbound",
+ "properties": {
+ "description": "Allow access to data",
+ "protocol": "Tcp",
+ "sourcePortRange": "*",
+ "destinationPortRange": "1433",
+ "sourceAddressPrefix": "VirtualNetwork",
+ "destinationAddressPrefix": "*",
+ "access": "Allow",
+ "priority": 1000,
+ "direction": "Inbound"
+ }
+ },
+ {
+ "name": "allow_redirect_inbound",
+ "properties": {
+ "description": "Allow inbound redirect traffic to Managed Instance inside the virtual network",
+ "protocol": "Tcp",
+ "sourcePortRange": "*",
+ "destinationPortRange": "11000-11999",
+ "sourceAddressPrefix": "VirtualNetwork",
+ "destinationAddressPrefix": "*",
+ "access": "Allow",
+ "priority": 1100,
+ "direction": "Inbound"
+ }
+ },
+ {
+ "name": "deny_all_inbound",
+ "properties": {
+ "description": "Deny all other inbound traffic",
+ "protocol": "*",
+ "sourcePortRange": "*",
+ "destinationPortRange": "*",
+ "sourceAddressPrefix": "*",
+ "destinationAddressPrefix": "*",
+ "access": "Deny",
+ "priority": 4096,
+ "direction": "Inbound"
+ }
+ },
+ {
+ "name": "deny_all_outbound",
+ "properties": {
+ "description": "Deny all other outbound traffic",
+ "protocol": "*",
+ "sourcePortRange": "*",
+ "destinationPortRange": "*",
+ "sourceAddressPrefix": "*",
+ "destinationAddressPrefix": "*",
+ "access": "Deny",
+ "priority": 4096,
+ "direction": "Outbound"
+ }
+ }
+ ]
+ }
+ },
+ {
+ "type": "Microsoft.Network/routeTables",
+ "apiVersion": "2020-06-01",
+ "name": "[variables('routeTableName')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "disableBgpRoutePropagation": false
+ }
+ },
+ {
+ "type": "Microsoft.Network/virtualNetworks",
+ "apiVersion": "2020-06-01",
+ "name": "[parameters('virtualNetworkName')]",
+ "location": "[parameters('location')]",
+ "dependsOn": [
+ "[variables('routeTableName')]",
+ "[variables('networkSecurityGroupName')]"
+ ],
+ "properties": {
+ "addressSpace": {
+ "addressPrefixes": [
+ "[parameters('addressPrefix')]"
+ ]
+ },
+ "subnets": [
+ {
+ "name": "[parameters('subnetName')]",
+ "properties": {
+ "addressPrefix": "[parameters('subnetPrefix')]",
+ "routeTable": {
+ "id": "[resourceId('Microsoft.Network/routeTables', variables('routeTableName'))]"
+ },
+ "networkSecurityGroup": {
+ "id": "[resourceId('Microsoft.Network/networkSecurityGroups', variables('networkSecurityGroupName'))]"
+ },
+ "delegations": [
+ {
+ "name": "miDelegation",
+ "properties": {
+ "serviceName": "Microsoft.Sql/managedInstances"
+ }
+ }
+ ]
+ }
+ }
+ ]
+ }
+ },
+ {
+ "type": "Microsoft.Sql/managedInstances",
+ "apiVersion": "2020-11-01-preview",
+ "name": "[parameters('managedInstanceName')]",
+ "location": "[parameters('location')]",
+ "dependsOn": [
+ "[parameters('virtualNetworkName')]"
+ ],
+ "sku": {
+ "name": "[parameters('skuName')]"
+ },
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "properties": {
+ "subnetId": "[resourceId('Microsoft.Network/virtualNetworks/subnets', parameters('virtualNetworkName'), parameters('subnetName'))]",
+ "storageSizeInGB": "[parameters('storageSizeInGB')]",
+ "vCores": "[parameters('vCores')]",
+ "licenseType": "[parameters('licenseType')]",
+ "administrators": {
+ "login": "[parameters('aad_admin_name')]",
+ "sid": "[parameters('aad_admin_objectid')]",
+ "tenantId": "[parameters('aad_admin_tenantid')]",
+ "principalType": "[parameters('aad_admin_type')]",
+ "azureADOnlyAuthentication": "[parameters('aad_only_auth')]"
+ }
+ }
+ }
+ ]
+}
+```
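+
+You can deploy the template with Azure PowerShell. The following is a minimal sketch; the resource group name, location, template file path, and parameter values are placeholders to replace with your own.
+
+```azurepowershell-interactive
+# Create a resource group and deploy the template; parameters without
+# defaults in the template are passed explicitly
+New-AzResourceGroup -Name '<resource-group>' -Location '<location>'
+New-AzResourceGroupDeployment -ResourceGroupName '<resource-group>' `
+    -TemplateFile './azuredeploy.json' `
+    -managedInstanceName '<instance-name>' `
+    -aad_admin_name '<admin-display-name>' `
+    -aad_admin_objectid '<admin-object-id>'
+```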
+++
+### Grant Directory Readers permissions
+
+Once the deployment of your managed instance is complete, you may notice that the managed instance needs **Read** permissions to access Azure Active Directory. A user with sufficient privileges can grant these permissions by selecting the displayed message in the Azure portal. For more information, see [Directory Readers role in Azure Active Directory for Azure SQL](authentication-aad-directory-readers-role.md).
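+
+If you prefer to grant the permissions programmatically, the following is a minimal sketch using the AzureAD PowerShell module. It assumes the signed-in user can manage directory role memberships (for example, a Global Administrator), that the **Directory Readers** role is already activated in the tenant, and that the managed instance name is a placeholder.
+
+```powershell
+# Requires the AzureAD module; sign in as a user who can manage role memberships
+Connect-AzureAD
+
+# Get the Directory Readers role (it must already be activated in the tenant)
+$role = Get-AzureADDirectoryRole | Where-Object { $_.DisplayName -eq 'Directory Readers' }
+
+# Find the managed instance's system-assigned identity and add it to the role
+$mi = Get-AzureADServicePrincipal -SearchString '<managed-instance-name>'
+Add-AzureADDirectoryRoleMember -ObjectId $role.ObjectId -RefObjectId $mi.ObjectId
+```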
++
+## Limitations
+
+- Creating a server or instance using the Azure CLI or Azure portal with Azure AD-only authentication enabled during provisioning is currently not supported.
+- To reset the server administrator password, Azure AD-only authentication must be disabled (a sketch of checking and toggling the feature follows this list).
+- If Azure AD-only authentication is disabled, you must provide a server admin login and password when you create a server with any of the APIs.
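+
+If you manage the feature with PowerShell, recent versions of the Az.Sql module include cmdlets for checking and toggling Azure AD-only authentication on a logical server. The following is a minimal sketch; the cmdlet names assume an Az.Sql version that supports the preview, and the server and resource group names are placeholders.
+
+```azurepowershell-interactive
+# Check whether Azure AD-only authentication is enabled on a logical server
+Get-AzSqlServerActiveDirectoryOnlyAuthentication -ServerName '<server-name>' -ResourceGroupName '<resource-group>'
+
+# Disable it temporarily (for example, to reset the server admin password), then re-enable it
+Disable-AzSqlServerActiveDirectoryOnlyAuthentication -ServerName '<server-name>' -ResourceGroupName '<resource-group>'
+Enable-AzSqlServerActiveDirectoryOnlyAuthentication -ServerName '<server-name>' -ResourceGroupName '<resource-group>'
+```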
+
+## Next steps
+
+- If you already have a SQL server or managed instance, and just want to enable Azure AD-only authentication, see [Tutorial: Enable Azure Active Directory only authentication with Azure SQL](authentication-azure-ad-only-authentication-tutorial.md).
+- For more information on the Azure AD-only authentication feature, see [Azure AD-only authentication with Azure SQL](authentication-azure-ad-only-authentication.md).
azure-sql Authentication Azure Ad Only Authentication Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-azure-ad-only-authentication-tutorial.md
Previously updated : 06/01/2021 Last updated : 06/30/2021 # Tutorial: Enable Azure Active Directory only authentication with Azure SQL
Last updated 06/01/2021
> [!NOTE] > The **Azure AD-only authentication** feature discussed in this article is in **public preview**.
-This article guides you through enabling the [Azure AD-only authentication](authentication-azure-ad-only-authentication.md) feature within Azure SQL Database and Azure SQL Managed Instance.
+This article guides you through enabling the [Azure AD-only authentication](authentication-azure-ad-only-authentication.md) feature within Azure SQL Database and Azure SQL Managed Instance. If you are looking to provision a SQL Database or Managed Instance with Azure AD-only authentication enabled, see [Create server with Azure AD-only authentication enabled in Azure SQL](authentication-azure-ad-only-authentication-create-server.md).
In this tutorial, you learn how to:
After disabling Azure AD-only authentication, test connecting using a SQL authen
## Next steps
-[Azure AD-only authentication with Azure SQL](authentication-azure-ad-only-authentication.md)
+- [Azure AD-only authentication with Azure SQL](authentication-azure-ad-only-authentication.md)
+- [Create server with Azure AD-only authentication enabled in Azure SQL](authentication-azure-ad-only-authentication-create-server.md)
azure-sql Authentication Azure Ad Only Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-azure-ad-only-authentication.md
Previously updated : 06/01/2021 Last updated : 06/30/2021 # Azure AD-only authentication with Azure SQL
SELECT SERVERPROPERTY('IsExternalAuthenticationOnly')
## Next steps > [!div class="nextstepaction"]
-> [Tutorial: Enable Azure Active Directory only authentication with Azure SQL](authentication-azure-ad-only-authentication-tutorial.md)
+> [Tutorial: Enable Azure Active Directory only authentication with Azure SQL](authentication-azure-ad-only-authentication-tutorial.md)
+
+> [!div class="nextstepaction"]
+> [Create server with Azure AD-only authentication enabled in Azure SQL](authentication-azure-ad-only-authentication-create-server.md)
azure-sql Automated Backups Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/automated-backups-overview.md
Add a filter for **Service name**, and then select **sql database** in the drop-
>[!NOTE] > Meters are only visible for counters that are currently in use. If a counter is not available, it is likely that the category is not currently being used. For example, managed instance counters will not be present for customers who do not have a managed instance deployed. Likewise, storage counters will not be visible for resources that are not consuming storage.
+For more information, see [Azure SQL Database cost management](cost-management.md).
+ ## Encrypted backups If your database is encrypted with TDE, backups are automatically encrypted at rest, including LTR backups. All new databases in Azure SQL are configured with TDE enabled by default. For more information on TDE, see [Transparent Data Encryption with SQL Database & SQL Managed Instance](/sql/relational-databases/security/encryption/transparent-data-encryption-azure-sql).
azure-sql Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/connectivity-architecture.md
Periodically, we will retire Gateways using old hardware and migrate the traffic
| France Central | 40.79.137.0, 40.79.129.1, 40.79.137.8, 40.79.145.12 | 40.79.136.32/29, 40.79.144.32/29 | | France South | 40.79.177.0, 40.79.177.10 ,40.79.177.12 | 40.79.176.40/29, 40.79.177.32/29 | | Germany West Central | 51.116.240.0, 51.116.248.0, 51.116.152.0 | 51.116.152.32/29, 51.116.240.32/29, 51.116.248.32/29 |
-| Central India | 104.211.96.159, 104.211.86.30 , 104.211.86.31 | 104.211.86.32/29, 20.192.96.32/29 |
+| Central India | 104.211.96.159, 104.211.86.30 , 104.211.86.31, 40.80.48.32, 20.192.96.32 | 104.211.86.32/29, 20.192.96.32/29 |
| South India | 104.211.224.146 | 40.78.192.32/29, 40.78.193.32/29 | | West India | 104.211.160.80, 104.211.144.4 | 104.211.144.32/29, 104.211.145.32/29 |
-| Japan East | 13.78.61.196, 40.79.184.8, 13.78.106.224, 40.79.192.5, 13.78.104.32 | 13.78.104.32/29, 40.79.184.32/29, 40.79.192.32/29 |
+| Japan East | 13.78.61.196, 40.79.184.8, 13.78.106.224, 40.79.192.5, 13.78.104.32, 40.79.184.32 | 13.78.104.32/29, 40.79.184.32/29, 40.79.192.32/29 |
| Japan West | 104.214.148.156, 40.74.100.192, 40.74.97.10 | 40.74.96.32/29 | | Korea Central | 52.231.32.42, 52.231.17.22 ,52.231.17.23, 20.44.24.32, 20.194.64.33 | 20.194.64.32/29,20.44.24.32/29, 52.231.16.32/29 | | Korea South | 52.231.200.86, 52.231.151.96 | | | North Central US | 23.96.178.199, 23.98.55.75, 52.162.104.33, 52.162.105.9 | 52.162.105.192/29 | | North Europe | 40.113.93.91, 52.138.224.1, 13.74.104.113 | 13.69.233.136/29, 13.74.105.192/29, 52.138.229.72/29 |
-| Norway East | 51.120.96.0, 51.120.96.33 | 51.120.96.32/29 |
+| Norway East | 51.120.96.0, 51.120.96.33, 51.120.104.32, 51.120.208.32 | 51.120.96.32/29 |
| Norway West | 51.120.216.0 | 51.120.217.32/29 | | South Africa North | 102.133.152.0, 102.133.120.2, 102.133.152.32 | 102.133.120.32/29, 102.133.152.32/29, 102.133.248.32/29| | South Africa West | 102.133.24.0 | 102.133.25.32/29 |
Periodically, we will retire Gateways using old hardware and migrate the traffic
| West Europe | 40.68.37.158, 104.40.168.105, 52.236.184.163 | 104.40.169.32/29, 13.69.112.168/29, 52.236.184.32/29 | | West US | 104.42.238.205, 13.86.216.196 | 13.86.217.224/29 | | West US 2 | 13.66.226.202, 40.78.240.8, 40.78.248.10 | 13.66.136.192/29, 40.78.240.192/29, 40.78.248.192/29 |
+| West US 3 | 20.150.168.0, 20.150.184.2 | 20.150.168.32/29, 20.150.176.32/29, 20.150.184.32/29 |
| | | | ## Next steps
azure-sql Cost Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/cost-management.md
Previously updated : 01/15/2021 Last updated : 06/30/2021 # Plan and manage costs for Azure SQL Database
-This article describes how you plan for and manage costs for Azure SQL Database. First, you use the Azure pricing calculator to add Azure resources, and review the estimated costs. After you've started using Azure SQL Database resources, use Cost Management features to set budgets and monitor costs. You can also review forecasted costs and identify spending trends to identify areas where you might want to act. Costs for Azure SQL Database are only a portion of the monthly costs in your Azure bill. Although this article explains how to plan for and manage costs for Azure SQL Database, you're billed for all Azure services and resources used in your Azure subscription, including any third-party services.
+This article describes how you plan for and manage costs for Azure SQL Database.
+
+First, you use the Azure pricing calculator to add Azure resources, and review the estimated costs. After you've started using Azure SQL Database resources, use Cost Management features to set budgets and monitor costs. You can also review forecasted costs and identify spending trends to identify areas where you might want to act. Costs for Azure SQL Database are only a portion of the monthly costs in your Azure bill. Although this article explains how to plan for and manage costs for Azure SQL Database, you're billed for all Azure services and resources used in your Azure subscription, including any third-party services.
## Prerequisites
For information about assigning access to Azure Cost Management data, see [Assig
When working with Azure SQL Database, there are several cost-saving features to consider: - ### vCore or DTU purchasing models Azure SQL Database supports two purchasing models: vCore and DTU. The way you get charged varies between the purchasing models so it's important to understand the model that works best for your workload when planning and considering costs. For information about vCore and DTU purchasing models, see [Choose between the vCore and DTU purchasing models](purchasing-models.md). - ### Provisioned or serverless In the vCore purchasing model, Azure SQL Database also supports two types of compute tiers: provisioned throughput and serverless. The way you get charged for each compute tier varies so it's important to understand what works best for your workload when planning and considering costs. For details, see [vCore model overview - compute tiers](service-tiers-sql-database-vcore.md#compute-tiers).
In the provisioned compute tier of the vCore-based purchasing model, you can exc
### Elastic pools
-For environments with multiple databases that have varying and unpredictable usage demands, elastic pools can provide cost savings compared to provisioning the same amount of single databases. For details, see [Elastic pools](elastic-pool-overview.md).
+For environments with multiple databases that have varying and unpredictable usage demands, elastic pools can provide cost savings compared to provisioning the same number of single databases. For details, see [Elastic pools](elastic-pool-overview.md).
## Estimate Azure SQL Database costs
-Use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate costs for different Azure SQL Database configurations. The information and pricing in the following image are for example purposes only:
+Use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate costs for different Azure SQL Database configurations. For more information, see [Azure SQL Database pricing](https://azure.microsoft.com/pricing/details/azure-sql-database/).
+
+The information and pricing in the following image are for example purposes only:
:::image type="content" source="media/cost-management/pricing-calc.png" alt-text="Azure SQL Database pricing calculator example":::
You can also estimate how different Retention Policy options affect cost. The in
## Understand the full billing model for Azure SQL Database
-Azure SQL Database runs on Azure infrastructure that accrues costs along with Azure SQL Database when you deploy the new resource. It's important to understand that additional infrastructure might accrue cost. You need to manage that cost when you make changes to deployed resources.
+Azure SQL Database runs on Azure infrastructure that accrues costs along with Azure SQL Database when you deploy the new resource. It's important to understand that additional infrastructure might accrue cost.
+
+Azure SQL Database (except for serverless) is billed at a predictable, hourly rate. If the database is active for less than one hour, you're still billed for the full hour at the highest service tier selected, with the provisioned storage and IO that applied during that hour, regardless of actual usage. For example, if you create a database and delete it 30 minutes later, you're billed for one full hour.
+
+Billing depends on the SKU of your product, the hardware generation of your SKU, and the meter category. Azure SQL Database has the following possible SKUs:
+
+- Basic (B)
+- Standard (S)
+- Premium (P)
+- General purpose (GP)
+- Business critical (BC)
+- And for storage: geo-redundant storage (GRS), locally redundant storage (LRS), and zone-redundant storage (ZRS)
+- It's also possible to have a deprecated SKU from deprecated resource offerings
+
+To learn more, see [service tiers](service-tiers-general-purpose-business-critical.md).
+The following table shows the most common billing meters and their possible SKUs for **single databases**:
-Azure SQL Database (with the exception of serverless) is billed on a predictable, hourly rate. If the SQL database is active for less than one hour, you are billed for each hour the database exists using the highest service tier selected, provisioned storage and IO that applied during that hour, regardless of usage or whether the database was active for less than an hour.
+| Measurement| Possible SKU(s) | Description |
+| :-|:-|:-|
+| Backup\* | GP/BC/HS | Measures the consumption of storage used by backups, billed by the amount of storage utilized in GB per month. |
+| Backup (LTR) | GRS/LRS/ZRS/GF | Measures the consumption of storage used by long-term backups configured via long-term retention, billed by the amount of storage utilized. |
+| Compute | B/S/P/GP/BC | Measures the consumption of your compute resources per hour. |
+| Compute (primary/named replica) | HS | Measures the consumption of your compute resources per hour of your primary HS replica. |
+| Compute (HA replica) | HS | Measures the consumption of your compute resources per hour of your secondary HS replica. |
+| Compute (ZR add-on) | GP | Measures the consumption of your compute resources per minute of your zone redundant added-on replica. |
+| Compute (serverless) | GP | Measures the consumption of your serverless compute resources per minute. |
+| License | GP/BC/HS | The billing for your SQL Server license accrued per month. |
+| Storage | B/S\*/P\*/GP/BC/HS | Billed monthly, by the amount of data stored per hour. |
+\* In the DTU purchasing model, an initial set of storage for data and backups is provided at no additional cost. The size of the storage depends on the service tier selected. Extra data storage can be purchased in the standard and premium tiers. For more information, see [Azure SQL Database pricing](https://azure.microsoft.com/pricing/details/azure-sql-database/).
+
+The following table shows the most common billing meters and their possible SKUs for **elastic pools**:
+
+| Measurement| Possible SKU(s) | Description |
+|:-|:-|:-|
+| Backup\* | GP/BC | Measures the consumption of storage used by backups, billed per GB per hour on a monthly basis. |
+| Compute | B/S/P/GP/BC | Measures the consumption of your compute resources per hour, such as vCores and memory or DTUs. |
+| License | GP/BC | The billing for your SQL Server license accrued per month. |
+| Storage | B/S\*/P\*/GP/HS | Billed monthly, both by the amount of data stored per hour and by throughput in megabytes per second (MBps). |
+
+\* In the DTU purchasing model, an initial set of storage for data and backups is provided at no additional cost. The size of the storage depends on the service tier selected. Extra data storage can be purchased in the standard and premium tiers. For more information, see [Azure SQL Database pricing](https://azure.microsoft.com/pricing/details/azure-sql-database/).
### Using Monetary Credit with Azure SQL Database
-You can pay for Azure SQL Database charges with your Azure Prepayment (previously called monetary commitment) credit. However, you can't use Azure Prepayment credit to pay for charges for third party products and services including those from the Azure Marketplace.
+You can pay for Azure SQL Database charges with your Azure Prepayment (previously called monetary commitment) credit. However, you can't use Azure Prepayment credit to pay for charges for third-party products and services including those from the Azure Marketplace.
## Review estimated costs in the Azure portal
To access this screen, select **Configure database** on the **Basics** tab of th
:::image type="content" source="media/cost-management/cost-estimate.png" alt-text="Example showing cost estimate in the Azure portal"::: -- If your Azure subscription has a spending limit, Azure prevents you from spending over your credit amount. As you create and use Azure resources, your credits are used. When you reach your credit limit, the resources that you deployed are disabled for the rest of that billing period. You can't change your credit limit, but you can remove it. For more information about spending limits, see [Azure spending limit](../../cost-management-billing/manage/spending-limit.md). ## Monitor costs As you start using Azure SQL Database, you can see the estimated costs in the portal. Use the following steps to review the cost estimate:
-1. Sign into the Azure portal and navigate to your Azure SQL database's resource group. You can locate the resource group by navigating to your database and select **Resource group** in the **Overview** section.
+1. Sign into the Azure portal and navigate to the resource group for your Azure SQL database. You can locate the resource group by navigating to your database and selecting **Resource group** in the **Overview** section.
1. In the menu, select **Cost analysis**. 1. View **Accumulated costs** and set the chart at the bottom to **Service name**. This chart shows an estimate of your current SQL Database costs. To narrow costs for the entire page to Azure SQL Database, select **Add filter** and then, select **Azure SQL Database**. The information and pricing in the following image are for example purposes only:
From here, you can explore costs on your own. For more and information about the
## Create budgets
-<!-- Note to Azure service writer: Modify the following as needed for your service. -->
- You can create [budgets](../../cost-management-billing/costs/tutorial-acm-create-budgets.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to manage costs and create [alerts](../../cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) that automatically notify stakeholders of spending anomalies and overspending risks. Alerts are based on spending compared to budget and cost thresholds. Budgets and alerts are created for Azure subscriptions and resource groups, so they're useful as part of an overall cost monitoring strategy.
-Budgets can be created with filters for specific resources or services in Azure if you want more granularity present in your monitoring. Filters help ensure that you don't accidentally create new resources that cost you additional money. For more about the filter options when you when create a budget, see [Group and filter options](../../cost-management-billing/costs/group-filter.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+Budgets can be created with filters for specific resources or services in Azure if you want more granularity present in your monitoring. Filters help ensure that you don't accidentally create new resources. For more about the filter options when you create a budget, see [Group and filter options](../../cost-management-billing/costs/group-filter.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
## Export cost data
-You can also [export your cost data](../../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you need or others to do additional data analysis for costs. For example, a finance teams can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets.
-
+You can also [export your cost data](../../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you need to do further data analysis on cost. For example, a finance team can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets.
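+
+For quick ad-hoc checks without setting up an export, you can also pull usage details with the Az.Billing module. The following is a minimal sketch that sums this month's pretax costs by resource; it assumes the subscription is already selected in your Azure PowerShell context.
+
+```azurepowershell-interactive
+# A sketch: sum this month's pretax costs by resource in the current subscription
+Get-AzConsumptionUsageDetail -StartDate (Get-Date -Day 1) -EndDate (Get-Date) |
+    Group-Object InstanceName |
+    ForEach-Object {
+        [pscustomobject]@{
+            Resource   = $_.Name
+            PretaxCost = ($_.Group | Measure-Object -Property PretaxCost -Sum).Sum
+        }
+    } |
+    Sort-Object PretaxCost -Descending
+```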
## Other ways to manage and reduce costs for Azure SQL Database
azure-sql Gateway Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/gateway-migration.md
The most up-to-date information will be maintained in the [Azure SQL Database ga
## Status updates # [In progress](#tab/in-progress-ip)
+## August 2021
+New SQL Gateways are being added to the following regions:
+
+- Norway East: 51.120.104.32, 51.120.208.32
+- Japan East: 40.79.184.32
+- Central India: 40.80.48.32, 20.192.96.32
+
+These SQL Gateways shall start accepting customer traffic on 2 August 2021.
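+
+If you maintain client-side firewall or network security group rules, you can verify reachability of a new gateway IP address ahead of the cutover. A minimal sketch using Windows PowerShell; the IP address below is one of the new Norway East gateways.
+
+```powershell
+# Check TCP connectivity to one of the new gateway IP addresses on port 1433
+Test-NetConnection -ComputerName 51.120.104.32 -Port 1433
+```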
+ ## June 2021 New SQL Gateways are being added to the following regions:+ - UK West: 51.140.208.96, 51.140.208.97 - Korea Central: 20.44.24.32, 20.194.64.33 - Japan East: 13.78.104.32
-This SQL Gateway shall start accepting customer traffic on 1 June 2021.
+These SQL Gateways shall start accepting customer traffic on 1 June 2021.
+
+# [Completed](#tab/completed-ip)
+The following gateway migrations are complete:
## May 2021 New SQL Gateways are being added to the following regions:
The following SQL Gateways in multiple regions are in the process of being deact
No customer impact is anticipated since these Gateways (running on older hardware) are not routing any customer traffic. The IP addresses for these Gateways shall be deactivated on 15th March 2021.
-# [Completed](#tab/completed-ip)
-The following gateway migrations are complete:
- ## February 2021 New SQL Gateways are being added to the following regions:
azure-sql Single Database Create Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/single-database-create-quickstart.md
In this quickstart, you create a [single database](single-database-overview.md)
## Prerequisite - An active Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/).
+- You may also need the latest version of either [Azure PowerShell](/powershell/azure/install-az-ps) or the [Azure CLI](/cli/azure/install-azure-cli-windows), depending on the creation method you choose.
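+
+For example, a minimal sketch for installing the Az PowerShell module locally and signing in (assuming PowerShell 5.1 or later with PowerShellGet):
+
+```powershell
+# Install the Az module for the current user, then sign in
+Install-Module -Name Az -Scope CurrentUser -Repository PSGallery
+Connect-AzAccount
+```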
## Create a single database
azure-sql Create Configure Managed Instance Powershell Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/create-configure-managed-instance-powershell-quickstart.md
+
+ Title: Create Azure SQL Managed Instance - Quickstart
+description: Create an instance of Azure SQL Managed Instance using Azure PowerShell.
+Last updated : 06/25/2021
+# Quickstart: Create a managed instance using Azure PowerShell
+
+In this quickstart, learn to create an instance of [Azure SQL Managed Instance](sql-managed-instance-paas-overview.md) using Azure PowerShell.
++
+## Prerequisite
+
+- An active Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/).
+- The latest version of [Azure PowerShell](/powershell/azure/install-az-ps).
+
+## Set variables
+
+Creating a SQL Managed Instance requires creating several supporting resources within Azure, so the Azure PowerShell commands rely on variables to simplify the experience. Define the variables, and then execute the cmdlets in each section within the same PowerShell session.
+
+```azurepowershell-interactive
+$NSnetworkModels = "Microsoft.Azure.Commands.Network.Models"
+$NScollections = "System.Collections.Generic"
+# The SubscriptionId in which to create these objects
+$SubscriptionId = ''
+# Set the resource group name and location for your managed instance
+$resourceGroupName = "myResourceGroup-$(Get-Random)"
+$location = "eastus2"
+# Set the networking values for your managed instance
+$vNetName = "myVnet-$(Get-Random)"
+$vNetAddressPrefix = "10.0.0.0/16"
+$miSubnetName = "myMISubnet-$(Get-Random)"
+$miSubnetAddressPrefix = "10.0.0.0/24"
+#Set the managed instance name for the new managed instance
+$instanceName = "myMIName-$(Get-Random)"
+# Set the admin login and password for your managed instance
+$miAdminSqlLogin = "SqlAdmin"
+$miAdminSqlPassword = "ChangeYourAdminPassword1"
+# Set the managed instance service tier, compute level, and license mode
+$edition = "General Purpose"
+$vCores = 4
+$maxStorage = 128
+$computeGeneration = "Gen5"
+$license = "LicenseIncluded" #"BasePrice" or LicenseIncluded if you have don't have SQL Server licence that can be used for AHB discount
+```
+
+## Create resource group
+
+First, connect to Azure, set your subscription context, and create your resource group.
+
+To do so, execute this PowerShell script:
+
+```azurepowershell-interactive
+
+# Connect to Azure
+Connect-AzAccount
+
+# Set subscription context
+Set-AzContext -SubscriptionId $SubscriptionId
+
+# Create a resource group
+$resourceGroup = New-AzResourceGroup -Name $resourceGroupName -Location $location -Tag @{Owner="SQLDB-Samples"}
+```
+
+## Configure networking
+
+After your resource group is created, configure the networking resources such as the virtual network, subnets, network security group, and routing table. This example demonstrates the use of the **Delegate subnet for Managed Instance deployment** script, which is available on GitHub as [delegate-subnet.ps1](https://github.com/microsoft/sql-server-samples/tree/master/samples/manage/azure-sql-db-managed-instance/delegate-subnet).
+
+To do so, execute this PowerShell script:
+
+```azurepowershell-interactive
+
+# Configure virtual network, subnets, network security group, and routing table
+$virtualNetwork = New-AzVirtualNetwork `
+ -ResourceGroupName $resourceGroupName `
+ -Location $location `
+ -Name $vNetName `
+ -AddressPrefix $vNetAddressPrefix
+
+ Add-AzVirtualNetworkSubnetConfig `
+ -Name $miSubnetName `
+ -VirtualNetwork $virtualNetwork `
+ -AddressPrefix $miSubnetAddressPrefix |
+ Set-AzVirtualNetwork
+
+$scriptUrlBase = 'https://raw.githubusercontent.com/Microsoft/sql-server-samples/master/samples/manage/azure-sql-db-managed-instance/delegate-subnet'
+
+$parameters = @{
+ subscriptionId = $SubscriptionId
+ resourceGroupName = $resourceGroupName
+ virtualNetworkName = $vNetName
+ subnetName = $miSubnetName
+ }
+
+Invoke-Command -ScriptBlock ([Scriptblock]::Create((iwr ($scriptUrlBase+'/delegateSubnet.ps1?t='+ [DateTime]::Now.Ticks)).Content)) -ArgumentList $parameters
+
+$virtualNetwork = Get-AzVirtualNetwork -Name $vNetName -ResourceGroupName $resourceGroupName
+$miSubnet = Get-AzVirtualNetworkSubnetConfig -Name $miSubnetName -VirtualNetwork $virtualNetwork
+$miSubnetConfigId = $miSubnet.Id
+```
+
+## Create managed instance
+
+For added security, use a complex and randomized password for your SQL Managed Instance credential. If you prefer not to hard-code the password in the variables section, you can generate one first; the following is a quick sketch, not a hardened generator:
+
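+```azurepowershell-interactive
+# A sketch: build a 24-character password from digits and letters; this length
+# and mix will almost always satisfy SQL Server complexity requirements
+$chars = [char[]]((48..57) + (65..90) + (97..122))
+$miAdminSqlPassword = -join ($chars | Get-Random -Count 24)
+```
+
+Then convert the password to a secure string and build the credential object:
+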
+```azurepowershell-interactive
+# Create credentials
+$secpassword = ConvertTo-SecureString $miAdminSqlPassword -AsPlainText -Force
+$credential = New-Object System.Management.Automation.PSCredential ($miAdminSqlLogin, $secpassword)
+```
+
+Then create your SQL Managed Instance:
+
+```azurepowershell-interactive
+# Create managed instance
+New-AzSqlInstance -Name $instanceName `
+ -ResourceGroupName $resourceGroupName -Location $location -SubnetId $miSubnetConfigId `
+ -AdministratorCredential $credential `
+ -StorageSizeInGB $maxStorage -VCore $vCores -Edition $edition `
+ -ComputeGeneration $computeGeneration -LicenseType $license
+```
+
+This operation may take some time to complete. To learn more, see [Management operations](management-operations-overview.md).
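+
+While the instance is deploying, you can check progress from another PowerShell session. A minimal sketch, assuming your Az.Sql version includes the `Get-AzSqlInstanceOperation` cmdlet:
+
+```azurepowershell-interactive
+# List management operations on the instance (such as the create) and their state
+Get-AzSqlInstanceOperation -ManagedInstanceName $instanceName -ResourceGroupName $resourceGroupName
+```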
++
+## Clean up resources
+
+Keep the resource group and managed instance if you want to go on to the next steps and learn how to connect to your SQL Managed Instance by using a client virtual machine.
+
+When you're finished using these resources, you can delete the resource group you created, which will also delete the managed instance and all other resources within it.
+
+```azurepowershell-interactive
+# Clean up deployment
+Remove-AzResourceGroup -ResourceGroupName $resourceGroupName
+```
++
+## Next steps
+
+After your SQL Managed Instance is created, deploy a client VM to connect to your SQL Managed Instance, and restore a sample database.
+
+> [!div class="nextstepaction"]
+> [Create client VM](connect-vm-instance-configure.md)
+> [Restore database](restore-sample-database-quickstart.md)
++
azure-sql Migrate To Instance From Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/migrate-to-instance-from-sql-server.md
Previously updated : 07/11/2019 Last updated : 06/23/2021 # SQL Server instance migration to Azure SQL Managed Instance [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
azure-sql Sql Server To Managed Instance Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide.md
Previously updated : 11/06/2020 Last updated : 06/25/2021 # Migration guide: SQL Server to Azure SQL Managed Instance [!INCLUDE[appliesto-sqldb-sqlmi](../../includes/appliesto-sqlmi.md)]
Alternatively, use theΓÇ»[Microsoft Assessment and Planning ToolkitΓÇ»(the "MAP
For more information about tools available to use for the Discover phase, see [Services and tools available for data migration scenarios](../../../dms/dms-tools-matrix.md).
+After data sources have been discovered, assess any on-premises SQL Server instance(s) that can be migrated to Azure SQL Managed Instance to identify migration blockers or compatibility issues.
+Proceed to the following steps to assess and migrate databases to Azure SQL Managed Instance:
++
+- [Assess SQL Managed Instance compatibility](#assess) where you should ensure that there are no blocking issues that can prevent your migrations.
+ This step also includes creation of a [performance baseline](sql-server-to-managed-instance-performance-baseline.md#create-a-baseline) to determine resource usage on your source SQL Server instance. This step is needed if you want to deploy a properly sized managed instance and verify that performance after migration is not affected.
+- [Choose app connectivity options](../../managed-instance/connect-application-instance.md).
+- [Deploy to an optimally sized managed instance](#deploy-to-an-optimally-sized-managed-instance) where you will choose technical characteristics (number of vCores, amount of memory) and performance tier (Business Critical, General Purpose) of your managed instance.
+- [Select migration method and migrate](sql-server-to-managed-instance-overview.md#compare-migration-options) where you migrate your databases using offline migration or online migration options.
+- [Monitor and remediate applications](#monitor-and-remediate-applications) to ensure that you have expected performance.
++ ### Assess [!INCLUDE [assess-estate-with-azure-migrate](../../../../includes/azure-migrate-to-assess-sql-data-estate.md)]
-After data sources have been discovered, assess any on-premises SQL Server instance(s) that can be migrated to Azure SQL Managed Instance to identify migration blockers or compatibility issues.
+Determine whether SQL Managed Instance is compatible with the database requirements of
+your application. SQL Managed Instance is designed to provide easy lift and shift migration for
+the majority of existing applications that use SQL Server. However, you may sometimes require
+features or capabilities that are not yet supported, and the cost of implementing a workaround
+might be too high.
You can use the Data Migration Assistant (version 4.1 and later) to assess databases to get:
To assess your environment using the Database Migration Assessment, follow these
To learn more, see [Perform a SQL Server migration assessment with Data Migration Assistant](/sql/dma/dma-assesssqlonprem).
-If SQL Managed Instance is not a suitable target for your workload, SQL Server on Azure VMs might be a viable alternative target for your business.
+If SQL Managed Instance is not a suitable target for your workload, SQL Server on Azure VMs might be a viable alternative target for your business.
#### Scaled Assessments and Analysis
Data Migration Assistant supports performing scaled assessments and consolidatio
> [!IMPORTANT] >Running assessments at scale for multiple databases can also be automated using [DMA's Command Line Utility](/sql/dma/dma-commandline) which also allows the results to be uploaded to [Azure Migrate](/sql/dma/dma-assess-sql-data-estate-to-sqldb#view-target-readiness-assessment-results) for further analysis and target readiness.
-### Create a performance baseline
+### Deploy to an optimally sized managed instance
+
+Based on the information in the discover and assess phase, create an appropriately sized target SQL Managed Instance. You can do so by using the [Azure portal](../../managed-instance/instance-create-quickstart.md), [PowerShell](../../managed-instance/scripts/create-configure-managed-instance-powershell.md), or an [Azure Resource Manager (ARM) Template](../../managed-instance/create-template-quickstart.md).
-If you need to compare the performance of your workload on a SQL Managed Instance with your original workload running on SQL Server, create a performance baseline to use for comparison. See [performance baseline](sql-server-to-managed-instance-performance-baseline.md) to learn more.
+SQL Managed Instance is tailored for on-premises workloads that are planning to move to the cloud. It introduces a [purchasing model](../../database/service-tiers-vcore.md) that provides greater flexibility in selecting the right level of resources for your workloads. In the on-premises world, you are probably accustomed to sizing these workloads by using physical cores and IO bandwidth. The purchasing model for managed instance is based upon virtual cores, or "vCores," with additional storage and IO available separately. The vCore model is a simpler way to understand your compute requirements in the cloud versus what you use on-premises today. This purchasing model enables you to right-size your destination environment in the cloud. Some general guidelines that might help you to choose the right service tier and characteristics are described here (a sketch for capturing the baseline follows this list):
-### Create SQL Managed Instance
+- Based on the baseline CPU usage, you can provision a managed instance that matches the number of cores that you are using on SQL Server, keeping in mind that CPU characteristics might need to be scaled to match [VM characteristics where the managed instance is installed](../../managed-instance/resource-limits.md#hardware-generation-characteristics).
+- Based on the baseline memory usage, choose [the service tier that has matching memory](../../managed-instance/resource-limits.md#hardware-generation-characteristics). The amount of memory cannot be chosen directly, so you would need to select a managed instance with the number of vCores that has matching memory (for example, 5.1 GB/vCore in Gen5).
+- Based on the baseline IO latency of the file subsystem, choose between the General Purpose (latency greater than 5 ms) and Business Critical (latency less than 3 ms) service tiers.
+- Based on baseline throughput, pre-allocate the size of data or log files to get expected IO performance.
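+
+To capture the CPU and memory baseline that these guidelines reference, one option is Windows performance counters on the source SQL Server host. A minimal sketch, assuming an English-language Windows installation and a default SQL Server instance (named instances expose counters under `MSSQL$<instance>`):
+
+```powershell
+# A sketch: sample CPU, available memory, and page life expectancy every
+# 15 seconds for an hour, and save the samples for later comparison
+$counters = @(
+    '\Processor(_Total)\% Processor Time',
+    '\Memory\Available MBytes',
+    '\SQLServer:Buffer Manager\Page life expectancy'
+)
+Get-Counter -Counter $counters -SampleInterval 15 -MaxSamples 240 |
+    Export-Counter -Path 'C:\temp\sql-baseline.blg' -FileFormat BLG
+```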
-Based on the information in the discover and assess phase, create an appropriately-sized target SQL Managed Instance. You can do so by using the [Azure portal](../../managed-instance/instance-create-quickstart.md), [PowerShell](../../managed-instance/scripts/create-configure-managed-instance-powershell.md), or an [Azure Resource Manager (ARM) Template](../../managed-instance/create-template-quickstart.md).
+You can choose compute and storage resources at deployment time and then change them afterward, without introducing downtime for your application, by using the [Azure portal](../../database/scale-resources.md).
+
+To learn how to create the VNet infrastructure and a managed instance, see [Create a managed instance](../../managed-instance/instance-create-quickstart.md).
+
+> [!IMPORTANT]
+> It is important to keep your destination VNet and subnet in accordance with [managed instance VNet requirements](../../managed-instance/connectivity-architecture-overview.md#network-requirements). Any incompatibility can prevent you from creating new instances or using those that you already created. Learn more about [creating new](../../managed-instance/virtual-network-subnet-create-arm-template.md) and [configuring existing](../../managed-instance/vnet-existing-add-subnet.md) networks.
## Migrate After you have completed tasks associated with the Pre-migration stage, you are ready to perform the schema and data migration.
-Migrate your data using your chosen [migration method](sql-server-to-managed-instance-overview.md#compare-migration-options).
+Migrate your data using your chosen [migration method](sql-server-to-managed-instance-overview.md#compare-migration-options).
+
+SQL Managed Instance targets user scenarios requiring mass database migration from on-premises or
+Azure VM database implementations. It is the optimal choice when you need to lift and shift
+the back end of applications that regularly use instance-level and/or cross-database
+functionalities. If this is your scenario, you can move an entire instance to a corresponding
+environment in Azure without the need to re-architect your applications.
+
+To move SQL instances, you need to plan carefully:
-This guide describe the two most popular options - Azure Database Migration Service (DMS) and native backup and restore.
+- The migration of all databases that need to be collocated (ones running on the same instance).
+- The migration of instance-level objects that your application depends on, including logins,
+credentials, SQL Agent jobs and operators, and server-level triggers.
+
+SQL Managed Instance is a managed service that allows you to delegate some of the regular DBA
+activities to the platform. Therefore, some instance-level data does not
+need to be migrated, such as maintenance jobs for regular backups or Always On configuration, because
+[high availability](../../database/high-availability-sla.md) is built in.
+
+SQL Managed Instance supports the following database migration options (currently these are the
+only supported migration methods):
+
+- Azure Database Migration Service - migration with near-zero downtime.
+- Native `RESTORE DATABASE FROM URL` - uses native backups from SQL Server and requires some
+downtime.
+
+This guide describes the two most popular options - Azure Database Migration Service (DMS) and native backup and restore.
### Database Migration Service
To perform migrations using DMS, follow the steps below:
For a detailed step-by-step tutorial of this migration option, see [Migrate SQL Server to an Azure SQL Managed Instance online using DMS](../../../dms/tutorial-sql-server-managed-instance-online.md). - ### Backup and restore One of the key capabilities of Azure SQL Managed Instance to enable quick and easy database migration is the native restore of database backup (`.bak`) files stored on [Azure Storage](https://azure.microsoft.com/services/storage/). Backup and restore is an asynchronous operation whose duration depends on the size of your database.
The following diagram provides a high-level overview of the process:
> [!NOTE] > The time to take the backup, upload it to Azure storage, and perform a native restore operation to Azure SQL Managed Instance is based on the size of the database. Factor a sufficient downtime to accommodate the operation for large databases.
+The following table provides more information regarding the methods you can use depending on the
+source SQL Server version you are running:
+
+|Step|SQL Engine and version|Backup/restore method|
+||||
+|Put backup to Azure Storage|Prior to 2012 SP1 CU2|Upload .bak file directly to Azure Storage|
+| |2012 SP1 CU2 - 2016|Direct backup using deprecated [WITH CREDENTIAL](/sql/t-sql/statements/restore-statements-transact-sql.md) syntax|
+| |2016 and above|Direct backup using [WITH SAS CREDENTIAL](/sql/relational-databases/backup-restore/sql-server-backup-to-url.md)|
+|Restore from Azure Storage to a managed instance| |[RESTORE FROM URL with SAS CREDENTIAL](../../managed-instance/restore-sample-database-quickstart.md)|
+
+> [!IMPORTANT]
+>
+> - When you're migrating a database protected by [Transparent Data Encryption](../../database/transparent-data-encryption-tde-overview.md) to a managed instance using native restore option,
+the corresponding certificate from the on-premises or Azure VM SQL Server needs to be migrated
+before database restore. For detailed steps, see [Migrate a TDE cert to a managed instance](../../managed-instance/tde-certificate-migrate.md).
+> - Restore of system databases is not supported. To migrate instance-level objects (stored in
+master or msdb databases), we recommend scripting them out and running the T-SQL scripts on the
+destination instance.
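+
+To make the backup and restore rows of the table concrete, here is a minimal sketch of both steps using `Invoke-Sqlcmd` from the SqlServer PowerShell module. It assumes SQL Server 2016 or later, that a SAS credential named after the container URL already exists on both the source server and the managed instance, and that all names and URLs are placeholders.
+
+```powershell
+# On the source SQL Server (2016 or later): back up directly to Azure Storage
+Invoke-Sqlcmd -ServerInstance '<source-server>' -Query @"
+BACKUP DATABASE [MyDatabase]
+TO URL = 'https://<storage-account>.blob.core.windows.net/<container>/MyDatabase.bak'
+WITH COMPRESSION;
+"@
+
+# On the managed instance: restore the database from the same URL
+Invoke-Sqlcmd -ServerInstance '<managed-instance>.<dns-zone>.database.windows.net' -Query @"
+RESTORE DATABASE [MyDatabase]
+FROM URL = 'https://<storage-account>.blob.core.windows.net/<container>/MyDatabase.bak';
+"@
+```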
To migrate using backup and restore, follow these steps:
To learn more about this migration option, see [Restore a database to Azure SQL
> A database restore operation is asynchronous and retryable. You might get an error in SQL Server Management Studio if the connection breaks or a time-out expires. Azure SQL Database will keep trying to restore database in the background, and you can track the progress of the restore using the [sys.dm_exec_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-requests-transact-sql) and [sys.dm_operation_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-operation-status-azure-sql-database) views. - ## Data sync and cutover When using migration options that continuously replicate / sync data changes from source to the target, the source data and schema can change and drift from the target. During data sync, ensure that all changes on the source are captured and applied to the target during the migration process.
After you have successfully completed the migration stage, go through a seri
The post-migration phase is crucial for reconciling any data accuracy issues and verifying completeness, as well as addressing performance issues with the workload.
-### Remediate applications
+### Monitor and remediate applications
+Once you have completed the migration to a managed instance, you should track the application behavior and performance of your workload. This process includes the following activities:
-After the data is migrated to the target environment, all the applications that formerly consumed the source need to start consuming the target. Accomplishing this will, in some cases, require changes to the applications.
+- [Compare performance of the workload running on the managed instance](sql-server-to-managed-instance-performance-baseline.md#compare-performance) with the [performance baseline that you created on the source SQL Server instance](sql-server-to-managed-instance-performance-baseline.md#create-a-baseline).
+- Continuously [monitor performance of your workload](sql-server-to-managed-instance-performance-baseline.md#monitor-performance) to identify potential issues and opportunities for improvement.
### Perform tests
Some SQL Server features are only available once the [database compatibility lev
- [Best practices for costing and sizing workloads migrate to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs) - To assess the Application access layer, see [Data Access Migration Toolkit (Preview)](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit)-- For details on how to perform Data Access Layer A/B testing see [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
+- For details on how to perform Data Access Layer A/B testing, see [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
azure-video-analyzer Get Started Detect Motion Emit Events Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/get-started-detect-motion-emit-events-portal.md
Title: Get started with Azure Video Analyzer using the Azure portal - Azure
-description: This quickstart walks you through the steps to get started with Azure Video Analyzer using the Azure portal.
+description: This quickstart walks you through the steps to get started with Azure Video Analyzer by using the Azure portal.
Last updated 05/25/2021
-# Quickstart: Get Started with Azure Video Analyzer
-This quickstart walks you through the steps to get started with Azure Video Analyzer. You will create an Azure Video Analyzer account and its accompanying resources using the Azure portal.
-After creating your Video Analyzer account, you will be deploying the Video Analyzer edge module and an RTSP camera simulator module to your IoT Edge device
+# Quickstart: Get started with Azure Video Analyzer in the Azure portal
+This quickstart walks you through the steps to get started with Azure Video Analyzer. You'll create an Azure Video Analyzer account and its accompanying resources by using the Azure portal. You'll then deploy the Video Analyzer edge module and a Real Time Streaming Protocol (RTSP) camera simulator module to your Azure IoT Edge device.
-After completing the setup steps, you'll be able to run the simulated live video stream through a pipeline that detects and reports any motion in that stream. The following diagram graphically represents that pipeline.
+After you complete the setup steps, you'll be able to run the simulated live video stream through a pipeline that detects and reports any motion in that stream. The following diagram graphically represents that pipeline.
> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/get-started-detect-motion-emit-events/motion-detection.svg" alt-text="Detect motion":::
+> :::image type="content" source="./media/get-started-detect-motion-emit-events/motion-detection.svg" alt-text="Diagram of a pipeline that detects and reports motion.":::
## Prerequisites * An active Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/).
-* An IoT Edge device on which you have admin privileges
+
+ [!INCLUDE [the video analyzer account and storage account must be in the same subscription and region](./includes/note-account-storage-same-subscription.md)]
+* An IoT Edge device on which you have admin privileges:
* [Deploy to an IoT Edge device](deploy-iot-edge-device.md) * [Deploy to an IoT Edge for Linux on Windows](deploy-iot-edge-linux-on-windows.md)
-* [Visual Studio Code](https://code.visualstudio.com/), with the following extensions:
-
- * [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools)
+* [Visual Studio Code](https://code.visualstudio.com/), with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) extension.
> [!TIP] > You might be prompted to install Docker while you're installing the Azure IoT Tools extension. Feel free to ignore the prompt.
-## Preparing your IoT Edge device
-Azure Video Analyzer module should be configured to run on the IoT Edge device with a non-privileged local user account. The module needs certain local folders for storing application configuration data. The RTSP camera simulator module needs video files with which it can synthesize a live video feed.
+## Prepare your IoT Edge device
+The Azure Video Analyzer module should be configured to run on the IoT Edge device with a non-privileged local user account. The module needs certain local folders for storing application configuration data. The RTSP camera simulator module needs video files with which it can synthesize a live video feed.
+
+Run the following command on your IoT Edge device:
-https://aka.ms/ava/prepare-device
-
-**Run the following command on your IoT Edge device**
`bash -c "$(curl -sL https://aka.ms/ava-edge/prep_device)"`
-The prep-device script used above automates the task of creating input and configuration folders, downloading video input files, and creating user accounts with correct privileges. Once the command finishes successfully, you should see the following folders created on your edge device.
+The prep-device script in that command automates the tasks of creating input and configuration folders, downloading video input files, and creating user accounts with correct privileges. After the command finishes successfully, you should see the following folders created on your edge device:
-* `/home/localedgeuser/samples`
-* `/home/localedgeuser/samples/input`
-* `/var/lib/videoanalyzer`
-* `/var/media`
+* */home/localedgeuser/samples*
+* */home/localedgeuser/samples/input*
+* */var/lib/videoanalyzer*
+* */var/media*
- Note the video files ("*.mkv") in the /home/localedgeuser/samples/input folder, which are used to simulate live video.
-## Creating Azure Resources
-The next step is to create the required Azure resources (Video Analyzer account, storage account, user-assigned managed identity), create an optional container registry, and register a Video Analyzer edge module with the Video Analyzer account
+The video (*.mkv) files in the */home/localedgeuser/samples/input* folder are used to simulate live video.
-When you create an Azure Video Analyzer account, you have to associate an Azure storage account with it. If you use Video Analyzer to record the live video from a camera, that data is stored as blobs in a container in the storage account. You must use a managed identity to grant the Video Analyzer account the appropriate access to the storage account as follows.
+## Create Azure resources
+The next step is to create the required Azure resources (Video Analyzer account, storage account, and user-assigned managed identity). Then you can create an optional container registry and register a Video Analyzer edge module with the Video Analyzer account.
+When you create an Azure Video Analyzer account, you have to associate an Azure storage account with it. If you use Video Analyzer to record the live video from a camera, that data is stored as blobs in a container in the storage account. You must use a managed identity to grant the Video Analyzer account the appropriate access to the storage account as follows.
- [!INCLUDE [the video analyzer account and storage account must be in the same subscription and region](./includes/note-account-storage-same-subscription.md)]
### Create a Video Analyzer account in the Azure portal 1. Sign in at the [Azure portal](https://portal.azure.com/).
-1. Using the search bar at the top, enter **Video Analyzer**.
-1. Click on *Video Analyzers* under *Services*.
-1. Click **Add**.
-1. In the **Create Video Analyzer account** section enter required values.
- - **Subscription**: Choose the subscription to create the Video Analyzer account under.
- - **Resource group**: Choose a resource group to create the Video Analyzer account in or click **Create new** to create a new resource group.
- - **Video Analyzer account name**: This is the name for your Video Analyzer account. The name must be all lowercase letters or numbers with no spaces and 3 to 24 characters in length.
- - **Location**: Choose a location to deploy your Video Analyzer account, for example **West US 2**.
- - **Storage account**: Create a new storage account. It is recommended to select a [standard general-purpose v2](../../storage/common/storage-account-overview.md#types-of-storage-accounts) storage account.
+1. On the search bar at the top, enter **Video Analyzer**.
+1. Select **Video Analyzers** under **Services**.
+1. Select **Add**.
+1. In the **Create Video Analyzer account** section, enter these required values:
+ - **Subscription**: Choose the subscription that you're using to create the Video Analyzer account.
+ - **Resource group**: Choose a resource group where you're creating the Video Analyzer account, or select **Create new** to create a resource group.
+ - **Video Analyzer account name**: Enter a name for your Video Analyzer account. The name must be all lowercase letters or numbers with no spaces, and 3 to 24 characters in length.
+ - **Location**: Choose a location to deploy your Video Analyzer account (for example, **West US 2**).
+ - **Storage account**: Create a storage account. We recommend that you select a [standard general-purpose v2](../../storage/common/storage-account-overview.md#types-of-storage-accounts) storage account.
- **User identity**: Create and name a new user-assigned managed identity.
-1. Click **Review + create** at the bottom of the form.
+1. Select **Review + create** at the bottom of the form.
### Create a container registry 1. Select **Create a resource** > **Containers** > **Container Registry**.
-1. In the **Basics** tab, enter values for **Resource group** ***(use the same **Resource group** from the previous sections)*** and **Registry name**. The registry name must be unique within Azure, and contain 5-50 alphanumeric characters.
-1. Accept default values for the remaining settings. Then select **Review + create**. After reviewing the settings, select **Create**.
+1. On the **Basics** tab, enter values for **Resource group** and **Registry name**. Use the same resource group from the previous sections. The registry name must be unique within Azure and contain 5 to 50 alphanumeric characters.
+1. Accept default values for the remaining settings. Then select **Review + create**. After you review the settings, select **Create**.
-## Deploying edge modules
+## Deploy edge modules
-### Deploying Video Analyzer edge module
+### Deploy the Video Analyzer edge module
-1. Navigate to your Video Analyzer account
-1. Select **Edge Modules** under the **Edge** blade
-1. Select **Add edge modules**, enter ***avaedge*** as the name for the new edge module, and select **Add**
-1. The **Copy the provisioning token** screen will appear on the right-side of your screen
-1. Copy the snippet under **Recommended desired properties for IoT module deployment**, you will need this in a later step
+1. Go to your Video Analyzer account.
+1. Select **Edge Modules** in the **Edge** pane.
+1. Select **Add edge modules**, enter **avaedge** as the name for the new edge module, and select **Add**.
+1. The **Copy the provisioning token** page appears on the right side of your screen. Copy the following snippet under **Recommended desired properties for IoT module deployment**. You'll need it in a later step.
```JSON { "applicationDataDirectory": "/var/lib/videoanalyzer",
When you create an Azure Video Analyzer account, you have to associate an Azure
"telemetryOptOut": false } ```
-1. Navigate to your IoT Hub
-1. Select **IoT Edge** under the **Automatic Device Management**
-1. Select the **Device ID** for your IoT Edge Device
-1. Select **Set modules**
-1. Select **Add** and then select **IoT Edge Module** from the drop-down menu
-1. Enter **avaedge** for the **IoT Edge Module Name**
-1. Copy and paste the following line into the **Image URI** field: `mcr.microsoft.com/media/video-analyzer:1`
-1. Select **Environment Variables**
-1. Under **NAME**, enter **LOCAL_USER_ID**, and under **VALUE**, enter **1010**
-1. On the second row under **NAME**, enter **LOCAL_GROUP_ID**, and under **VALUE**, enter **1010**
+1. Go to your Azure IoT Hub account.
+1. Select **IoT Edge** under **Automatic Device Management**.
+1. Select the **Device ID** value for your IoT Edge device.
+1. Select **Set modules**.
+1. Select **Add**, and then select **IoT Edge Module** from the dropdown menu.
+1. Enter **avaedge** for **IoT Edge Module Name**.
+1. Copy and paste the following line into the **Image URI** field: `mcr.microsoft.com/media/video-analyzer:1`.
+1. Select **Environment Variables**.
+1. Under **NAME**, enter **LOCAL_USER_ID**. Under **VALUE**, enter **1010**.
+1. On the second row under **NAME**, enter **LOCAL_GROUP_ID**. Under **VALUE**, enter **1010**.
1. Select **Container Create Options** and copy and paste the following lines: ```json {
When you create an Azure Video Analyzer account, you have to associate an Azure
} } ```
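The container create options are truncated in this excerpt. As a sketch of what this step typically configures for the Video Analyzer module (host binds for media and application data, host IPC mode, shared memory size, and log rotation; the exact values below are assumptions, not confirmed by this excerpt):

```json
{
    "HostConfig": {
        "LogConfig": {
            "Type": "",
            "Config": {
                "max-size": "10m",
                "max-file": "10"
            }
        },
        "Binds": [
            "/var/media/:/var/media/",
            "/var/lib/videoanalyzer/:/var/lib/videoanalyzer"
        ],
        "IpcMode": "host",
        "ShmSize": 1536870912
    }
}
```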
-1. Select **Module Twin Settings** and paste the snippet that you copied earlier from the **Copy the provisioning token** page in the Video Analyzer account
+1. Select **Module Twin Settings** and paste the snippet that you copied earlier from the **Copy the provisioning token** page in the Video Analyzer account.
```JSON { "applicationDataDirectory": "/var/lib/videoanalyzer",
When you create an Azure Video Analyzer account, you have to associate an Azure
"telemetryOptOut": false } ```
-1. Select **Add** at the bottom of your screen
-1. Select **Routes**
-1. Under **NAME**, enter **AVAToHub**, and under **VALUE**, enter FROM /messages/modules/avaedge/outputs/* INTO $upstream
-1. Select **Review + create**, then select **Create** and your **avaedge** edge module will be deployed
-
-### Deploying RTSP camera simulator edge module
-1. Navigate to your IoT Hub
-1. Select **IoT Edge** under the **Automatic Device Management**
-1. Select the **Device ID** for your IoT Edge Device
-1. Select **Set modules**
-1. Select **Add** and then select **IoT Edge Module** from the drop-down menu
-1. Enter **rtspsim** for the **IoT Edge Module Name**
-1. Copy and paste the following line into the **Image URI** field: `mcr.microsoft.com/lva-utilities/rtspsim-live555:1.2`
+1. Select **Add** at the bottom of your screen.
+1. Select **Routes**.
+1. Under **NAME**, enter **AVAToHub**. Under **VALUE**, enter `FROM /messages/modules/avaedge/outputs/* INTO $upstream`.
+1. Select **Review + create**, and then select **Create** to deploy your **avaedge** edge module.
+
+### Deploy the edge module for the RTSP camera simulator
+1. Go to your IoT Hub account.
+1. Select **IoT Edge** under **Automatic Device Management**.
+1. Select the **Device ID** value for your IoT Edge device.
+1. Select **Set modules**.
+1. Select **Add**, and then select **IoT Edge Module** from the dropdown menu.
+1. Enter **rtspsim** for **IoT Edge Module Name**.
+1. Copy and paste the following line into the **Image URI** field: `mcr.microsoft.com/lva-utilities/rtspsim-live555:1.2`.
1. Select **Container Create Options** and copy and paste the following lines: ```json {
When you create an Azure Video Analyzer account, you have to associate an Azure
} } ```
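These container create options are also truncated in this excerpt. A minimal sketch for the simulator, assuming sample media files are staged on the host at `/home/localuser/samples/input` (both the host path and the container path are assumptions):

```json
{
    "HostConfig": {
        "Binds": [
            "/home/localuser/samples/input:/live/mediaServer/media"
        ]
    }
}
```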
-1. Select **Add** at the bottom of your screen
-1. Select **Review + create**, then select **Create** and your **rtspsim** edge module will be deployed
+1. Select **Add** at the bottom of your screen.
+1. Select **Review + create**, and then select **Create** to deploy your **rtspsim** edge module.
### Verify your deployment
-On the device details page, verify that the **avaedge** and **rtspsim** modules are listed as both, **Specified in Deployment** and **Reported by Device**.
+On the device details page, verify that the **avaedge** and **rtspsim** modules are listed as both **Specified in Deployment** and **Reported by Device**.
-It may take a few moments for the modules to be started on the device and then reported back to IoT Hub. Refresh the page to see an updated status.
-Status code: 200 ΓÇôOK means that [the IoT Edge runtime](../../iot-edge/iot-edge-runtime.md) is healthy and is operating fine.
+It might take a few moments for the modules to be started on the device and then reported back to IoT Hub. Refresh the page to see an updated status. Status code **200 -- OK** means that [the IoT Edge runtime](../../iot-edge/iot-edge-runtime.md) is healthy and is operating fine.
-![Screenshot shows a status value for an IoT Edge runtime.](./media/deploy-iot-edge-device/status.png)
+![Screenshot that shows a status value for an IoT Edge runtime.](./media/deploy-iot-edge-device/status.png)
## Set up your development environment

### Obtain your IoT Hub connection string
-1. In Azure portal, navigate to your
-1. Look for **Shared access policies** option in the left hand navigation, and click there.
-1. Click on the policy named **iothubowner**
-1. Copy the **Primary connection string** - it will look like `HostName=xxx.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=XXX`
+1. In the Azure portal, go to your IoT Hub account.
+1. Look for **Shared access policies** in the left pane and select it.
+1. Select the policy named **iothubowner**.
+1. Copy the **Primary connection string** value. It will look like `HostName=xxx.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=XXX`.
-### Connect to the IoT Hub
+### Connect to IoT Hub
-1. Open Visual Studio Code, select **View** > **Explorer**. Or, select Ctrl+Shift+E.
+1. Open Visual Studio Code and select **View** > **Explorer**. Or, select Ctrl+Shift+E.
1. In the lower-left corner of the **Explorer** tab, select **Azure IoT Hub**.
1. Select the **More Options** icon to see the context menu. Then select **Set IoT Hub Connection String**.
1. When an input box appears, enter your IoT Hub connection string.
-1. In about 30 seconds, refresh Azure IoT Hub in the lower-left section. You should see your **device ID**, which should have the following modules deployed:
+1. In about 30 seconds, refresh Azure IoT Hub in the lower-left section. You should see your device ID, which should have the following modules deployed:
    * Video Analyzer edge module (module name **avaedge**)
    * RTSP simulator (module name **rtspsim**)
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/get-started-detect-motion-emit-events/modules-node.png" alt-text="Expand the Modules node":::
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/get-started-detect-motion-emit-events/modules-node.png" alt-text="Screenshot that shows the expanded Modules node.":::
> [!TIP]
-> If you have [manually deployed Video Analyzer](deploy-iot-edge-device.md) yourselves on an edge device (such as an ARM64 device), then you will see the module show up under that device, under the Azure IoT Hub. You can select that module, and follow the rest of the steps below.
+> If you have [manually deployed Video Analyzer](deploy-iot-edge-device.md) on an edge device (such as an ARM64 device), the module will appear under that device, under Azure IoT Hub. You can select that module and continue with the following steps.
### Prepare to monitor the modules
-When you use run this quickstart, events will be sent to the IoT Hub. To see these events, follow these steps:
+When you use this quickstart, events will be sent to IoT Hub. To see these events, follow these steps:
-1. In Visual Studio Code, open the **Extensions** tab (or press Ctrl+Shift+X) and search for **Azure IoT Hub**.
-1. Right-click and select **Extension Settings**.
+1. In Visual Studio Code, open the **Extensions** tab (or select Ctrl+Shift+X) and search for **Azure IoT Hub**.
+1. Right-click the IoT Hub extension and select **Extension Settings**.
> [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/get-started-detect-motion-emit-events/extension-settings.png" alt-text="Select Extension Settings":::
-1. Search and enable "Show Verbose Message".
+ > :::image type="content" source="./media/get-started-detect-motion-emit-events/extension-settings.png" alt-text="Screenshot that shows the selection of Extension Settings.":::
+1. Search for and enable **Show Verbose Message**.
> [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/get-started-detect-motion-emit-events/verbose-message.png" alt-text="Show Verbose Message":::
-1. Open the Explorer pane in Visual Studio Code, and look for **Azure IoT Hub** in the lower-left corner.
+ > :::image type="content" source="./media/get-started-detect-motion-emit-events/verbose-message.png" alt-text="Screenshot of Show Verbose Message enabled.":::
+1. Open the **Explorer** pane in Visual Studio Code, and look for **Azure IoT Hub** in the lower-left corner.
1. Expand the **Devices** node.
-1. Right-click on your **device ID**, and select **Start Monitoring Built-in Event Endpoint**.
+1. Right-click your device ID, and select **Start Monitoring Built-in Event Endpoint**.
> [!NOTE]
- > You might be asked to provide Built-in endpoint information for the IoT Hub. To get that information, in Azure portal, navigate to your IoT Hub and look for **Built-in endpoints** option in the left navigation pane. Click there and look for the **Event Hub-compatible endpoint** under **Event Hub compatible endpoint** section. Copy and use the text in the box. The endpoint will look something like this:
- ```
- Endpoint=sb://iothub-ns-xxx.servicebus.windows.net/;SharedAccessKeyName=iothubowner;SharedAccessKey=XXX;EntityPath=<IoT Hub name>
- ```
+ > You might be asked to provide built-in endpoint information for IoT Hub. To get that information, in the Azure portal, go to your IoT Hub account and look for **Built-in endpoints** in the left pane. Select it and look for the **Event Hub-compatible endpoint** section. Copy and use the text in the box. The endpoint will look something like this:
+ >
+ > ```
+ > Endpoint=sb://iothub-ns-xxx.servicebus.windows.net/;SharedAccessKeyName=iothubowner;SharedAccessKey=XXX;EntityPath=<IoT Hub name>
+ > ```
## Use direct method calls
-You can now analyze live video streams by invoking direct methods exposed by the Video Analyzer edge module. Read [Video Analyzer direct methods](direct-methods.md) to examine all the direct methods provided by the module.
+You can now analyze live video streams by invoking direct methods that the Video Analyzer edge module exposes. Read [Video Analyzer direct methods](direct-methods.md) to examine all the direct methods that the module provides.
### Enumerate pipeline topologies

This step enumerates all the [pipeline topologies](pipeline.md) in the module.
-1. Right-click on "avaedge" module and select **Invoke Module Direct Method** from the context menu.
-1. You will see an edit box pop in the top-middle of Visual Studio Code window. Enter "pipelineTopologyList" in the edit box and press enter.
-1. Next, copy, and paste the below JSON payload in the edit box and press enter.
+1. Right-click the **avaedge** module and select **Invoke Module Direct Method** from the shortcut menu.
+1. Type **pipelineTopologyList** in the edit box and select the Enter key.
+1. Copy the following JSON payload and paste it in the edit box, and then select the Enter key.
-```json
-{
- "@apiVersion" : "1.0"
-}
-```
+ ```json
+ {
+ "@apiVersion" : "1.0"
+ }
+ ```
-Within a few seconds, you will see the following response in the OUTPUT window:
+Within a few seconds, the following response appears in the **OUTPUT** window:
``` [DirectMethod] Invoking Direct Method [pipelineTopologyList] to [deviceId/avaedge] ...
Within a few seconds, you will see the following response in the OUTPUT window:
} ```
-The above response is expected, as no pipeline topologies have been created.
+That response is expected, because no pipeline topologies have been created.
### Set a pipeline topology
-Using the same steps as above, you can invoke `pipelineTopologySet` to set a pipeline topology using the following JSON as the payload. You will be creating a pipeline topology named "MotionDetection".
+By using the same steps described earlier, you can invoke `pipelineTopologySet` to set a pipeline topology by using the following JSON as the payload. You'll create a pipeline topology named *MotionDetection*.
```json
Using the same steps as above, you can invoke `pipelineTopologySet` to set a pip
} ```
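The topology payload is truncated in this excerpt. A representative sketch of a *MotionDetection* topology with the three parameters, the RTSP source, the motion detection processor, and the IoT Hub message sink (parameter defaults, node names, and the sensitivity value are assumptions based on typical quickstart defaults):

```json
{
    "@apiVersion": "1.0",
    "name": "MotionDetection",
    "properties": {
        "description": "Analyze live video to detect motion and emit events",
        "parameters": [
            { "name": "rtspUrl", "type": "String", "description": "RTSP URL of the camera or simulator" },
            { "name": "rtspUserName", "type": "String", "default": "dummyUserName" },
            { "name": "rtspPassword", "type": "String", "default": "dummyPassword" }
        ],
        "sources": [
            {
                "@type": "#Microsoft.VideoAnalyzer.RtspSource",
                "name": "rtspSource",
                "endpoint": {
                    "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
                    "url": "${rtspUrl}",
                    "credentials": {
                        "@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
                        "username": "${rtspUserName}",
                        "password": "${rtspPassword}"
                    }
                }
            }
        ],
        "processors": [
            {
                "@type": "#Microsoft.VideoAnalyzer.MotionDetectionProcessor",
                "name": "motionDetection",
                "sensitivity": "medium",
                "inputs": [ { "nodeName": "rtspSource" } ]
            }
        ],
        "sinks": [
            {
                "@type": "#Microsoft.VideoAnalyzer.IotHubMessageSink",
                "name": "hubSink",
                "hubOutputName": "inferenceOutput",
                "inputs": [ { "nodeName": "motionDetection" } ]
            }
        ]
    }
}
```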
-This JSON payload creates a topology that defines three parameters, where two of them have default values. The topology has one source node ([RTSP source](pipeline.md#rtsp-source)), one processor node ([motion detection processor](pipeline.md#motion-detection-processor) and one sink node ([IoT Hub message sink](pipeline.md#iot-hub-message-sink)). The visual representation of the topology is shown above.
+This JSON payload creates a topology that defines three parameters, two of which have default values. The topology has one source node ([RTSP source](pipeline.md#rtsp-source)), one processor node ([motion detection processor](pipeline.md#motion-detection-processor)), and one sink node ([IoT Hub message sink](pipeline.md#iot-hub-message-sink)). The visual representation of the topology is shown earlier in this article.
-Within a few seconds, you see the following response in the **OUTPUT** window.
+Within a few seconds, the following response appears in the **OUTPUT** window:
```json {
The returned status is 201. This status indicates that a new topology was create
Try the following next steps:
-1. Invoke `pipelineTopologySet` again. The returned status code is 200. This code indicates that an existing topology was successfully updated.
-1. Invoke `pipelineTopologySet` again, but change the description string. The returned status code is 200, and the description is updated to the new value.
-1. Invoke `pipelineTopologyList` as outlined in the previous section. Now you can see the "MotionDetection" topology in the returned payload.
+* Invoke `pipelineTopologySet` again. The returned status code is 200. This code indicates that an existing topology was successfully updated.
+* Invoke `pipelineTopologySet` again, but change the description string. The returned status code is 200, and the description is updated to the new value.
+* Invoke `pipelineTopologyList` as outlined in the previous section. Now you can see the *MotionDetection* topology in the returned payload.
### Read the pipeline topology
-Invoke `pipelineTopologyGet` by using the following payload.
+Invoke `pipelineTopologyGet` by using the following payload:
```json {
Invoke `pipelineTopologyGet` by using the following payload.
} ```
-Within a few seconds, you see the following response in the **OUTPUT** window:
+Within a few seconds, the following response appears in the **OUTPUT** window:
```json {
In the response payload, notice these details:
* The status code is 200, indicating success.
* The payload includes the `createdAt` time stamp and the `lastModifiedAt` time stamp.
-### Create a live pipeline using the topology
+### Create a live pipeline by using the topology
-Next, create a live pipeline that references the above pipeline topology. Invoke the `livePipelineSet` direct method with the following payload:
+Next, create a live pipeline that references the preceding pipeline topology. Invoke the `livePipelineSet` direct method with the following payload:
```json {
Next, create a live pipeline that references the above pipeline topology. Invoke
Notice that this payload:
-* The payload above specifies the topology ("MotionDetection") to be used by the live pipeline.
-* The payload contains parameter value for `rtspUrl`, which did not have a default value in the topology payload. This value is a link to the below sample video:
+* Specifies the topology (*MotionDetection*) that the live pipeline will use.
+* Contains a parameter value for `rtspUrl`, which did not have a default value in the topology payload. This value is a link to the following sample video:
> [!VIDEO https://www.microsoft.com/videoplayer/embed/RE4LTY4]
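The `livePipelineSet` payload itself is elided in this excerpt. A representative sketch, assuming the pipeline is named `mdpipeline1` and the simulator serves the sample video as `camera-300s.mkv` (both names are assumptions; the credential values are placeholders):

```json
{
    "@apiVersion": "1.0",
    "name": "mdpipeline1",
    "properties": {
        "topologyName": "MotionDetection",
        "description": "Sample pipeline description",
        "parameters": [
            { "name": "rtspUrl", "value": "rtsp://rtspsim:554/media/camera-300s.mkv" },
            { "name": "rtspUserName", "value": "testuser" },
            { "name": "rtspPassword", "value": "testpassword" }
        ]
    }
}
```

Note that `rtspUrl` points at the **rtspsim** module you deployed earlier, which serves the video file over RTSP.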
-Within few seconds, you see the following response in the **OUTPUT** window:
+Within a few seconds, the following response appears in the **OUTPUT** window:
```json {
Within few seconds, you see the following response in the **OUTPUT** window:
In the response payload, notice that:
-* Status code is 201, indicating a new live pipeline was created.
-* State is "Inactive", indicating that the live pipeline was created but not activated. For more information, see [pipeline states](pipeline.md#pipeline-states).
+* The status code is 201, indicating a new live pipeline was created.
+* The state is `Inactive`, indicating that the live pipeline was created but not activated. For more information, see [Pipeline states](pipeline.md#pipeline-states).
Try the following direct methods as next steps:
-* Invoke `livePipelineSet` again with the same payload and note that the returned status code is now 200.
-* Invoke `livePipelineSet` again but with a different description and note the updated description in the response payload, indicating that the live pipeline was successfully updated.
-* Invoke `livePipelineSet`, but change the name to "mdpipeline2" and `rtspUrl` to "rtsp://rtspsim:554/media/lots_015.mkv". In the response payload, notice the newly created live pipeline (that is, status code 201).
- > [!NOTE]
- > As explained in [Pipeline topologies](pipeline.md#pipeline-topologies), you can create multiple live pipelines, to analyze live video streams from many cameras using the same pipeline topology. If you do create additional live pipelines, take care to delete them during the cleanup step.
+* Invoke `livePipelineSet` again with the same payload. Note that the returned status code is now 200.
+* Invoke `livePipelineSet` again but with a different description. Note the updated description in the response payload, indicating that the live pipeline was successfully updated.
+* Invoke `livePipelineSet`, but change the name to `mdpipeline2` and change `rtspUrl` to `rtsp://rtspsim:554/media/lots_015.mkv`. In the response payload, note the newly created live pipeline (that is, status code 201). A sketch of this payload appears after the following note.
+
+ > [!NOTE]
+ > As explained in [Pipeline topologies](pipeline.md#pipeline-topologies), you can create multiple live pipelines, to analyze live video streams from many cameras by using the same pipeline topology. If you create more live pipelines, take care to delete them during the cleanup step.
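To make that third bullet concrete, here's a sketch of the `mdpipeline2` payload, reusing the structure of the earlier payload with the name and URL given above:

```json
{
    "@apiVersion": "1.0",
    "name": "mdpipeline2",
    "properties": {
        "topologyName": "MotionDetection",
        "description": "Sample pipeline description",
        "parameters": [
            { "name": "rtspUrl", "value": "rtsp://rtspsim:554/media/lots_015.mkv" }
        ]
    }
}
```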
### Activate the live pipeline
-Next, you can activate the live pipeline - which starts the flow of (simulated) live video through the pipeline. Invoke the direct method `livePipelineActivate` with the following payload:
+You can activate the live pipeline to start the flow of (simulated) live video through the pipeline. Invoke the direct method `livePipelineActivate` with the following payload:
```json {
Next, you can activate the live pipeline - which starts the flow of (simulated)
} ```
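The activation payload is truncated in this excerpt. It's a name-only payload; a sketch, assuming the live pipeline created earlier is named `mdpipeline1`:

```json
{
    "@apiVersion": "1.0",
    "name": "mdpipeline1"
}
```

The same name-only shape applies to the `livePipelineGet`, `livePipelineDeactivate`, and `livePipelineDelete` calls later in this article.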
-Within a few seconds, you see the following response in the OUTPUT window.
+Within a few seconds, the following response appears in the **OUTPUT** window:
```json {
The status code of 200 indicates that the live pipeline was successfully activat
### Check the state of the live pipeline
-Now invoke the `livePipelineGet` direct method with the following payload:
+Invoke the `livePipelineGet` direct method with the following payload:
```json {
Now invoke the `livePipelineGet` direct method with the following payload:
} ```
-Within a few seconds, you see the following response in the OUTPUT window.
+Within a few seconds, the following response appears in the **OUTPUT** window:
```json {
Within a few seconds, you see the following response in the OUTPUT window.
In the response payload, notice the following details:

* The status code is 200, indicating success.
-* The state is "Active", indicating the live pipeline is now active.
+* The state is `Active`, indicating that the live pipeline is now active.
## Observe results
-The live pipeline that you created and activated above uses the motion detection processor node to detect motion in the incoming live video stream and sends events to IoT Hub sink. These events are then relayed to your IoT Hub as messages, which can now be observed. You will see messages in the OUTPUT window that have the following "body":
+The live pipeline that you created and activated uses the motion detection processor node to detect motion in the incoming live video stream and sends events to the IoT Hub sink. These events are then relayed to IoT Hub as messages, which can now be observed. Messages in the **OUTPUT** window will have the following "body":
```json
The live pipeline that you created and activated above uses the motion detection
} ```
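The body is truncated in this excerpt. A representative motion event body looks like the following sketch (the timestamp and bounding-box values are illustrative, not taken from this excerpt):

```json
{
    "timestamp": 145471641211899,
    "inferences": [
        {
            "type": "motion",
            "motion": {
                "box": {
                    "l": 0.322176,
                    "t": 0.574627,
                    "w": 0.525,
                    "h": 0.088794
                }
            }
        }
    ]
}
```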
-Notice this detail:
-
-* The inferences section indicates that the type is motion. It provides additional data about the motion event, and provides a bounding box for the region of the video frame (at the given timestamp) where motion was detected.
+The `inferences` section indicates that the type is motion. It provides more data about the motion event. It also provides a bounding box for the region of the video frame (at the given time stamp) where motion was detected.
-## Invoke additional direct method calls to clean up
+## Invoke more direct method calls to clean up
Next, you can invoke direct methods to deactivate and delete the live pipeline (in that order).
Invoke the `livePipelineDeactivate` direct method with the following payload:
} ```
-Within a few seconds, you see the following response in the **OUTPUT** window:
+Within a few seconds, the following response appears in the **OUTPUT** window:
```json {
Next, try to invoke `livePipelineGet` as indicated previously in this article. O
### Delete the live pipeline
-Invoke the direct method `livePipelineDelete` with the following payload
+Invoke the direct method `livePipelineDelete` with the following payload:
```json {
Invoke the direct method `livePipelineDelete` with the following payload
} ```
-Within a few seconds, you see the following response in the **OUTPUT** window:
+Within a few seconds, the following response appears in the **OUTPUT** window:
```json {
Within a few seconds, you see the following response in the **OUTPUT** window:
```

A status code of 200 indicates that the live pipeline was successfully deleted.
-If you also created the pipeline called "mdpipeline2", then you cannot delete the pipeline topology without also deleting this additional pipeline. Invoke the direct method `livePipelineDelete` again by using the following payload:
+If you also created the pipeline called *mdpipeline2*, then you can't delete the pipeline topology without also deleting this additional pipeline. Invoke the direct method `livePipelineDelete` again by using the following payload:
``` {
If you also created the pipeline called "mdpipeline2", then you cannot delete th
} ```
-Within a few seconds, you see the following response in the OUTPUT window:
+Within a few seconds, the following response appears in the **OUTPUT** window:
```json {
After all live pipelines have been deleted, you can invoke the `pipelineTopology
} ```
-Within a few seconds, you see the following response in the **OUTPUT** window.
+Within a few seconds, the following response appears in the **OUTPUT** window:
```json {
You can try to invoke `pipelineTopologyList` and observe that the module contain
## Next steps
-* Try the [quickstart for recording videos to the cloud when motion is detected](detect-motion-record-video-clips-cloud.md)
-* Try the [quickstart for analyzing live video](analyze-live-video-use-your-model-http.md)
-* Learn more about [diagnostic messages](monitor-log-edge.md)
+* Try the [quickstart for recording videos to the cloud when motion is detected](detect-motion-record-video-clips-cloud.md).
+* Try the [quickstart for analyzing live video](analyze-live-video-use-your-model-http.md).
+* Learn more about [diagnostic messages](monitor-log-edge.md).
azure-video-analyzer Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/troubleshoot.md
Title: Troubleshoot Azure Video Analyzer - Azure description: This article covers troubleshooting steps for Azure Video Analyzer. Previously updated : 05/04/2021 Last updated : 07/01/2021 # Troubleshoot Azure Video Analyzer
When self-guided troubleshooting steps don't resolve your problem, go the Azure
To gather the relevant logs that should be added to the ticket, follow the instructions below in order and upload the log files in the **Details** pane of the support request.
-1. [Configure the Video Analyzer module to collect Verbose Logs]()
-1. [Turn on Debug Logs]()
+1. [Configure the Video Analyzer module to collect Verbose Logs](#configure-video-analyzer-module-to-collect-verbose-logs)
+1. [Turn on Debug Logs](#video-analyzer-debug-logs)
1. Reproduce the issue
1. Connect to the virtual machine from the **IoT Hub** page in the portal
azure-vmware Tutorial Deploy Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-deploy-vmware-hcx.md
Title: Deploy and configure VMware HCX description: Learn how to deploy and configure a VMware HCX solution for your Azure VMware Solution private cloud. Previously updated : 04/23/2021 Last updated : 06/30/2021 # Deploy and configure VMware HCX
VMware HCX Advanced Connector is pre-deployed in Azure VMware Solution. It suppo
> > VMware HCX Enterprise is available with Azure VMware Solution as a preview service. It's free and is subject to terms and conditions for a preview service. After the VMware HCX Enterprise service is generally available, you'll get a 30-day notice that billing will switch over. You'll also have the option to turn off or opt out of the service. Downgrading from HCX Enterprise to HCX Advanced is possible without redeploying, but you'll have to log a support ticket for that action to take place. If you're planning a downgrade, make sure no migrations are scheduled and that features such as RAV and MON are not in use.
-First, review [Before you begin](#before-you-begin), [Software version requirements](https://docs.vmware.com/en/VMware-HCX/services/user-guide/GUID-54E5293B-8707-4D29-BFE8-EE63539CC49B.html), and the [Prerequisites](#prerequisites) sections.
+First, review [Before you begin](#before-you-begin), [Software version requirements](https://docs.vmware.com/en/VMware-HCX/4.1/hcx-user-guide/GUID-54E5293B-8707-4D29-BFE8-EE63539CC49B.html), and the [Prerequisites](#prerequisites) sections.
Then, we'll walk through all the necessary procedures to:
After you're finished, follow the recommended next steps at the end of this arti
As you prepare your deployment, we recommend that you review the following VMware documentation:
-* [VMware HCX user guide](https://docs.vmware.com/en/VMware-HCX/services/user-guide/GUID-E456F078-22BE-494B-8E4B-076EF33A9CF4.html)
-* [Migrating Virtual Machines with VMware HCX](https://docs.vmware.com/en/VMware-HCX/services/user-guide/GUID-D0CD0CC6-3802-42C9-9718-6DA5FEC246C6.html?hWord=N4IghgNiBcIBIGEAaACAtgSwOYCcwBcMB7AOxAF8g)
-* [VMware HCX Deployment Considerations](https://docs.vmware.com/en/VMware-HCX/services/install-checklist/GUID-C0A0E820-D5D0-4A3D-AD8E-EEAA3229F325.html)
+* [VMware HCX user guide](https://docs.vmware.com/en/VMware-HCX/4.1/hcx-user-guide/GUID-BFD7E194-CFE5-4259-B74B-991B26A51758.html)
+* [Migrating Virtual Machines with VMware HCX](https://docs.vmware.com/en/VMware-HCX/4.1/hcx-user-guide/GUID-D0CD0CC6-3802-42C9-9718-6DA5FEC246C6.html)
+* [Prepare for HCX installations](https://docs.vmware.com/en/VMware-HCX/4.1/hcx-user-guide/GUID-A631101E-8564-4173-8442-1D294B731CEB.html)
* [VMware blog series - cloud migration](https://blogs.vmware.com/vsphere/2019/10/cloud-migration-series-part-2.html)
-* [Network ports required for VMware HCX](https://ports.vmware.com/home/VMware-HCX)
+* [Network ports required for VMware HCX](https://docs.vmware.com/en/VMware-HCX/4.1/hcx-user-guide/GUID-E456F078-22BE-494B-8E4B-076EF33A9CF4.html)
## Prerequisites
azure-vmware Tutorial Network Checklist https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-network-checklist.md
Title: Tutorial - Network planning checklist description: Learn about the network requirements for network connectivity and network ports on Azure VMware Solution. Previously updated : 06/08/2021 Last updated : 07/01/2021 # Networking planning checklist for Azure VMware Solution
In this tutorial, you'll learn about:
Ensure that all gateways, including the ExpressRoute provider's service, support 4-byte Autonomous System Number (ASN). Azure VMware Solution uses 4-byte public ASNs for advertising routes.

## Virtual network and ExpressRoute circuit considerations
-When you create a virtual network connection in your subscription, the ExpressRoute circuit gets established through peering, uses an authorization key, and a peering ID you request in the Azure portal. The peering is a private, one-to-one connection between your private cloud and the virtual network.
+When you create a virtual network connection in your subscription, the ExpressRoute circuit is established through peering and uses an authorization key and a peering ID that you request in the Azure portal. The peering is a private, one-to-one connection between your private cloud and the virtual network.
> [!NOTE]
> The ExpressRoute circuit is not part of a private cloud deployment. The on-premises ExpressRoute circuit is beyond the scope of this document. If you require on-premises connectivity to your private cloud, you can use one of your existing ExpressRoute circuits or purchase one in the Azure portal.

When deploying a private cloud, you receive IP addresses for vCenter and NSX-T Manager. To access those management interfaces, you'll need to create more resources in your subscription's virtual network. You can find the procedures for creating those resources and establishing [ExpressRoute private peering](tutorial-expressroute-global-reach-private-cloud.md) in the tutorials.
-The private cloud logical networking comes with pre-provisioned NSX-T. A Tier-0 gateway and Tier-1 gateway is pre-provisioned for you. You can create a segment and attach it to the existing Tier-1 gateway or attach it to a new Tier-1 gateway that you define. NSX-T logical networking components provide East-West connectivity between workloads and provide North-South connectivity to the internet and Azure services.
+The private cloud logical networking comes with pre-provisioned NSX-T. A Tier-0 gateway and Tier-1 gateway are pre-provisioned for you. You can create a segment and attach it to the existing Tier-1 gateway or attach it to a new Tier-1 gateway that you define. NSX-T logical networking components provide East-West connectivity between workloads and provide North-South connectivity to the internet and Azure services.
## Routing and subnet considerations

The Azure VMware Solution private cloud is connected to your Azure virtual network using an Azure ExpressRoute connection. This high bandwidth, low latency connection allows you to access services running in your Azure subscription from your private cloud environment. The routing is Border Gateway Protocol (BGP) based, automatically provisioned, and enabled by default for each private cloud deployment.
backup Backup Azure Database Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-database-postgresql.md
You may use this solution independently or in addition to the native backup solu
- Only the data is recovered at the time of restore. "Roles" aren't restored.
- In preview, we recommend that you run the solution only on your test environment.
+## Prerequisite permissions for configuring backup and restore
+
+Azure Backup follows strict security guidelines. Even though it's a native Azure service, permissions on the resource aren't assumed, and need to be explicitly given by the user. Similarly, credentials to connect to the database aren't stored. This is important to safeguard your data. Instead, we use Azure Active Directory authentication.
+
+[Download this document](https://download.microsoft.com/download/7/4/d/74d689aa-909d-4d3e-9b18-f8e465a7ebf5/OSSbkpprep_automated.docx) to get an automated script and related instructions. It will grant an appropriate set of permissions to an Azure PostgreSQL server, for backup and restore.
+
## Backup process

1. This solution uses **pg_dump** to take backups of your Azure PostgreSQL databases.
Follow this step-by-step guide to trigger a restore:
>[!NOTE]
>Archive support for Azure Database for PostgreSQL is in limited public preview.
-## Prerequisite permissions for configure backup and restore
-Azure Backup follows strict security guidelines. Even though it's a native Azure service, permissions on the resource aren't assumed, and need to be explicitly given by the user. Similarly, credentials to connect to the database aren't stored. This is important to safeguard your data. Instead, we use Azure Active Directory authentication.
-
-[Download this document](https://download.microsoft.com/download/7/4/d/74d689aa-909d-4d3e-9b18-f8e465a7ebf5/OSSbkpprep_automated.docx) to get an automated script and related instructions. It will grant an appropriate set of permissions to an Azure PostgreSQL server, for backup and restore.
## Manage the backed-up Azure PostgreSQL databases
backup Backup Azure Sql Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-sql-automation.md
Title: SQL DB in Azure VM backup & restore via PowerShell description: Back up and restore SQL Databases in Azure VMs using Azure Backup and PowerShell. Previously updated : 06/30/2019 Last updated : 06/30/2021 ms.assetid: 57854626-91f9-4677-b6a2-5d12b6a866e1
Azure Backup can restore SQL Server databases that are running on Azure VMs as f
Check the prerequisites mentioned [here](restore-sql-database-azure-vm.md#restore-prerequisites) before restoring SQL DBs.
+> [!WARNING]
+> Due to a security issue related to RBAC, we had to introduce a breaking change in the restore commands for SQL databases via PowerShell. Upgrade to Az version 6.0.0 or later so that the proper restore commands are submitted via PowerShell. The latest PowerShell commands are provided below.
+ First fetch the relevant backed up SQL DB using the [Get-AzRecoveryServicesBackupItem](/powershell/module/az.recoveryservices/get-azrecoveryservicesbackupitem) PowerShell cmdlet. ```powershell
$OverwriteWithLogConfig = Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -Po
As outlined above, if the target SQLInstance lies within another Azure VM, make sure it's [registered to this vault](#registering-the-sql-vm) and the relevant SQLInstance appears as a protectable item.

```powershell
+$TargetContainer = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -Status Registered -VaultId $targetVault.ID
$TargetInstance = Get-AzRecoveryServicesBackupProtectableItem -WorkloadType MSSQL -ItemType SQLInstance -Name "<SQLInstance Name>" -ServerName "<SQL VM name>" -VaultId $targetVault.ID
```
-Then just pass the relevant recovery point, target SQL instance with the right flag as shown below.
+Then pass the relevant recovery point and target SQL instance, along with the right flag and the target container under which the target SQL instance exists, as shown below.
##### Alternate restore with distinct Recovery point

```powershell
-$AnotherInstanceWithFullConfig = Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -RecoveryPoint $FullRP -TargetItem $TargetInstance -AlternateWorkloadRestore -VaultId $targetVault.ID
+$AnotherInstanceWithFullConfig = Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -RecoveryPoint $FullRP -TargetItem $TargetInstance -AlternateWorkloadRestore -VaultId $targetVault.ID -TargetContainer $TargetContainer[1]
```

##### Alternate restore with log point-in-time

```powershell
-$AnotherInstanceWithLogConfig = Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -PointInTime $PointInTime -Item $bkpItem -TargetItem $TargetInstance -AlternateWorkloadRestore -VaultId $targetVault.ID
+$AnotherInstanceWithLogConfig = Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -PointInTime $PointInTime -Item $bkpItem -TargetItem $TargetInstance -AlternateWorkloadRestore -VaultId $targetVault.ID -TargetContainer $TargetContainer[1]
```

##### Restore as Files
PointInTime : 1/1/0001 12:00:00 AM
#### Alternate workload restore to a vault in secondary region > [!IMPORTANT]
-> Support for secondary region restores for SQL from Powershell is available from Az 4.1.0
+> Support for secondary region restores for SQL from PowerShell is available beginning with Az 6.0.0.
If you have enabled cross region restore, then the recovery points will be replicated to the secondary, paired region as well. Then, you can fetch those recovery points and trigger a restore to a machine, present in that paired region. As with the normal restore, the target machine should be registered to the target vault in the secondary region. The following sequence of steps should clarify the end-to-end process.
As documented [above](#determine-recovery-configuration) for the normal SQL rest
##### For full restores from secondary region

```powershell
-Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -RecoveryPoint $FullRPFromSec[0] -TargetItem $secSQLInstance -AlternateWorkloadRestore -VaultId $vault.ID
+Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -RecoveryPoint $FullRPFromSec[0] -TargetItem $secSQLInstance -AlternateWorkloadRestore -VaultId $vault.ID -TargetContainer $seccontainer[1]
```

##### For log point in time restores from secondary region

```powershell
-Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -PointInTime $PointInTime -Item $secondaryBkpItems[0] -TargetItem $secSQLInstance -AlternateWorkloadRestore -VaultId $vault.ID
+Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -PointInTime $PointInTime -Item $secondaryBkpItems[0] -TargetItem $secSQLInstance -AlternateWorkloadRestore -VaultId $vault.ID -TargetContainer $seccontainer[1]
```

Once the relevant configuration is obtained for primary region restore or secondary region restore, the same restore command can be used to trigger restores and later tracked using the jobIDs.
backup Restore Blobs Storage Account Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/restore-blobs-storage-account-cli.md
Track all the jobs using the [az dataprotection job list](/cli/azure/dataprotect
You can also use Az.ResourceGraph to track all jobs across all Backup vaults. Use the [az dataprotection job list-from-resourcegraph](/cli/azure/dataprotection/job?view=azure-cli-latest&preserve-view=true#az_dataprotection_job_list_from_resourcegraph) command to get the relevant job, which can be across any Backup vault.

```azurecli-interactive
-az dataprotection job list-from-resourcegraph --datasource-type AzureDisk --operation Restore
+az dataprotection job list-from-resourcegraph --datasource-type AzureBlob --operation Restore
```

## Next steps
batch Batch Account Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-account-create-portal.md
Title: Create an account in the Azure portal
-description: Learn how to create an Azure Batch account in the Azure portal to run large-scale parallel workloads in the cloud
+description: Learn how to create an Azure Batch account in the Azure portal to run large-scale parallel workloads in the cloud.
Previously updated : 02/23/2021 Last updated : 07/01/2021
When creating your first Batch account in user subscription mode, you need to re
:::image type="content" source="media/batch-account-create-portal/register_provider.png" alt-text="Screenshot showing the Microsoft.Batch resource provider.":::
-1. Return to the **Subscription** page, then select **Access control (IAM)** > **Role assignments** > **Add** > **Add role assignment**.
+1. Return to the **Subscription** page, then select **Access control (IAM)**.
- :::image type="content" source="media/batch-account-create-portal/subscription_iam.png" alt-text="Screenshot of the Role assignments page for a subscription.":::
+1. Assign the **Contributor** or **Owner** role to the Batch API. You can find this account by searching for **Microsoft Azure Batch** or **MicrosoftAzureBatch**. (The Object ID for the Batch API is **f520d84c-3fd3-4cc8-88d4-2ed25b00d27a**, and the Application ID is **ddbf3205-c6bd-46ae-8127-60eb93363864**.)
-1. On the **Add role assignment** page, select the **Contributor** or **Owner** role, then search for the Batch API. Search for **Microsoft Azure Batch** or **MicrosoftAzureBatch** to find the API. (**ddbf3205-c6bd-46ae-8127-60eb93363864** is the Application ID for the Batch API.)
-
-1. Once you find the Batch API, select it and select **Save**.
+ For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
### Create a Key Vault
batch Batch Task Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-task-dependencies.md
Title: Create task dependencies to run tasks description: Create tasks that depend on the completion of other tasks for processing MapReduce style and similar big data workloads in Azure Batch. Previously updated : 12/28/2020 Last updated : 06/29/2021 # Create task dependencies to run tasks that depend on other tasks
-With Batch task dependencies, you create tasks that are scheduled for execution on compute nodes after the completion of one or more parent tasks. For example, you can create a job that renders each frame of a 3D movie with separate, parallel tasks. The final task--the "merge task"--merges the rendered frames into the complete movie only after all frames have been successfully rendered.
+With Batch task dependencies, you create tasks that are scheduled for execution on compute nodes after the completion of one or more parent tasks. For example, you can create a job that renders each frame of a 3D movie with separate, parallel tasks. The final task merges the rendered frames into the complete movie only after all frames have been successfully rendered. In other words, the final task is dependent on the previous parent tasks.
Some scenarios where task dependencies are useful include:
- Pre-rendering and post-rendering processes, where each task must complete before the next task can begin.
- Any other job in which downstream tasks depend on the output of upstream tasks.
-By default, dependent tasks are scheduled for execution only after the parent task has completed successfully. You can optionally specify a [dependency action](#dependency-actions) to override the default behavior and run tasks when the parent task fails.
-
-## Task dependencies with Batch .NET
+By default, dependent tasks are scheduled for execution only after the parent task has completed successfully. You can optionally specify a [dependency action](#dependency-actions) to override the default behavior and run the dependent task even if the parent task fails.
In this article, we discuss how to configure task dependencies by using the [Batch .NET](/dotnet/api/microsoft.azure.batch) library. We first show you how to [enable task dependency](#enable-task-dependencies) on your jobs, and then demonstrate how to [configure a task with dependencies](#create-dependent-tasks). We also describe how to specify a dependency action to run dependent tasks if the parent fails. Finally, we discuss the [dependency scenarios](#dependency-scenarios) that Batch supports.
There are three basic task dependency scenarios that you can use in Azure Batch:
> [!TIP]
> You can create **many-to-many** relationships, such as where tasks C, D, E, and F each depend on tasks A and B. This is useful, for example, in parallelized preprocessing scenarios where your downstream tasks depend on the output of multiple upstream tasks.
>
-> In the examples in this section, a dependent task runs only after the parent tasks complete successfully. This behavior is the default behavior for a dependent task. You can run a dependent task after a parent task fails by specifying a [dependency action](#dependency-actions) to override the default behavior.
+> In the examples in this section, a dependent task runs only after the parent tasks complete successfully. This behavior is the default behavior for a dependent task. You can run a dependent task after a parent task fails by specifying a [dependency action](#dependency-actions) to override the default behavior.
### One-to-one
new CloudTask("taskB", "cmd.exe /c echo taskB")
### One-to-many
-In a one-to-many relationship, a task depends on the completion of multiple parent tasks. To create the dependency, provide a collection of task IDs to the [TaskDependencies.OnIds](/dotnet/api/microsoft.azure.batch.taskdependencies.onids) static method when you populate the [CloudTask.DependsOn](/dotnet/api/microsoft.azure.batch.cloudtask.dependson) property.
+In a one-to-many relationship, a task depends on the completion of multiple parent tasks. To create the dependency, provide a collection of specific task IDs to the [TaskDependencies.OnIds](/dotnet/api/microsoft.azure.batch.taskdependencies.onids) static method when you populate the [CloudTask.DependsOn](/dotnet/api/microsoft.azure.batch.cloudtask.dependson) property.
```csharp // 'Rain' and 'Sun' don't depend on any other tasks
new CloudTask("Flowers", "cmd.exe /c echo Flowers")
}, ```
+> [!IMPORTANT]
+> Your dependent task creation will fail if the combined length of parent task IDs is greater than 64000 characters. To specify a large number of parent tasks, consider using a Task ID range instead.
+ ### Task ID range
-In a dependency on a range of parent tasks, a task depends on the completion of tasks whose IDs lie within a range.
-To create the dependency, provide the first and last task IDs in the range to the [TaskDependencies](/dotnet/api/microsoft.azure.batch.taskdependencies.onidrange) static method when you populate the [CloudTask.DependsOn](/dotnet/api/microsoft.azure.batch.cloudtask.dependson) property.
+In a dependency on a range of parent tasks, a task depends on the completion of tasks whose IDs lie within a range that you specify.
+
+To create the dependency, provide the first and last task IDs in the range to the [TaskDependencies.OnIdRange](/dotnet/api/microsoft.azure.batch.taskdependencies.onidrange) static method when you populate the [CloudTask.DependsOn](/dotnet/api/microsoft.azure.batch.cloudtask.dependson) property.
> [!IMPORTANT]
> When you use task ID ranges for your dependencies, only tasks with IDs representing integer values will be selected by the range. For example, the range `1..10` will select tasks `3` and `7`, but not `5flamingoes`.
>
-> Leading zeroes are not significant when evaluating range dependencies, so tasks with string identifiers `4`, `04` and `004` will all be *within* the range and they will all be treated as task `4`, so the first one to complete will satisfy the dependency.
+> Leading zeroes are not significant when evaluating range dependencies, so tasks with string identifiers `4`, `04`, and `004` will all be *within* the range. Since they will all be treated as task `4`, the first one to complete will satisfy the dependency.
>
-> Every task in the range must satisfy the dependency, either by completing successfully or by completing with a failure that is mapped to a [dependency action](#dependency-actions) set to **Satisfy**.
+> For the dependent task to run, every task in the range must satisfy the dependency, either by completing successfully or by completing with a failure that is mapped to a [dependency action](#dependency-actions) set to **Satisfy**.
```csharp // Tasks 1, 2, and 3 don't depend on any other tasks. Because
new CloudTask("4", "cmd.exe /c echo 4")
## Dependency actions
-By default, a dependent task or set of tasks runs only after a parent task has completed successfully. In some scenarios, you may want to run dependent tasks even if the parent task fails. You can override the default behavior by specifying a dependency action.
+By default, a dependent task or set of tasks runs only after a parent task has completed successfully. In some scenarios, you may want to run dependent tasks even if the parent task fails. You can override the default behavior by specifying a *dependency action* that indicates whether a dependent task is eligible to run.
-A dependency action specifies whether a dependent task is eligible to run, based on the success or failure of the parent task. For example, suppose that a dependent task is awaiting data from the completion of the upstream task. If the upstream task fails, the dependent task may still be able to run using older data. In this case, a dependency action can specify that the dependent task is eligible to run despite the failure of the parent task.
+For example, suppose that a dependent task is awaiting data from the completion of the upstream task. If the upstream task fails, the dependent task may still be able to run using older data. In this case, a dependency action can specify that the dependent task is eligible to run despite the failure of the parent task.
A dependency action is based on an exit condition for the parent task. You can specify a dependency action for any of the following exit conditions:
A dependency action is based on an exit condition for the parent task. You can s
- When the task exits with an exit code that falls within a range specified by the **ExitCodeRanges** property.
- The default case, if the task exits with an exit code not defined by **ExitCodes** or **ExitCodeRanges**, or if the task exits with a pre-processing error and the **PreProcessingError** property is not set, or if the task fails with a file upload error and the **FileUploadError** property is not set.
-For .NET, see the [ExitConditions](/dotnet/api/microsoft.azure.batch.exitconditions) class for more details on these conditions.
+For .NET, these conditions are defined as properties of the [ExitConditions](/dotnet/api/microsoft.azure.batch.exitconditions) class.
-To specify a dependency action in .NET, set the [ExitOptions.DependencyAction](/dotnet/api/microsoft.azure.batch.exitoptions.dependencyaction) property for the exit condition to one of the following:
+To specify a dependency action, set the [ExitOptions.DependencyAction](/dotnet/api/microsoft.azure.batch.exitoptions.dependencyaction) property for the exit condition to one of the following:
- **Satisfy**: Indicates that dependent tasks are eligible to run if the parent task exits with a specified error.
- **Block**: Indicates that dependent tasks are not eligible to run.
batch Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/best-practices.md
Typically, virtual machines in a Batch pool are accessed through public IP add
### Testing connectivity with Cloud Services configuration
-You can't use the normal "ping"/ICMP protocol with cloud services, because the ICMP protocol is not permitted through the Azure load balancer. For more information, see [Connectivity and networking for Azure Cloud Services](../cloud-services/cloud-services-connectivity-and-networking-faq.md#can-i-ping-a-cloud-service).
+You can't use the normal "ping"/ICMP protocol with cloud services, because the ICMP protocol is not permitted through the Azure load balancer. For more information, see [Connectivity and networking for Azure Cloud Services](../cloud-services/cloud-services-connectivity-and-networking-faq.yml#can-i-ping-a-cloud-service-).
## Batch node underlying dependencies
cdn Cdn Purge Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-purge-endpoint.md
na ms.devlang: na Previously updated : 05/17/2019 Last updated : 06/30/2021
This tutorial walks you through purging assets from all edge nodes of an endpoin
3. **Root domain purge**: Purge the root of the endpoint with "/" in the path.

   > [!TIP]
- > Paths must be specified for purge and must be a relative URL that fit the following [regular expression](/dotnet/standard/base-types/regular-expression-language-quick-reference). **Purge all** and **Wildcard purge** not supported by **Azure CDN from Akamai** currently.
- > > Single URL purge `@"^\/(?>(?:[a-zA-Z0-9-_.%=\(\)\u0020]+\/?)*)$";`
- > > Query string `@"^(?:\?[-\@_a-zA-Z0-9\/%:;=!,.\+'&\(\)\u0020]*)?$";`
- > > Wildcard purge `@"^\/(?:[a-zA-Z0-9-_.%=\(\)\u0020]+\/)*\*$";`.
+ > 1. Paths must be specified for purge and must be a relative URL that fits the following [regular expression](/dotnet/standard/base-types/regular-expression-language-quick-reference). **Purge all** and **Wildcard purge** are not supported by **Azure CDN from Akamai** currently.
+ >
+ > 1. Single URL purge `@"^\/(?>(?:[a-zA-Z0-9-_.%=\(\)\u0020]+\/?)*)$";`
+ > 1. Query string `@"^(?:\?[-\@_a-zA-Z0-9\/%:;=!,.\+'&\(\)\u0020]*)?$";`
+ > 1. Wildcard purge `@"^\/(?:[a-zA-Z0-9-_.%=\(\)\u0020]+\/)*\*$";`.
>
- > More **Path** textboxes will appear after you enter text to allow you to build a list of multiple assets. You can delete assets from the list by clicking the ellipsis (...) button.
+ > More **Path** textboxes will appear after you enter text to allow you to build a list of multiple assets. You can delete assets from the list by clicking the ellipsis (...) button.
>
+ > 1. In Azure CDN from Microsoft, query strings in the purge URL path are not considered. If the path to purge is provided as `/TestCDN?myname=max`, only `/TestCDN` is considered. The query string `myname=max` is omitted. Both `TestCDN?myname=max` and `TestCDN?myname=clark` will be purged.
5. Click the **Purge** button.

   ![Purge button](./media/cdn-purge-endpoint/cdn-purge-button.png)
certification How To Edit Published Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/certification/how-to-edit-published-device.md
Previously updated : 03/03/2021 Last updated : 06/30/2021 # Edit your published device
-After your device has been certified and published to the Azure Certified Device catalog, you may need to update your device details. This may be due to an update to your distributor list, changes to purchase page URLs, or updates to the hardware specifications (such as operating system version or a new component addition). Using the Azure Certified Device portal, we make it easy to update your device information without removing your product from our catalog.
+After your device has been certified and published to the Azure Certified Device catalog, you may need to update your device details. This may be due to an update to your distributor list, changes to purchase page URLs, or updates to the hardware specifications (such as operating system version or a new component addition). You may also have to update your IoT Plug and Play device model from what you originally uploaded to the model repository.
+ ## Prerequisites
+
+ - You should be signed in and have an **approved** project for your device on the [Azure Certified Device portal](https://certify.azure.com). If you don't have a certified device, you can view this [tutorial](tutorial-01-creating-your-project.md) to get started.
-## Editing your published project
-On the project summary, you should notice that your project is in read-only mode since it has already been reviewed and accepted. To make changes, you will have to request an edit to your project and have the update reapproved by the Azure Certification team.
+## Editing your published project information
+
+On the project summary, you should notice that your project is in read-only mode since it has already been reviewed and accepted. To make changes, you will have to request an edit to your project and have the update re-approved by the Azure Certification team.
1. Click the `Request Metadata Edit` button on the top of the page
On the project summary, you should notice that your project is in read-only mode
1. On the project summary page, click `Submit for review` to have your changes reapproved by the Azure Certification team.
1. After your changes have been reviewed and approved, you can then republish your changes to the catalog through the portal (See our [tutorial](./tutorial-04-publishing-your-device.md)).
+## Editing your IoT Plug and Play device model
+
+Once you have submitted your device model to the public model repository, it cannot be removed. If you update your device model and would like to re-link your certified device to the new model, you **must re-certify** your device as a new project. If you do this, please leave a note in the 'Comments for Reviewer' section so the certification team can remove your old device entry.
+ ## Next steps
+
+ You've now successfully edited your device on the Azure Certified Device catalog. You can check out your changes on the catalog, or certify another device!
certification How To Using The Components Feature https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/certification/how-to-using-the-components-feature.md
You may have questions regarding how many components to include, or what compone
| Finished Product | 1 | Customer Ready Product, Discrete | N/A |
| Finished Product with **detachable peripheral(s)** | 2 or more | Customer Ready Product, Discrete | Peripheral / Discrete or Integrated |
| Finished Product with **integrated component(s)** | 2 or more | Customer Ready Product, Discrete | Select appropriate type / Discrete or integrated |
-| Solution-Ready Dev Kit | 2 or more | Customer Ready Product, Discrete or Integrated| Select appropriate type / Discrete or integrated |
+| Solution-Ready Dev Kit | 1 or more | Customer Ready Product or Development Board, Discrete or Integrated| Select appropriate type / Discrete or integrated |
## Example component usage
cloud-services Cloud Services Application And Service Availability Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-application-and-service-availability-faq.md
- Title: Application and service availability issues FAQ
-description: This article lists the frequently asked questions about application and service availability for Microsoft Azure Cloud Services.
-- Previously updated : 10/14/2020------
-# Application and service availability issues for Azure Cloud Services (classic): Frequently asked questions (FAQs)
-
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
-
-This article includes frequently asked questions about application and service availability issues for [Microsoft Azure Cloud Services](https://azure.microsoft.com/services/cloud-services). You can also consult the [Cloud Services VM Size page](cloud-services-sizes-specs.md) for size information.
--
-## My role got recycled. Was there any update rolled out for my cloud service?
-Roughly once a month, Microsoft releases a new Guest OS version for Windows Azure PaaS VMs. The Guest OS is only one such update in the pipeline. A release can be affected by many other factors. In addition, Azure runs on hundreds of thousands of machines. Therefore, it's impossible to predict the exact date and time when your roles will reboot. We update the Guest OS Update RSS Feed with the latest information that we have, but you should consider that reported time to be an approximate value. We are aware that this is problematic for customers and are working on a plan to limit or precisely time reboots.
-
-For complete details about recent Guest OS updates, see [Azure Guest OS releases and SDK compatibility matrix](cloud-services-guestos-update-matrix.md).
-
-For helpful information on restarts and pointers to technical details of Guest and Host OS updates, see the MSDN blog post [Role Instance Restarts Due to OS Upgrades](/archive/blogs/kwill/role-instance-restarts-due-to-os-upgrades).
-
-## Why does the first request to my cloud service after the service has been idle for some time take longer than usual?
-When the Web Server receives the first request, it first recompiles the code and then processes the request. That's why the first request takes longer than the others. By default, the app pool gets shut down in cases of user inactivity. The app pool will also recycle by default every 1,740 minutes (29 hours).
-
-Internet Information Services (IIS) application pools can be periodically recycled to avoid unstable states that can lead to application crashes, hangs, or memory leaks.
-
-The following documents will help you understand and mitigate this issue:
-* [Fixing slow initial load for IIS](https://stackoverflow.com/questions/13386471/fixing-slow-initial-load-for-iis)
-* [IIS 7.5 web application first request after app-pool recycle very slow](https://stackoverflow.com/questions/13917205/iis-7-5-web-application-first-request-after-app-pool-recycle-very-slow)
-
-If you want to change the default behavior of IIS, you will need to use startup tasks, because if you manually apply changes to the Web Role instances, the changes will eventually be lost.
-
-For more information, see [How to configure and run startup tasks for a cloud service](cloud-services-startup-tasks.md).
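-
-As a rough sketch, a startup task could run PowerShell like the following to change these IIS defaults. This assumes appcmd.exe at its default location; tune the values to your own needs rather than treating this as the documented procedure:
-
-```powershell
-# Sketch of a startup-task script; assumes IIS's appcmd.exe at its default path.
-$appcmd = "$env:windir\system32\inetsrv\appcmd.exe"
-
-# Disable the idle timeout (default 20 minutes) for all application pools.
-& $appcmd set config -section:system.applicationHost/applicationPools "/applicationPoolDefaults.processModel.idleTimeout:00:00:00"
-
-# Disable the periodic recycle (default 1,740 minutes).
-& $appcmd set config -section:system.applicationHost/applicationPools "/applicationPoolDefaults.recycling.periodicRestart.time:00:00:00"
-```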
cloud-services Cloud Services Certs Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-certs-create.md
You can upload service certificates to Azure either using the Azure portal or by
Service certificates can be managed separately from your services, and may be managed by different individuals. For example, a developer may upload a service package that refers to a certificate that an IT manager has previously uploaded to Azure. An IT manager can manage and renew that certificate (changing the configuration of the service) without needing to upload a new service package. Updating without a new service package is possible because the logical name, store name, and location of the certificate is in the service definition file, while the certificate thumbprint is specified in the service configuration file. To update the certificate, it's only necessary to upload a new certificate and change the thumbprint value in the service configuration file.

>[!Note]
->The [Cloud Services FAQ - Configuration and Management](cloud-services-configuration-and-management-faq.md) article has some helpful information about certificates.
+>The [Cloud Services FAQ - Configuration and Management](cloud-services-configuration-and-management-faq.yml) article has some helpful information about certificates.
## What are management certificates? Management certificates allow you to authenticate with the classic deployment model. Many programs and tools (such as Visual Studio or the Azure SDK) use these certificates to automate configuration and deployment of various Azure services. These are not really related to cloud services.
cloud-services Cloud Services Configuration And Management Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-configuration-and-management-faq.md
- Title: Configuration and management issues FAQ
-description: This article lists the frequently asked questions about configuration and management for Microsoft Azure Cloud Services.
--- Previously updated : 10/14/2020------
-# Configuration and management issues for Azure Cloud Services (classic): Frequently asked questions (FAQs)
-
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
-
-This article includes frequently asked questions about configuration and management issues for [Microsoft Azure Cloud Services](https://azure.microsoft.com/services/cloud-services). You can also consult the [Cloud Services VM Size page](cloud-services-sizes-specs.md) for size information.
--
-**Certificates**
-
-- [Why is the certificate chain of my Cloud Service TLS/SSL certificate incomplete?](#why-is-the-certificate-chain-of-my-cloud-service-tlsssl-certificate-incomplete)
-- [What is the purpose of the "Windows Azure Tools Encryption Certificate for Extensions"?](#what-is-the-purpose-of-the-windows-azure-tools-encryption-certificate-for-extensions)
-- [How can I generate a Certificate Signing Request (CSR) without "RDP-ing" in to the instance?](#how-can-i-generate-a-certificate-signing-request-csr-without-rdp-ing-in-to-the-instance)
-- [My Cloud Service Management Certificate is expiring. How to renew it?](#my-cloud-service-management-certificate-is-expiring-how-to-renew-it)
-- [How to automate the installation of main TLS/SSL certificate(.pfx) and intermediate certificate(.p7b)?](#how-to-automate-the-installation-of-main-tlsssl-certificatepfx-and-intermediate-certificatep7b)
-- [What is the purpose of the "Microsoft Azure Service Management for MachineKey" certificate?](#what-is-the-purpose-of-the-microsoft-azure-service-management-for-machinekey-certificate)
-
-**Monitoring and logging**
-
-- [What are the upcoming Cloud Service capabilities in the Azure portal which can help manage and monitor applications?](#what-are-the-upcoming-cloud-service-capabilities-in-the-azure-portal-which-can-help-manage-and-monitor-applications)
-- [Why does IIS stop writing to the log directory?](#why-does-iis-stop-writing-to-the-log-directory)
-- [How do I enable WAD logging for Cloud Services?](#how-do-i-enable-wad-logging-for-cloud-services)
-
-**Network configuration**
-
-- [How do I set the idle timeout for Azure load balancer?](#how-do-i-set-the-idle-timeout-for-azure-load-balancer)
-- [How do I associate a static IP address to my Cloud Service?](#how-do-i-associate-a-static-ip-address-to-my-cloud-service)
-- [What are the features and capabilities that Azure basic IPS/IDS and DDOS provides?](#what-are-the-features-and-capabilities-that-azure-basic-ipsids-and-ddos-provides)
-- [How to enable HTTP/2 on Cloud Services VM?](#how-to-enable-http2-on-cloud-services-vm)
-
-**Permissions**
-
-- [Can Microsoft internal engineers remote desktop to Cloud Service instances without permission?](#can-microsoft-internal-engineers-remote-desktop-to-cloud-service-instances-without-permission)
-- [I cannot remote desktop to Cloud Service VM by using the RDP file. I get following error: An authentication error has occurred (Code: 0x80004005)](#i-cannot-remote-desktop-to-cloud-service-vm--by-using-the-rdp-file-i-get-following-error-an-authentication-error-has-occurred-code-0x80004005)
-
-**Scaling**
-
-- [I cannot scale beyond X instances](#i-cannot-scale-beyond-x-instances)
-- [How can I configure Auto-Scale based on Memory metrics?](#how-can-i-configure-auto-scale-based-on-memory-metrics)
-
-**Generic**
-
-- [How do I add `nosniff` to my website?](#how-do-i-add-nosniff-to-my-website)
-- [How do I customize IIS for a web role?](#how-do-i-customize-iis-for-a-web-role)
-- [What is the quota limit for my Cloud Service?](#what-is-the-quota-limit-for-my-cloud-service)
-- [Why does the drive on my Cloud Service VM show very little free disk space?](#why-does-the-drive-on-my-cloud-service-vm-show-very-little-free-disk-space)
-- [How can I add an Antimalware extension for my Cloud Services in an automated way?](#how-can-i-add-an-antimalware-extension-for-my-cloud-services-in-an-automated-way)
-- [How to enable Server Name Indication (SNI) for Cloud Services?](#how-to-enable-server-name-indication-sni-for-cloud-services)
-- [How can I add tags to my Azure Cloud Service?](#how-can-i-add-tags-to-my-azure-cloud-service)
-- [The Azure portal doesn't display the SDK version of my Cloud Service. How can I get that?](#the-azure-portal-doesnt-display-the-sdk-version-of-my-cloud-service-how-can-i-get-that)
-- [I want to shut down the Cloud Service for several months. How to reduce the billing cost of Cloud Service without losing the IP address?](#i-want-to-shut-down-the-cloud-service-for-several-months-how-to-reduce-the-billing-cost-of-cloud-service-without-losing-the-ip-address)
-
-
-## Certificates
-
-### Why is the certificate chain of my Cloud Service TLS/SSL certificate incomplete?
-
-We recommend that customers install the full certificate chain (leaf cert, intermediate certs, and root cert) instead of just the leaf certificate. When you install just the leaf certificate, you rely on Windows to build the certificate chain by walking the CTL. If intermittent network or DNS issues occur in Azure or Windows Update when Windows is trying to validate the certificate, the certificate may be considered invalid. By installing the full certificate chain, this problem can be avoided. The blog at [How to install a chained SSL certificate](/archive/blogs/azuredevsupport/how-to-install-a-chained-ssl-certificate) shows how to do this.
-
-### What is the purpose of the "Windows Azure Tools Encryption Certificate for Extensions"?
-
-These certificates are automatically created whenever an extension is added to the Cloud Service. Most commonly, this is the WAD extension or the RDP extension, but it could be others, such as the Antimalware or Log Collector extension. These certificates are only used for encrypting and decrypting the private configuration for the extension. The expiration date is never checked, so it doesn't matter if the certificate is expired.
-
-You can ignore these certificates. If you want to clean up the certificates, you can try deleting them all. Azure will throw an error if you try to delete a certificate that is in use.
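-
-For example, here's a minimal sketch of that cleanup, assuming the classic Azure (Service Management) PowerShell module; the service name and thumbprint are hypothetical placeholders:
-
-```powershell
-# Sketch: requires the classic Azure (Service Management) module; names are placeholders.
-Select-AzureSubscription -Current -SubscriptionName "<your subscription name>"
-
-# List all certificates uploaded to the cloud service.
-Get-AzureCertificate -ServiceName "MyCloudService"
-
-# Try to delete one by thumbprint; Azure throws an error if the certificate is still in use.
-Remove-AzureCertificate -ServiceName "MyCloudService" -ThumbprintAlgorithm "sha1" -Thumbprint "<thumbprint>"
-```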
-
-### How can I generate a Certificate Signing Request (CSR) without "RDP-ing" in to the instance?
-
-See the following guidance document:
-
-[Obtaining a certificate for use with Windows Azure Web Sites (WAWS)](https://azure.microsoft.com/blog/obtaining-a-certificate-for-use-with-windows-azure-web-sites-waws/)
-
-The CSR is just a text file. It does not have to be created from the machine where the certificate will ultimately be used. Although this document is written for an App Service, the CSR creation is generic and applies also for Cloud Services.
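-
-As a minimal sketch, you can generate a CSR on any Windows machine with the built-in certreq.exe; the subject and file names here are hypothetical:
-
-```powershell
-# Sketch: certreq.exe ships with Windows; the subject and file names are placeholders.
-$inf = @"
-[NewRequest]
-Subject = "CN=www.contoso.com"
-KeyLength = 2048
-Exportable = TRUE
-MachineKeySet = TRUE
-KeySpec = 1
-RequestType = PKCS10
-"@
-
-Set-Content -Path .\request.inf -Value $inf
-
-# Generate the CSR; submit the resulting .csr file to your certificate authority.
-certreq.exe -new .\request.inf .\request.csr
-```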
-
-### My Cloud Service Management Certificate is expiring. How to renew it?
-
-You can use following PowerShell commands to renew your Management Certificates:
-
-```powershell
-Add-AzureAccount
-Select-AzureSubscription -Current -SubscriptionName <your subscription name>
-Get-AzurePublishSettingsFile
-```
-
-The **Get-AzurePublishSettingsFile** cmdlet will create a new management certificate in **Subscription** > **Management Certificates** in the Azure portal. The name of the new certificate looks like "[YourSubscriptionName]-[CurrentDate]-credentials".
-
-### How to automate the installation of main TLS/SSL certificate(.pfx) and intermediate certificate(.p7b)?
-
-You can automate this task by using a startup script (batch/cmd/PowerShell) and registering that startup script in the service definition file. Add both the startup script and the certificate (.p7b file) to the project folder, in the same directory as the startup script.
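-
-As a rough sketch of such a startup script (the file names and password source are hypothetical; in practice the password should come from a protected setting rather than plain text):
-
-```powershell
-# Sketch of a startup-task script; file names and password handling are placeholders.
-$password = ConvertTo-SecureString -String "your-pfx-password" -Force -AsPlainText
-
-# Install the main TLS/SSL certificate (.pfx) into the local machine's Personal store.
-Import-PfxCertificate -FilePath ".\main-cert.pfx" -CertStoreLocation "Cert:\LocalMachine\My" -Password $password
-
-# Install the intermediate certificates (.p7b) into the Intermediate Certification Authorities store.
-certutil.exe -addstore CA ".\intermediate.p7b"
-```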
-
-### What is the purpose of the "Microsoft Azure Service Management for MachineKey" certificate?
-
-This certificate is used to encrypt machine keys on Azure Web Roles. To learn more, check out [this advisory](/security-updates/securityadvisories/2018/4092731).
-
-For more information, see the following articles:
-- [How to configure and run startup tasks for a Cloud Service](./cloud-services-startup-tasks.md)
-- [Common Cloud Service startup tasks](./cloud-services-startup-tasks-common.md)
-
-## Monitoring and logging
-
-### What are the upcoming Cloud Service capabilities in the Azure portal which can help manage and monitor applications?
-
-Ability to generate a new certificate for Remote Desktop Protocol (RDP) is coming soon. Alternatively, you can run this script:
-
-```powershell
-$cert = New-SelfSignedCertificate -DnsName yourdomain.cloudapp.net -CertStoreLocation "cert:\LocalMachine\My" -KeyLength 2048 -KeySpec "KeyExchange"
-$password = ConvertTo-SecureString -String "your-password" -Force -AsPlainText
-Export-PfxCertificate -Cert $cert -FilePath ".\my-cert-file.pfx" -Password $password
-```
-Ability to choose blob or local for your csdef and cscfg upload location is coming soon. Using [New-AzureDeployment](/powershell/module/servicemanagement/azure.service/new-azuredeployment), you can set each location value.
-
-Ability to monitor metrics at the instance level. Additional monitoring capabilities are available in [How to Monitor Cloud Services](cloud-services-how-to-monitor.md).
-
-### Why does IIS stop writing to the log directory?
-You have exhausted the local storage quota for writing to the log directory. To correct this, you can do one of three things:
-* Enable diagnostics for IIS and have the diagnostics periodically moved to blob storage.
-* Manually remove log files from the logging directory.
-* Increase quota limit for local resources.
-
-For more information, see the following documents:
-* [Store and view diagnostic data in Azure Storage](../storage/common/storage-introduction.md)
-* [IIS Logs stop writing in Cloud Service](/archive/blogs/cie/iis-logs-stops-writing-in-cloud-service)
-
-### How do I enable WAD logging for Cloud Services?
-You can enable Windows Azure Diagnostics (WAD) logging through the following options:
-1. [Enable from Visual Studio](/visualstudio/azure/vs-azure-tools-diagnostics-for-cloud-services-and-virtual-machines#turn-on-diagnostics-in-cloud-service-projects-before-you-deploy-them)
-2. [Enable through .NET code](./cloud-services-dotnet-diagnostics.md)
-3. [Enable through PowerShell](./cloud-services-diagnostics-powershell.md)
-
-In order to get the current WAD settings of your Cloud Service, you can use the [Get-AzureServiceDiagnosticsExtensions](./cloud-services-diagnostics-powershell.md#get-current-diagnostics-extension-configuration) PowerShell cmdlet, or you can view them in the portal on the "Cloud Services --> Extensions" blade.
--
-## Network configuration
-
-### How do I set the idle timeout for Azure load balancer?
-You can specify the timeout in your service definition (csdef) file like this:
-
-```xml
-<?xml version="1.0" encoding="utf-8"?>
-<ServiceDefinition name="mgVS2015Worker" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition" schemaVersion="2015-04.2.6">
-  <WorkerRole name="WorkerRole1" vmsize="Small">
-    <ConfigurationSettings>
-      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" />
-    </ConfigurationSettings>
-    <Imports>
-      <Import moduleName="RemoteAccess" />
-      <Import moduleName="RemoteForwarder" />
-    </Imports>
- <Endpoints>
-      <InputEndpoint name="Endpoint1" protocol="tcp" port="10100"   idleTimeoutInMinutes="30" />
-    </Endpoints>
-  </WorkerRole>
-</ServiceDefinition>
-```
-See [New: Configurable Idle Timeout for Azure Load Balancer](https://azure.microsoft.com/blog/new-configurable-idle-timeout-for-azure-load-balancer/) for more information.
-
-### How do I associate a static IP address to my Cloud Service?
-To set up a static IP address, you need to create a reserved IP. This reserved IP can be associated to a new Cloud Service or to an existing deployment. See the following documents for details:
-* [How to create a reserved IP address](/previous-versions/azure/virtual-network/virtual-networks-reserved-public-ip#manage-reserved-vips)
-* [Reserve the IP address of an existing Cloud Service](/previous-versions/azure/virtual-network/virtual-networks-reserved-public-ip#reserve-the-ip-address-of-an-existing-cloud-service)
-* [Associate a reserved IP to a new Cloud Service](/previous-versions/azure/virtual-network/virtual-networks-reserved-public-ip#associate-a-reserved-ip-to-a-new-cloud-service)
-* [Associate a reserved IP to a running deployment](/previous-versions/azure/virtual-network/virtual-networks-reserved-public-ip#associate-a-reserved-ip-to-a-running-deployment)
-* [Associate a reserved IP to a Cloud Service by using a service configuration file](/previous-versions/azure/virtual-network/virtual-networks-reserved-public-ip#associate-a-reserved-ip-to-a-cloud-service-by-using-a-service-configuration-file)
-
-### What are the features and capabilities that Azure basic IPS/IDS and DDOS provides?
-Azure has IPS/IDS in datacenter physical servers to defend against threats. In addition, customers can deploy third-party security solutions, such as web application firewalls, network firewalls, antimalware, intrusion detection, prevention systems (IDS/IPS), and more. For more information, see [Protect your data and assets and comply with global security standards](https://www.microsoft.com/en-us/trustcenter/Security/AzureSecurity).
-
-Microsoft continuously monitors servers, networks, and applications to detect threats. Azure's multipronged threat-management approach uses intrusion detection, distributed denial-of-service (DDoS) attack prevention, penetration testing, behavioral analytics, anomaly detection, and machine learning to constantly strengthen its defense and reduce risks. Microsoft Antimalware for Azure protects Azure Cloud Services and virtual machines. You have the option to deploy third-party security solutions in addition, such as web application fire walls, network firewalls, antimalware, intrusion detection and prevention systems (IDS/IPS), and more.
-
-### How to enable HTTP/2 on Cloud Services VM?
-
-Windows 10 and Windows Server 2016 come with support for HTTP/2 on both client and server side. If your client (browser) is connecting to the IIS server over TLS that negotiates HTTP/2 via TLS extensions, then you do not need to make any change on the server-side. This is because, over TLS, the h2-14 header specifying use of HTTP/2 is sent by default. If on the other hand your client is sending an Upgrade header to upgrade to HTTP/2, then you need to make the change below on the server side to ensure that the Upgrade works and you end up with an HTTP/2 connection.
-
-1. Run regedit.exe.
-2. Browse to registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HTTP\Parameters.
-3. Create a new DWORD value named **DuoEnabled**.
-4. Set its value to 1.
-5. Restart your server.
-6. Go to your **Default Web Site** and under **Bindings**, create a new TLS binding with the self-signed certificate just created.
-
-For more information, see:
-
-- [HTTP/2 on IIS](https://blogs.iis.net/davidso/http2)
-- [Video: HTTP/2 in Windows 10: Browser, Apps and Web Server](https://channel9.msdn.com/Events/Build/2015/3-88)
-
-
-These steps could be automated via a startup task, so that whenever a new PaaS instance gets created, it can do the changes above in the system registry. For more information, see [How to configure and run startup tasks for a Cloud Service](cloud-services-startup-tasks.md).
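-
-For example, a minimal sketch of the registry change from the steps above as startup-task PowerShell:
-
-```powershell
-# Sketch of a startup-task script mirroring the registry steps above; a restart is still required.
-$key = "HKLM:\SYSTEM\CurrentControlSet\Services\HTTP\Parameters"
-New-ItemProperty -Path $key -Name "DuoEnabled" -PropertyType DWord -Value 1 -Force | Out-Null
-```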
-
-
-Once this has been done, you can verify whether the HTTP/2 has been enabled or not by using one of the following methods:
-
-- Enable Protocol version in IIS logs and look into the IIS logs. It will show HTTP/2 in the logs.
-- Enable F12 Developer Tool in Internet Explorer or Microsoft Edge and switch to the Network tab to verify the protocol.
-
-For more information, see [HTTP/2 on IIS](https://blogs.iis.net/davidso/http2).
-
-## Permissions
-
-### How can I implement role-based access for Cloud Services?
-Cloud Services doesn't support the Azure role-based access control (Azure RBAC) model, as it's not an Azure Resource Manager based service.
-
-See [Understand the different roles in Azure](../role-based-access-control/rbac-and-directory-admin-roles.md).
-
-## Remote desktop
-
-### Can Microsoft internal engineers remote desktop to Cloud Service instances without permission?
-Microsoft follows a strict process that will not allow internal engineers to remote desktop into your Cloud Service without written permission (email or other written communication) from the owner or their designee.
-
-### I cannot remote desktop to Cloud Service VM by using the RDP file. I get following error: An authentication error has occurred (Code: 0x80004005)
-
-This error may occur if you use the RDP file from a machine that is joined to Azure Active Directory. To resolve this issue, follow these steps:
-
-1. Right-click the RDP file you downloaded and then select **Edit**.
-2. Add "&#92;" as prefix before the username. For example, use **.\username** instead of **username**.
-
-## Scaling
-
-### I cannot scale beyond X instances
-Your Azure Subscription has a limit on the number of cores you can use. Scaling will not work if you have used all the cores available. For example, if you have a limit of 100 cores, this means you could have 100 A1 sized virtual machine instances for your Cloud Service, or 50 A2 sized virtual machine instances.
-
-### How can I configure Auto-Scale based on Memory metrics?
-
-Auto-scale based on Memory metrics for Cloud Services is not currently supported.
-
-To work around this problem, you can use Application Insights. Auto-Scale supports Application Insights as a metrics source and can scale the role instance count based on a guest metric like "Memory". You have to configure Application Insights in your Cloud Service project package file (*.cspkg) and enable the Azure Diagnostics extension on the service to implement this feature.
-
-For more details on how to utilize a custom metric via Application Insights to configure Auto-Scale on Cloud Services, see [Get started with auto scale by custom metric in Azure](../azure-monitor/autoscale/autoscale-custom-metric.md)
-
-For more information on how to integrate Azure Diagnostics with Application Insights for Cloud Services, see [Send Cloud Service, Virtual Machine, or Service Fabric diagnostic data to Application Insights](../azure-monitor/agents/diagnostics-extension-to-application-insights.md)
-
-For more information about how to enable Application Insights for Cloud Services, see [Application Insights for Azure Cloud Services](../azure-monitor/app/cloudservices.md)
-
-For more information about how to enable Azure Diagnostics Logging for Cloud Services, see [Set up diagnostics for Azure Cloud Services and virtual machines](/visualstudio/azure/vs-azure-tools-diagnostics-for-cloud-services-and-virtual-machines#turn-on-diagnostics-in-cloud-service-projects-before-you-deploy-them)
-
-## Generic
-
-### How do I add `nosniff` to my website?
-To prevent clients from sniffing the MIME types, add a setting in your *web.config* file.
-
-```xml
-<configuration>
- <system.webServer>
- <httpProtocol>
- <customHeaders>
- <add name="X-Content-Type-Options" value="nosniff" />
- </customHeaders>
- </httpProtocol>
- </system.webServer>
-</configuration>
-```
-
-You can also add this as a setting in IIS. Use the following command with the [common startup tasks](cloud-services-startup-tasks-common.md#configure-iis-startup-with-appcmdexe) article.
-
-```cmd
-%windir%\system32\inetsrv\appcmd set config /section:httpProtocol /+customHeaders.[name='X-Content-Type-Options',value='nosniff']
-```
-
-### How do I customize IIS for a web role?
-Use the IIS startup script from the [common startup tasks](cloud-services-startup-tasks-common.md#configure-iis-startup-with-appcmdexe) article.
-
-### What is the quota limit for my Cloud Service?
-See [Service-specific limits](../azure-resource-manager/management/azure-subscription-service-limits.md#subscription-limits).
-
-### Why does the drive on my Cloud Service VM show very little free disk space?
-This is expected behavior, and it shouldn't cause any issue to your application. Journaling is turned on for the %approot% drive in Azure PaaS VMs, which essentially consumes double the amount of space that files normally take up. However, there are several things to be aware of that essentially turn this into a non-issue.
-
-The %approot% drive size is calculated as <size of .cspkg + max journal size + a margin of free space>, or 1.5 GB, whichever is larger. The size of your VM has no bearing on this calculation. (The VM size only affects the size of the temporary C: drive.)
-
-It is unsupported to write to the %approot% drive. If you are writing to the Azure VM, you must do so in a temporary LocalStorage resource (or other option, such as Blob storage, Azure Files, etc.). So the amount of free space on the %approot% folder is not meaningful. If you are not sure if your application is writing to the %approot% drive, you can always let your service run for a few days and then compare the "before" and "after" sizes.
-
-Azure will not write anything to the %approot% drive. Once the VHD is created from your `.cspkg` and mounted into the Azure VM, the only thing that might write to this drive is your application.
-
-The journal settings are non-configurable, so you can't turn journaling off.
-
-### How can I add an Antimalware extension for my Cloud Services in an automated way?
-
-You can enable the Antimalware extension by using a PowerShell script in a startup task. Follow the steps in these articles to implement it:
-
-
-- [Create a PowerShell startup task](cloud-services-startup-tasks-common.md#create-a-powershell-startup-task)
-- [Set-AzureServiceAntimalwareExtension](/powershell/module/servicemanagement/azure.service/Set-AzureServiceAntimalwareExtension)
-
-For more information about Antimalware deployment scenarios and how to enable it from the portal, see [Antimalware Deployment Scenarios](../security/fundamentals/antimalware.md#antimalware-deployment-scenarios).
-
-### How to enable Server Name Indication (SNI) for Cloud Services?
-
-You can enable SNI in Cloud Services by using one of the following methods:
-
-**Method 1: Use PowerShell**
-
-The SNI binding can be configured using the PowerShell cmdlet **New-WebBinding** in a startup task for a Cloud Service role instance as below:
-
-```powershell
-New-WebBinding -Name $WebsiteName -Protocol "https" -Port 443 -IPAddress $IPAddress -HostHeader $HostHeader -SslFlags $sslFlags
-```
-
-As described [here](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/ee790567(v=technet.10)), the $sslFlags value can be one of the following:
-
-|Value|Meaning|
-|---|---|
-|0|No SNI|
-|1|SNI Enabled|
-|2|Non SNI binding which uses Central Certificate Store|
-|3|SNI binding which uses Central Certificate store|
-
-**Method 2: Use code**
-
-The SNI binding could also be configured via code in the role startup as described on this [blog post](/archive/blogs/jianwu/expose-ssl-service-to-multi-domains-from-the-same-cloud-service):
-
-```csharp
-//<code snip>
- var serverManager = new ServerManager();
- var site = serverManager.Sites[0];
- var binding = site.Bindings.Add(":443:www.test1.com", newCert.GetCertHash(), "My");
- binding.SetAttributeValue("sslFlags", 1); //enables the SNI
- serverManager.CommitChanges();
- //</code snip>
-```
-
-With either of the approaches above, the certificates (*.pfx) for the specific hostnames must first be installed on the role instances by using a startup task or via code in order for the SNI binding to take effect.
-
-### How can I add tags to my Azure Cloud Service?
-
-Cloud Service is a Classic resource. Only resources created through Azure Resource Manager support tags. You cannot apply tags to Classic resources such as Cloud Service.
-
-### The Azure portal doesn't display the SDK version of my Cloud Service. How can I get that?
-
-We are working on bringing this feature to the Azure portal. Meanwhile, you can use the following PowerShell command to get the SDK version:
-
-```powershell
-Get-AzureService -ServiceName "<Cloud Service name>" | Get-AzureDeployment | Where-Object -Property SdkVersion -NE -Value "" | select ServiceName,SdkVersion,OSVersion,Slot
-```
-
-### I want to shut down the Cloud Service for several months. How to reduce the billing cost of Cloud Service without losing the IP address?
-
-An already deployed Cloud Service gets billed for the Compute and Storage it uses. So even if you shut down the Azure VM, you will still get billed for the Storage.
-
-Here is what you can do to reduce your billing without losing the IP address for your service:
-
-1. [Reserve the IP address](/previous-versions/azure/virtual-network/virtual-networks-reserved-public-ip) before you delete the deployments, as shown in the sketch after this list. You will only be billed for this IP address. For more information about IP address billing, see [IP addresses pricing](https://azure.microsoft.com/pricing/details/ip-addresses/).
-2. Delete the deployments. Don't delete the xxx.cloudapp.net name, so that you can use it in the future.
-3. If you want to redeploy the Cloud Service by using the same reserve IP that you reserved in your subscription, see [Reserved IP addresses for Cloud Services and Virtual Machines](https://azure.microsoft.com/blog/reserved-ip-addresses/).
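-
-As a sketch of step 1, assuming the classic Azure (Service Management) PowerShell module; the names and region are hypothetical placeholders:
-
-```powershell
-# Sketch: requires the classic Azure (Service Management) module; names are placeholders.
-# Reserve the VIP of the existing deployment before deleting it.
-New-AzureReservedIP -ReservedIPName "MyReservedIP" -Location "West US" -ServiceName "MyCloudService"
-
-# Verify the reservation is held by the subscription.
-Get-AzureReservedIP -ReservedIPName "MyReservedIP"
-```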
cloud-services Cloud Services Connectivity And Networking Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-connectivity-and-networking-faq.md
- Title: Connectivity and networking issues
-description: This article lists the frequently asked questions about connectivity and networking for Microsoft Azure Cloud Services.
-- Previously updated : 10/14/2020------
-# Connectivity and networking issues for Azure Cloud Services (classic): Frequently asked questions (FAQs)
-
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
-
-This article includes frequently asked questions about connectivity and networking issues for [Azure Cloud Services](https://azure.microsoft.com/services/cloud-services). For size information, see the [Cloud Services VM size page](cloud-services-sizes-specs.md).
--
-## I can't reserve an IP in a multi-VIP cloud service.
-First, make sure that the virtual machine instance that you try to reserve the IP for is turned on. Second, make sure that you use reserved IPs for both the staging and production deployments. *Do not* change the settings while the deployment is upgrading.
-
-## How do I use Remote Desktop when I have an NSG?
-Add rules to the NSG that allow traffic on ports **3389** and **20000**. Remote Desktop uses port **3389**. Cloud service instances are load balanced, so you can't directly control which instance to connect to. The *RemoteForwarder* and *RemoteAccess* agents manage Remote Desktop Protocol (RDP) traffic and allow the client to send an RDP cookie and specify an individual instance to connect to. The *RemoteForwarder* and *RemoteAccess* agents require port **20000** to be open, which might be blocked if you have an NSG.
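-
-As a rough sketch of those two rules, assuming the classic Azure (Service Management) PowerShell module; the NSG name and priorities are hypothetical placeholders:
-
-```powershell
-# Sketch: requires the classic Azure (Service Management) module; name and priorities are placeholders.
-$nsg = Get-AzureNetworkSecurityGroup -Name "MyNSG"
-
-# Allow RDP itself (port 3389).
-$nsg | Set-AzureNetworkSecurityRule -Name "Allow-RDP" -Type Inbound -Priority 200 -Action Allow -SourceAddressPrefix "INTERNET" -SourcePortRange "*" -DestinationAddressPrefix "*" -DestinationPortRange "3389" -Protocol TCP
-
-# Allow the RemoteForwarder/RemoteAccess agents (port 20000).
-$nsg | Set-AzureNetworkSecurityRule -Name "Allow-RemoteForwarder" -Type Inbound -Priority 201 -Action Allow -SourceAddressPrefix "INTERNET" -SourcePortRange "*" -DestinationAddressPrefix "*" -DestinationPortRange "20000" -Protocol TCP
-```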
-
-## Can I ping a cloud service?
-
-No, not by using the normal "ping"/ICMP protocol. The ICMP protocol is not permitted through the Azure load balancer.
-
-To test connectivity, we recommend that you do a port ping. While Ping.exe uses ICMP, you can use other tools, such as PSPing, Nmap, and telnet, to test connectivity to a specific TCP port.
-
-For more information, see [Use port pings instead of ICMP to test Azure VM connectivity](/archive/blogs/mast/use-port-pings-instead-of-icmp-to-test-azure-vm-connectivity).
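-
-For example, on a recent Windows machine you can port-ping an endpoint with the built-in Test-NetConnection cmdlet (the DNS name and port here are placeholders):
-
-```powershell
-# Test TCP connectivity to a specific port instead of using ICMP.
-Test-NetConnection -ComputerName "mycloudservice.cloudapp.net" -Port 80
-```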
-
-## How do I prevent receiving thousands of hits from unknown IP addresses that might indicate a malicious attack to the cloud service?
-Azure implements a multilayer network security to protect its platform services against distributed denial-of-service (DDoS) attacks. The Azure DDoS defense system is part of Azure's continuous monitoring process, which is continually improved through penetration testing. This DDoS defense system is designed to withstand not only attacks from the outside but also from other Azure tenants. For more information, see [Azure network security](https://download.microsoft.com/download/C/A/3/CA3FC5C0-ECE0-4F87-BF4B-D74064A00846/AzureNetworkSecurity_v3_Feb2015.pdf).
-
-You also can create a startup task to selectively block some specific IP addresses. For more information, see [Block a specific IP address](cloud-services-startup-tasks-common.md#block-a-specific-ip-address).
-
-## When I try to RDP to my cloud service instance, I get the message "The user account has expired."
-You might get the error message "This user account has expired" when you bypass the expiration date that is configured in your RDP settings. You can change the expiration date from the portal by following these steps:
-
-1. Sign in to the [Azure portal](https://portal.azure.com), go to your cloud service, and select the **Remote Desktop** tab.
-
-2. Select the **Production** or **Staging** deployment slot.
-
-3. Change the **Expires On** date, and then save the configuration.
-
-You now should be able to RDP to your machine.
-
-## Why is Azure Load Balancer not balancing traffic equally?
-For information about how an internal load balancer works, see [Azure Load Balancer new distribution mode](https://azure.microsoft.com/blog/azure-load-balancer-new-distribution-mode/).
-
-The distribution algorithm used is a 5-tuple (source IP, source port, destination IP, destination port, and protocol type) hash to map traffic to available servers. It provides stickiness only within a transport session. Packets in the same TCP or UDP session are directed to the same datacenter IP (DIP) instance behind the load-balanced endpoint. When the client closes and reopens the connection or starts a new session from the same source IP, the source port changes and causes the traffic to go to a different DIP endpoint.
-
-## How can I redirect incoming traffic to the default URL of my cloud service to a custom URL?
-
-The URL Rewrite module of IIS can be used to redirect traffic that comes to the default URL for the cloud service (for example, \*.cloudapp.net) to some custom name/URL. Because the URL Rewrite module is enabled on web roles by default and its rules are configured in the application's web.config, it's always available on the VM regardless of reboots/reimages. For more information, see:
-
-- [Create rewrite rules for the URL Rewrite module](/iis/extensions/url-rewrite-module/creating-rewrite-rules-for-the-url-rewrite-module)
-- [Remove a default link](https://stackoverflow.com/questions/32286487/azure-website-how-to-remove-default-link?answertab=votes#tab-top)
-
-## How can I block/disable incoming traffic to the default URL of my cloud service?
-
-You can prevent incoming traffic to the default URL/name of your cloud service (for example, \*.cloudapp.net). Set the host header to a custom DNS name (for example, www\.MyCloudService.com) under site binding configuration in the cloud service definition (*.csdef) file, as indicated:
-
-```xml
-<?xml version="1.0" encoding="utf-8"?>
-<ServiceDefinition name="AzureCloudServicesDemo" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition" schemaVersion="2015-04.2.6">
- <WebRole name="MyWebRole" vmsize="Small">
- <Sites>
- <Site name="Web">
- <Bindings>
- <Binding name="Endpoint1" endpointName="Endpoint1" hostHeader="www.MyCloudService.com" />
- </Bindings>
- </Site>
- </Sites>
- <Endpoints>
- <InputEndpoint name="Endpoint1" protocol="http" port="80" />
- </Endpoints>
- <ConfigurationSettings>
- <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" />
- </ConfigurationSettings>
- </WebRole>
-</ServiceDefinition>
-```
-
-Because this host header binding is enforced through the csdef file, the service is accessible only via the custom name "www.MyCloudService.com." All incoming requests to the "*.cloudapp.net" domain always fail. If you use a custom SLB probe or an internal load balancer in the service, blocking the default URL/name of the service might interfere with the probing behavior.
-
-## How can I make sure the public-facing IP address of a cloud service never changes?
-
-To make sure the public-facing IP address of your cloud service (also known as a VIP) never changes, so that it can be added to the allow lists of a few specific clients, we recommend that you have a reserved IP associated with it. Otherwise, the virtual IP provided by Azure is deallocated from your subscription if you delete the deployment. For a successful VIP swap operation, you need individual reserved IPs for both the production and staging slots. Without them, the swap operation fails. To reserve an IP address and associate it with your cloud service, see these articles:
-
-- [Reserve the IP address of an existing cloud service](/previous-versions/azure/virtual-network/virtual-networks-reserved-public-ip#reserve-the-ip-address-of-an-existing-cloud-service)
-- [Associate a reserved IP to a cloud service by using a service configuration file](/previous-versions/azure/virtual-network/virtual-networks-reserved-public-ip#associate-a-reserved-ip-to-a-cloud-service-by-using-a-service-configuration-file)
-
-If you have more than one instance for your roles, associating a reserved IP with your cloud service shouldn't cause any downtime. Alternatively, you can add the IP range of your Azure datacenter to an allow list. You can find all Azure IP ranges at the [Microsoft Download Center](https://www.microsoft.com/en-us/download/details.aspx?id=41653).
-
-This file contains the IP address ranges (including compute, SQL, and storage ranges) used in Azure datacenters. An updated file is posted weekly that reflects the currently deployed ranges and any upcoming changes to the IP ranges. New ranges that appear in the file aren't used in the datacenters for at least one week. Download the new .xml file every week, and perform the necessary changes on your site to correctly identify services running in Azure. Azure ExpressRoute users might note that this file is used to update the BGP advertisement of Azure space in the first week of each month.
-
-## How can I use Azure Resource Manager virtual networks with cloud services?
-
-Cloud services can't be placed in Azure Resource Manager virtual networks. Resource Manager virtual networks and classic deployment virtual networks can be connected through peering. For more information, see [Virtual network peering](../virtual-network/virtual-network-peering-overview.md).
--
-## How can I get the list of public IPs used by my Cloud Services?
-
-You can use the following PowerShell script to get the list of public IPs for Cloud Services under your subscription:
-
-```powershell
-$services = Get-AzureService | Group-Object -Property ServiceName
-
-foreach ($service in $services)
-{
- "Cloud Service '$($service.Name)'"
-
- $deployment = Get-AzureDeployment -ServiceName $service.Name
- "VIP - " + $deployment.VirtualIPs[0].Address
- "================================="
-}
-```
cloud-services Cloud Services Deployment Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-deployment-faq.md
- Title: Deployment issues for Microsoft Azure Cloud Services FAQ| Microsoft Docs
-description: This article lists the frequently asked questions about deployment for Microsoft Azure Cloud Services.
-- Previously updated : 10/14/2020------
-# Deployment issues for Azure Cloud Services (classic): Frequently asked questions (FAQs)
-
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
-This article includes frequently asked questions about deployment issues for [Microsoft Azure Cloud Services](https://azure.microsoft.com/services/cloud-services). You can also consult the [Cloud Services VM Size page](cloud-services-sizes-specs.md) for size information.
--
-## Why does deploying a cloud service to the staging slot sometimes fail with a resource allocation error if there is already an existing deployment in the production slot?
-If a cloud service has a deployment in either slot, the entire cloud service is pinned to a specific cluster. This means that if a deployment already exists in the production slot, a new staging deployment can only be allocated in the same cluster as the production slot.
-
-Allocation failures occur when the cluster where your cloud service is located does not have enough physical compute resources to satisfy your deployment request.
-
-For help with mitigating such allocation failures, see [Cloud Service allocation failure: Solutions](cloud-services-allocation-failures.md#solutions).
-
-## Why does scaling up or scaling out a cloud service deployment sometimes result in allocation failure?
-When a cloud service is deployed, it usually gets pinned to a specific cluster. This means scaling up/out an existing cloud service must allocate new instances in the same cluster. If the cluster is nearing capacity or the desired VM size/type is not available, the request may fail.
-
-For help with mitigating such allocation failures, see [Cloud Service allocation failure: Solutions](cloud-services-allocation-failures.md#solutions).
-
-## Why does deploying a cloud service into an affinity group sometimes result in allocation failure?
-A new deployment to an empty cloud service can be allocated by the fabric in any cluster in that region, unless the cloud service is pinned to an affinity group. Deployments to the same affinity group will be attempted on the same cluster. If the cluster is nearing capacity, the request may fail.
-
-For help with mitigating such allocation failures, see [Cloud Service allocation failure: Solutions](cloud-services-allocation-failures.md#solutions).
-
-## Why does changing VM size or adding a new VM to an existing cloud service sometimes result in allocation failure?
-The clusters in a datacenter may have different configurations of machine types (for example, A series, Av2 series, D series, Dv2 series, G series, H series, etc.). But not all the clusters would necessarily have all the kinds of VMs. For example, if you try to add a D series VM to a cloud service that is already deployed in an A series-only cluster, you will experience an allocation failure. This will also happen if you try to change VM SKU sizes (for example, switching from an A series to a D series).
-
-For help with mitigating such allocation failures, see [Cloud Service allocation failure: Solutions](cloud-services-allocation-failures.md#solutions).
-
-To check the sizes available in your region, see [Microsoft Azure: Products available by region](https://azure.microsoft.com/regions/services).
-
-## Why does deploying a cloud service sometimes fail due to limits/quotas/constraints on my subscription or service?
-Deployment of a cloud service may fail if the resources that are required to be allocated exceed the default or maximum quota allowed for your service at the region/datacenter level. For more information, see [Cloud Services limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-cloud-services-limits).
-
-You could also track the current usage/quota for your subscription at the portal: Azure portal => Subscriptions => \<appropriate subscription> => "Usage + quota".
-
-Resource usage/consumption-related information can also be retrieved via the Azure Billing APIs. See [Azure consumption API overview](../cost-management-billing/manage/consumption-api-overview.md).
-
-## How can I change the size of a deployed cloud service VM without redeploying it?
-You cannot change the VM size of a deployed cloud service without redeploying it. The VM size is built into the CSDEF, which can only be updated with a redeploy.
-
-For more information, see [How to update a cloud service](cloud-services-update-azure-service.md).
-
-## Why am I not able to deploy Cloud Services through Service Management APIs or PowerShell when using Azure Resource Manager Storage account? 
-
-Since the Cloud Service is a Classic resource that is not directly compatible with the Azure Resource Manager model, you can't associate it with Azure Resource Manager Storage accounts. Here are a few options:
-
-- Deploying through REST API.
-
- When you deploy through Service Management REST API, you could get around the limitation by specifying a SAS URL to the blob storage, which will work with both Classic and Azure Resource Manager Storage account. Read more about the 'PackageUrl' property [here](/previous-versions/azure/reference/ee460813(v=azure.100)).
-
-- Deploying through [Azure portal](https://portal.azure.com).
-
- This will work from the [Azure portal](https://portal.azure.com) as the call goes through a proxy/shim that allows communication between Azure Resource Manager and Classic resources. 
-
-## Why does Azure portal require me to provide a storage account for deployment?
-
-In the classic portal, the package was uploaded to the management API layer directly, and then the API layer would temporarily put the package into an internal storage account. This process causes performance and scalability problems because the API layer was not designed to be a file upload service. In the Azure portal (Resource Manager deployment model), we have bypassed the interim step of first uploading to the API layer, resulting in faster and more reliable deployments.
-
-As for the cost, it is very small and you can reuse the same storage account across all deployments. You can use the [storage cost calculator](https://azure.microsoft.com/pricing/calculator/#storage1) to determine the cost to upload the service package (CSPKG), download the CSPKG, then delete the CSPKG.
cloud-services Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/security-baseline.md
+
+ Title: Azure security baseline for Azure Cloud Services
+description: The Azure Cloud Services security baseline provides procedural guidance and resources for implementing the security recommendations specified in the Azure Security Benchmark.
+++ Last updated : 02/17/2021+++
+# Important: This content is machine generated; do not modify this topic directly. Contact mbaldwin for more information.
+++
+# Azure security baseline for Azure Cloud Services
+
+This security
+baseline applies guidance from the [Azure Security Benchmark version
+1.0](../security/benchmarks/overview-v1.md) to Microsoft Azure Cloud Services. The Azure Security Benchmark
+provides recommendations on how you can secure your cloud solutions on Azure.
+The content is grouped by the **security controls** defined by the Azure
+Security Benchmark and the related guidance applicable to Cloud Services. **Controls** not applicable to Cloud Services have been excluded.
+
+
+To see how Cloud Services completely maps to the Azure
+Security Benchmark, see the [full Cloud Services security baseline mapping
+file](https://github.com/MicrosoftDocs/SecurityBenchmarks/tree/master/Azure%20Offer%20Security%20Baselines).
+
+## Network Security
+
+*For more information, see the [Azure Security Benchmark: Network Security](../security/benchmarks/security-control-network-security.md).*
+
+### 1.1: Protect Azure resources within virtual networks
+
+**Guidance**: Create a classic Azure Virtual Network with separate public and private subnets to enforce isolation based on trusted ports and IP ranges. This virtual network and its subnets must be classic Virtual Network (classic deployment) resources, not current Azure Resource Manager resources.
+
+Allow or deny traffic using a network security group, which contains access control rules based on traffic direction, protocol, source address and port, and destination address and port. The rules of a network security group can be changed at any time, and changes are applied to all associated instances.
+
+Microsoft Azure Cloud Services (Classic) cannot be placed in Azure Resource Manager virtual networks. However, Resource Manager-based virtual networks and classic deployment-based virtual networks can be connected through peering.
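+
+As a rough sketch of creating a classic network security group and associating it with a subnet (assuming the classic Azure (Service Management) PowerShell module; all names are hypothetical placeholders):
+
+```powershell
+# Sketch: requires the classic Azure (Service Management) module; names are placeholders.
+New-AzureNetworkSecurityGroup -Name "MyClassicNSG" -Location "West US" -Label "NSG for the private subnet"
+
+# Associate the NSG with a subnet in a classic virtual network.
+Set-AzureNetworkSecurityGroupToSubnet -Name "MyClassicNSG" -VirtualNetworkName "MyClassicVNet" -SubnetName "PrivateSubnet"
+```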
+
+- [Network Security Group overview](../virtual-network/network-security-groups-overview.md)
+
+- [Virtual Network peering](./cloud-services-connectivity-and-networking-faq.yml#how-can-i-use-azure-resource-manager-virtual-networks-with-cloud-services-)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 1.2: Monitor and log the configuration and traffic of virtual networks, subnets, and NICs
+
+**Guidance**: Document your Azure Cloud Services configuration and monitor it for changes. Use the service's configuration file to specify the number of role instances to deploy for each role in the service, the values of any configuration settings, and the thumbprints for any certificates associated with a role.
+
+If the service is part of a virtual network, configuration information for the network must be provided in the service configuration file, as well as in the virtual networking configuration file. The default extension for the service configuration file is .cscfg. Note that Azure Policy is not supported for Classic deployments for configuration enforcement.
+
+Set a cloud service's configuration values in the service configuration file (.cscfg) and its definition in a service definition (.csdef) file. Use the service definition file to define the service model for an application. Define the roles that are available to a cloud service and also specify the service endpoints. Log the configuration for Azure Cloud Services with the service configuration file. Any reconfiguration can be done through the ServiceConfig.cscfg file.
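+
+As a minimal sketch of capturing that configuration for change tracking (assuming the classic Azure (Service Management) PowerShell module; the service name is a hypothetical placeholder):
+
+```powershell
+# Sketch: requires the classic Azure (Service Management) module; the service name is a placeholder.
+# Save the live .cscfg of the production slot so configuration changes can be diffed over time.
+$deployment = Get-AzureDeployment -ServiceName "MyCloudService" -Slot Production
+$deployment.Configuration | Set-Content -Path ".\MyCloudService-$(Get-Date -Format yyyyMMdd).cscfg"
+```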
+
+Monitor the optional NetworkTrafficRules element in the service definition, which restricts which roles can communicate with specified internal endpoints. Configure the NetworkTrafficRules node, an optional element in the service definition file, to specify how roles should communicate with each other. Place limits on which roles can access the internal endpoints of the specific role. Note that the service definition cannot be altered.
+
+Enable network security group flow logs and send the logs to an Azure Storage account for auditing. Send the flow logs to a Log Analytics workspace and use Traffic Analytics to provide insights into traffic patterns in your Azure tenant. Some advantages of Traffic Analytics are the ability to visualize network activity, identify hot spots and security threats, understand traffic flow patterns, and pinpoint network misconfigurations.
+
+- [Azure Resource Manager vs. classic deployment - Understand deployment models and the state of your resources](../azure-resource-manager/management/deployment-models.md)
+
+- [Cloud Services Config file](schema-cscfg-file.md)
+
+- [List of services supported by Azure Policy](/cli/azure/azure-services-the-azure-cli-can-manage)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 1.3: Protect critical web applications
+
+**Guidance**: Microsoft uses the Transport Layer Security (TLS) protocol v1.2 to protect data when it's traveling between Azure Cloud Services and customers. Microsoft datacenters negotiate a TLS connection with client systems that connect to Azure services. TLS provides strong authentication, message privacy, and integrity (enabling detection of message tampering, interception, and forgery), interoperability, algorithm flexibility, and ease of deployment and use.
+
+- [Encryption Fundamentals](../security/fundamentals/encryption-overview.md)
+
+- [Configure TLS/SSL certificates](cloud-services-configure-ssl-certificate-portal.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 1.4: Deny communications with known malicious IP addresses
+
+**Guidance**: Azure implements multilayered network security to protect its platform services against distributed denial-of-service (DDoS) attacks. Azure DDoS Protection is part of Azure's continuous monitoring process, which is continually improved through penetration testing. This DDoS protection is designed to withstand attacks not only from the outside but also from other Azure tenants.
+
+There are a few ways to block or deny communication, beyond the platform-level protection, within Azure Cloud Services:
+
+- Create a startup task to selectively block some specific IP addresses
+- Restrict access to an Azure web role to a set of specified IP addresses by modifying your IIS web.config file (see the sketch after this list)
+
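+For the web.config approach, a minimal sketch follows, again as a PowerShell here-string; the IP addresses are examples only. With allowUnlisted set to true, everything is allowed except the listed entries:
+
+```powershell
+# Example IIS ipSecurity fragment for a web role's web.config (example addresses).
+$ipRestrictions = @'
+<system.webServer>
+  <security>
+    <!-- allowUnlisted="true": all traffic is allowed except the entries below. -->
+    <ipSecurity allowUnlisted="true">
+      <add ipAddress="203.0.113.25" allowed="false" />
+      <add ipAddress="198.51.100.0" subnetMask="255.255.255.0" allowed="false" />
+    </ipSecurity>
+  </security>
+</system.webServer>
+'@
+$ipRestrictions   # merge into the web role's web.config before packaging
+```
+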
+Prevent incoming traffic to the default URL or name of your cloud service, for example, *.cloudapp.net, by setting the host header to a custom DNS name under the site binding configuration in the service definition (.csdef) file.
+
+By default, after an internal endpoint is defined, communication can flow from any role to the internal endpoint of a role without any restrictions. To restrict communication, add a NetworkTrafficRules element to the ServiceDefinition element in the service definition file.
+
+- [How can I block/disable incoming traffic to the default URL of my cloud service](./cloud-services-connectivity-and-networking-faq.yml#how-can-i-block-disable-incoming-traffic-to-the-default-url-of-my-cloud-service-)
+
+- [Azure DDoS protection](./cloud-services-connectivity-and-networking-faq.yml#how-do-i-prevent-receiving-thousands-of-hits-from-unknown-ip-addresses-that-might-indicate-a-malicious-attack-to-the-cloud-service-)
+
+- [Block a specific IP address](./cloud-services-startup-tasks-common.md#block-a-specific-ip-address)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 1.5: Record network packets
+
+**Guidance**: Use Azure Network Watcher, a network performance monitoring, diagnostic, and analytics service that allows monitoring of Azure networks. The Network Watcher Agent virtual machine extension is a requirement for capturing network traffic on demand and for other advanced functionality on Azure virtual machines. Install the Network Watcher Agent virtual machine extension and turn on network security group flow logs.
+
+Configure flow logging on a network security group. Review details on how to deploy the Network Watcher Virtual Machine extension to an existing Virtual Machine deployed through the classic deployment model.
+
+- [Configure flow logging on a network security group](../virtual-machines/extensions/network-watcher-linux.md)
+
+- [More information about configuring flow logs](/cli/azure/azure-services-the-azure-cli-can-manage)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 1.6: Deploy network-based intrusion detection/intrusion prevention systems (IDS/IPS)
+
+**Guidance**: Azure Cloud Services has no built-in IDS or IPS capability. Customers can select and deploy a supplementary network-based IDS or IPS solution from the Azure Marketplace based on their organizational requirements. When using third-party solutions, make sure to thoroughly test your selected IDS or IPS solution with Azure Cloud Services to ensure proper operation and functionality.
+
+- [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/?term=Firewall)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 1.7: Manage traffic to web applications
+
+**Guidance**: Service certificates, which are attached to Azure Cloud Services, enable secure communication to and from the service. These certificates are defined in the service's definition and are automatically deployed to the virtual machine that is running an instance of a web role. As an example, for a web role you can use a service certificate that authenticates an exposed HTTPS endpoint.
+
+To update the certificate, you only need to upload a new certificate and change the thumbprint value in the service configuration file.
+
+Use the TLS 1.2 protocol, the most commonly used method of securing data, to provide confidentiality and integrity protection.
+
+Generally, to protect web applications against attacks such as the OWASP Top 10, deploy an Azure Application Gateway with Azure Web Application Firewall enabled.
+
+- [Service Certificates](cloud-services-certs-create.md)
+
+- [Configuring TLS for an application in Azure](cloud-services-configure-ssl-certificate-portal.md)
+
+- [How to deploy Application Gateway](../application-gateway/quick-create-portal.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 1.9: Maintain standard security configurations for network devices
+
+**Guidance**: Harden your Azure Cloud Services configuration and monitor it for changes. The service configuration file specifies the number of role instances to deploy for each role in the service, the values of any configuration settings, and the thumbprints for any certificates associated with a role.
+
+If your service is part of a virtual network, the configuration information for the network must be provided in the service configuration file, as well as in the virtual networking configuration file. The default extension for the service configuration file is .cscfg.
+
+Note that Azure Policy is not supported with Azure Cloud Services for configuration enforcement.
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 1.10: Document traffic configuration rules
+
+**Guidance**: Azure network security groups can be used to filter network traffic to and from Azure resources in an Azure Virtual Network. A network security group contains security rules that allow or deny inbound network traffic to, or, outbound network traffic from, several types of Azure resources. For each rule, you can specify source and destination, port, and protocol.
+
+Use the "Description" field for individual network security group rules within Azure Cloud Services to document the rules, which allow traffic to, or from a network.
+
+- [How to filter network traffic with network security group rules](../virtual-network/tutorial-filter-network-traffic.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 1.11: Use automated tools to monitor network resource configurations and detect changes
+
+**Guidance**: Use Azure Traffic Manager's built-in endpoint monitoring and automatic endpoint failover features. They help you deliver high-availability applications, which are resilient to endpoint and Azure region failures. To configure endpoint monitoring, you must specify certain settings on your Traffic Manager profile.
+
+Gather insight from the Activity log, a platform log in Azure, into subscription-level events. It includes information such as when a resource is modified or when a virtual machine is started. View the Activity log in the Azure portal, or retrieve entries with PowerShell and the Azure CLI.
+
+Create a diagnostic setting to send the Activity log to Azure Monitor, Azure Event Hubs to forward outside of Azure, or to Azure Storage for archival. Configure Azure Monitor for notification alerts when critical resources in your Azure Cloud Services are changed.
+
+- [Azure Activity log](../azure-monitor/essentials/activity-log.md)
+
+- [Create, view, and manage activity log alerts by using Azure Monitor](../azure-monitor/alerts/alerts-activity-log.md)
+
+- [Traffic Manager Monitoring](../traffic-manager/traffic-manager-monitoring.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+## Logging and Monitoring
+
+*For more information, see the [Azure Security Benchmark: Logging and Monitoring](../security/benchmarks/security-control-logging-monitoring.md).*
+
+### 2.1: Use approved time synchronization sources
+
+**Guidance**: Microsoft maintains time sources for Azure resources for Azure Cloud Services. Customers might need to create a network rule to allow access to a time server used in their environment, over port 123 with UDP protocol.
+
+- [NTP server access](../firewall/protect-windows-virtual-desktop.md#additional-considerations)
+
+**Responsibility**: Shared
+
+**Azure Security Center monitoring**: None
+
+### 2.2: Configure central security log management
+
+**Guidance**: Consume your cloud service streaming data programmatically with Azure Event Hubs. Integrate and send all this data to Azure Sentinel to monitor and review your logs, or use a third-party SIEM. For central security log management, configure continuous export of your chosen Azure Security Center data to Azure Event Hubs and set up the appropriate connector for your SIEM. Some options for Azure Sentinel and third-party tools include:
+
+- Azure Sentinel - Use the native Security Center alerts data connector
+- Splunk - Use the Azure Monitor add-on for Splunk
+- IBM QRadar - Use a manually configured log source
+- ArcSight - Use SmartConnector
+
+Review the Azure Sentinel documentation for additional details on available connectors with Azure Sentinel.
+
+- [Connect data sources](../sentinel/connect-data-sources.md)
+
+- [Integrate with a SIEM](../security-center/continuous-export.md)
+
+- [Store diagnostic data](diagnostics-extension-to-storage.md)
+
+- [Configuring SIEM integration via Azure Event Hubs](../security-center/continuous-export.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 2.3: Enable audit logging for Azure resources
+
+**Guidance**: Configure Visual Studio to set up Azure Diagnostics for troubleshooting Azure Cloud Services. Azure Diagnostics captures system and logging data on the virtual machines, including the virtual machine instances running your Azure Cloud Services. The diagnostics data is transferred to a storage account of your choice. Turn on diagnostics in Azure Cloud Services projects before deployment.
+
+View the Change history for some events in the Activity log within Azure Monitor to audit what changes happened during an event's time period. Choose an event from the Activity log, and inspect it more deeply with the Change history (Preview) tab. Send diagnostic data to Application Insights when you publish an Azure Cloud Service from Visual Studio; you can create the Application Insights Azure resource at that time, or send the data to an existing Azure resource.
+
+Azure Cloud Services can be monitored by Application Insights for availability, performance, failures, and usage. Custom charts can be added to Application Insights so that you can see the data that matters the most. Role instance data can be collected by using the Application Insights SDK in your Azure Cloud Services project.
+
+- [Turn on diagnostics in Visual Studio before deployment](/visualstudio/azure/vs-azure-tools-diagnostics-for-cloud-services-and-virtual-machines#to-turn-on-diagnostics-in-visual-studio-before-deployment)
+
+- [View change history](../azure-monitor/essentials/activity-log.md#view-change-history)
+
+- [Application Insights for Azure Cloud service (Classic)](../azure-monitor/app/cloudservices.md)
+
+- [Set up diagnostics for Azure Cloud service (Classic) and virtual machines](/visualstudio/azure/vs-azure-tools-diagnostics-for-cloud-services-and-virtual-machines)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 2.5: Configure security log storage retention
+
+**Guidance**: You can use advanced monitoring with Azure Cloud Services, in which additional metrics are sampled and collected at intervals of 5 minutes, 1 hour, and 12 hours. The aggregated data is stored in a storage account, in tables, and is purged after 10 days. The storage account used is configured per role, and you can use different storage accounts for different roles. This is configured with a connection string in the .csdef and .cscfg files.
+
+Note that advanced monitoring involves using the Azure Diagnostics extension (the Application Insights SDK is optional) on the role you want to monitor. The diagnostics extension uses a per-role config file named diagnostics.wadcfgx to configure the diagnostics metrics monitored. The Azure Diagnostics extension collects and stores data in an Azure Storage account. These settings are configured in the .wadcfgx, .csdef, and .cscfg files.
+
+- [Introduction to Cloud Service Monitoring](cloud-services-how-to-monitor.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 2.6: Monitor and review Logs
+
+**Guidance**: Basic or advanced monitoring modes are available for Azure Cloud Services. Azure Cloud Services automatically collects basic monitoring data (CPU percentage, network in/out, and disk read/write) from a host virtual machine. View the collected monitoring data on the overview and metrics pages of a cloud service in the Azure portal.
+
+Enable diagnostics in Azure Cloud Services to collect diagnostic data like application logs, performance counters, and more, using the Azure Diagnostics extension. Enable or update the diagnostics configuration on a cloud service that is already running with the Set-AzureServiceDiagnosticsExtension cmdlet (see the sketch below), or deploy a cloud service with the diagnostics extension automatically. Optionally, install the Application Insights SDK and send performance counters to Azure Monitor.
+
+The Azure Diagnostics extension collects and stores data in an Azure Storage account. Transfer diagnostic data to the Microsoft Azure Storage Emulator or to Azure Storage, because it is not permanently stored. Once in storage, it can be viewed with one of several available tools, such as Server Explorer in Visual Studio, Microsoft Azure Storage Explorer, or Azure Management Studio. Configure the diagnostics metrics to be monitored with a per-role config file named diagnostics.wadcfgx in the diagnostics extension.
+
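+As a sketch, the following commands enable the diagnostics extension on a running cloud service using the classic (Service Management) PowerShell module; the service, role, storage, and file names are hypothetical:
+
+```powershell
+# Requires the classic Azure PowerShell module and a diagnostics public configuration XML file.
+$storageKey     = "<storage-account-key>"
+$storageContext = New-AzureStorageContext -StorageAccountName "mydiagstorage" -StorageAccountKey $storageKey
+
+Set-AzureServiceDiagnosticsExtension -ServiceName "ContosoService" -Slot "Production" `
+    -Role "WebRole1" -StorageContext $storageContext `
+    -DiagnosticsConfigurationPath "C:\diag\WebRole1.PubConfig.xml"
+```
+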
+- [Introduction to Cloud Service Monitoring](cloud-services-how-to-monitor.md)
+
+- [Integrate with a SIEM](../security-center/continuous-export.md)
+
+- [Enable diagnostics in Azure Cloud Services using PowerShell](cloud-services-diagnostics-powershell.md)
+
+- [Store and view diagnostic data in Azure Storage](./diagnostics-extension-to-storage.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 2.7: Enable alerts for anomalous activities
+
+**Guidance**: You can monitor Azure Cloud Services log data by integrating with Azure Sentinel or a third-party SIEM, and by enabling alerting for anomalous activities.
+
+- [Integrate with a SIEM](../security-center/continuous-export.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 2.8: Centralize anti-malware logging
+
+**Guidance**: Microsoft Antimalware for Azure protects Azure Cloud Services and virtual machines. You also have the option to deploy third-party security solutions, such as web application firewalls, network firewalls, antimalware, intrusion detection and prevention systems (IDS or IPS), and more.
+
+- [What are the features and capabilities that Azure basic IPS/IDS and DDOS provides](./cloud-services-configuration-and-management-faq.yml#what-are-the-features-and-capabilities-that-azure-basic-ips-ids-and-ddos-provides-)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+## Identity and Access Control
+
+*For more information, see the [Azure Security Benchmark: Identity and Access Control](../security/benchmarks/security-control-identity-access-control.md).*
+
+### 3.1: Maintain an inventory of administrative accounts
+
+**Guidance**: Microsoft recommends that you manage access to Azure resources using Azure role-based access control (Azure RBAC). Azure Cloud Services, however, does not support the Azure RBAC model because it is not an Azure Resource Manager-based service; you have to use the classic subscription administrator roles instead.
+
+By default, Account Administrator, Service Administrator, and Co-Administrator are the three classic subscription administrator roles in Azure.
+
+Classic subscription administrators have full access to the Azure subscription. They can manage resources using the Azure portal, Azure Resource Manager APIs, and the classic deployment model APIs. The account that is used to sign up for Azure is automatically set as both the Account Administrator and Service Administrator. Additional Co-Administrators can be added later.
+
+The Service Administrator and the Co-Administrators have the equivalent access of users who have been assigned the Owner role (an Azure role) at the subscription scope. Manage Co-Administrators or view the Service Administrator by using the Classic administrators tab in the Azure portal.
+
+List role assignments for the classic Service Administrator and Co-Administrators with the following PowerShell command:
+
+Get-AzRoleAssignment -IncludeClassicAdministrators
+
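+To keep a reviewable inventory, you can filter and export that output; a minimal sketch with the Az PowerShell module follows (the output file name is hypothetical):
+
+```powershell
+# Export classic administrator assignments to a CSV inventory.
+Get-AzRoleAssignment -IncludeClassicAdministrators |
+    Where-Object { $_.RoleDefinitionName -match 'Administrator' } |
+    Select-Object DisplayName, SignInName, RoleDefinitionName, Scope |
+    Export-Csv -Path ".\classic-admin-inventory.csv" -NoTypeInformation
+```
+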
+Review the differences between classic subscription administrative roles.
+
+- [Differences between three classic subscription administrative roles](../role-based-access-control/rbac-and-directory-admin-roles.md#classic-subscription-administrator-roles)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 3.3: Use dedicated administrative accounts
+
+**Guidance**: It is recommended to create standard operating procedures around the use of dedicated administrative accounts, based on available roles and the permissions required to operate and manage the Azure Cloud Services resources.
+
+- [Differences between the classic subscription administrative roles](../role-based-access-control/rbac-and-directory-admin-roles.md#classic-subscription-administrator-roles)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 3.4: Use single sign-on (SSO) with Azure Active Directory
+
+**Guidance**: Avoid managing separate identities for applications that are running on Azure Cloud Services. Implement single sign-on to avoid requiring users to manage multiple identities and credentials.
+
+- [What is single sign-on (SSO)](../active-directory/manage-apps/what-is-single-sign-on.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 3.6: Use dedicated machines (Privileged Access Workstations) for all administrative tasks
+
+**Guidance**: It is recommended to use a secure, Azure-managed workstation (also known as a Privileged Access Workstation) for administrative tasks that require elevated privileges.
+
+- [Understand secure, Azure-managed workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
+
+- [How to enable Azure Active Directory (Azure AD) multifactor authentication](../active-directory/authentication/howto-mfa-getstarted.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+## Data Protection
+
+*For more information, see the [Azure Security Benchmark: Data Protection](../security/benchmarks/security-control-data-protection.md).*
+
+### 4.1: Maintain an inventory of sensitive Information
+
+**Guidance**: Use the Azure Cloud Service REST APIs to inventory your Azure Cloud Service resources for sensitive information. Poll the deployed cloud service resources to get the configuration and .pkg resources.
+
+As an example, a few APIs are listed below:
+
+- Get Deployment - The Get Deployment operation returns configuration information, status, and system properties for a deployment.
+- Get Package - The Get Package operation retrieves a cloud service package for a deployment and stores the package files in Microsoft Azure Blob storage.
+- Get Cloud Service Properties - The Get Cloud Service Properties operation retrieves properties for the specified cloud service.
+
+Review the Azure Cloud Services REST API documentation and create a process for data protection of sensitive information, based on your organizational requirements. A scripted alternative is sketched below.
+
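+If you script against the classic deployment model from PowerShell instead of calling the REST APIs directly, a minimal enumeration sketch might look like the following (classic Service Management module; assumes Add-AzureAccount has been run):
+
+```powershell
+# Enumerate cloud services and their production deployments for review.
+Get-AzureService | ForEach-Object {
+    Get-AzureDeployment -ServiceName $_.ServiceName -Slot "Production" -ErrorAction SilentlyContinue |
+        Select-Object ServiceName, DeploymentName, Status, Url
+}
+```
+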
+- [Get Deployment](/rest/api/compute/cloudservices/rest-get-deployment)
+
+- [Get Cloud Service Properties](/rest/api/compute/cloudservices/rest-get-cloud-service-properties)
+
+- [Get Package](/rest/api/compute/cloudservices/rest-get-package)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 4.2: Isolate systems storing or processing sensitive information
+
+**Guidance**: Implement isolation using separate subscriptions and management groups for individual security domains such as environment type and data sensitivity level for Azure Cloud Services.
+
+You can also edit the "permissionLevel" attribute in the Azure Cloud Services Certificate element to specify the access permissions given to the role processes. If you want only elevated processes to be able to access the private key, specify "elevated" permission; "limitedOrElevated" permission allows all role processes to access the private key. Possible values are "limitedOrElevated" and "elevated"; the default value is "limitedOrElevated".
+
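+For illustration, a Certificate element restricted to elevated processes might look like this (the certificate name is hypothetical; the XML belongs in the .csdef file, shown here in a PowerShell here-string):
+
+```powershell
+# Illustrative .csdef fragment: only elevated role processes may access the private key.
+$certificateElement = @'
+<Certificates>
+  <Certificate name="ContosoSslCert"
+               storeLocation="LocalMachine"
+               storeName="My"
+               permissionLevel="elevated" />
+</Certificates>
+'@
+$certificateElement
+```
+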
+- [How to create additional Azure subscriptions](../cost-management-billing/manage/create-subscription.md)
+
+- [How to create management groups](../governance/management-groups/create-management-group-portal.md)
+
+- [WebRole Schema](./schema-csdef-webrole.md#Certificate)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 4.3: Monitor and block unauthorized transfer of sensitive information
+
+**Guidance**: It is recommended to use a third-party solution from Azure Marketplace in network perimeters to monitor for unauthorized transfer of sensitive information and block such transfers while alerting information security professionals.
+
+- [Understand customer data protection in Azure](../security/fundamentals/protection-customer-data.md)
+
+**Responsibility**: Shared
+
+**Azure Security Center monitoring**: None
+
+### 4.4: Encrypt all sensitive information in transit
+
+**Guidance**: Configure TLS 1.2 for Azure Cloud Services. Use the Azure portal to add the certificate to your staged Azure Cloud Services deployment, and add the certificate information to the service's CSDEF and CSCFG files. Repackage your application and update your staged deployment to use the new package.
+
+Use service certificates in Azure, which are attached to Azure Cloud Services, to enable secure communication to and from the service. Provide a certificate that can authenticate an exposed HTTPS endpoint. Define service certificates in the cloud service's service definition; they are automatically deployed to the virtual machine running an instance of your role (see the sketch below).
+
+Authenticate with the management API by using management certificates. Management certificates allow you to authenticate with the classic deployment model. Many programs and tools (such as Visual Studio or the Azure SDK) use these certificates to automate configuration and deployment of various Azure services.
+
+For additional reference, the classic deployment model API provides programmatic access to the classic deployment model functionality available through the Azure portal. Azure SDK for Python can be used to manage Azure Cloud Services and Azure Storage accounts. The Azure SDK for Python wraps the classic deployment model API, a REST API. All API operations are performed over TLS and mutually authenticated by using X.509 v3 certificates. The management service can be accessed from within a service running in Azure. It also can be accessed directly over the Internet from any application that can send an HTTPS request and receive an HTTPS response.
+
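+As a sketch, a service certificate can be uploaded to a cloud service with the classic Service Management module (the service and file names are hypothetical):
+
+```powershell
+# Upload a PFX as a service certificate for the cloud service.
+$pfxPassword = Read-Host -Prompt "PFX password"
+Add-AzureCertificate -ServiceName "ContosoService" `
+    -CertToDeploy "C:\certs\contoso-ssl.pfx" -Password $pfxPassword
+```
+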
+- [Configure TLS for an application in Azure](cloud-services-configure-ssl-certificate-portal.md)
+
+- [Use classic deployment model from Python](cloud-services-python-how-to-use-service-management.md)
+
+**Responsibility**: Shared
+
+**Azure Security Center monitoring**: None
+
+### 4.5: Use an active discovery tool to identify sensitive data
+
+**Guidance**: It is recommended to use a third-party active discovery tool to identify all sensitive information stored, processed, or transmitted by the organization's technology systems, including those located on-site, or at a remote service provider, and then update the organization's sensitive information inventory.
+
+- [Understand customer data protection in Azure](../security/fundamentals/protection-customer-data.md)
+
+**Responsibility**: Shared
+
+**Azure Security Center monitoring**: None
+
+### 4.7: Use host-based data loss prevention to enforce access control
+
+**Guidance**: Not applicable to Azure Cloud Services (classic), which does not enforce data loss prevention.
+
+It is recommended to implement a third-party tool such as an automated host-based data loss prevention solution to enforce access controls on data even when data is copied off a system.
+
+For the underlying platform which is managed by Microsoft, Microsoft treats all customer content as sensitive and goes to great lengths to guard against customer data loss and exposure. To ensure customer data in Azure remains secure, Microsoft has implemented and maintains a suite of robust data protection controls and capabilities.
+
+- [Understand customer data protection in Azure](../security/fundamentals/protection-customer-data.md)
+
+**Responsibility**: Shared
+
+**Azure Security Center monitoring**: None
+
+### 4.8: Encrypt sensitive information at rest
+
+**Guidance**: Azure Cloud Services does not support encryption at rest, because it is designed to be stateless. Azure Cloud Services supports external storage, such as Azure Storage, which is encrypted at rest by default.
+
+Application data stored on temporary disks is not encrypted. The customer is responsible for managing and encrypting this data as required.
+
+- [Understand encryption at rest in Azure](../security/fundamentals/encryption-atrest.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 4.9: Log and alert on changes to critical Azure resources
+
+**Guidance**: You can use classic metric alerts in Azure Monitor to get notified when one of the metrics applied to your critical resources crosses a threshold (see the sketch below). Classic metric alerts are an older functionality that allows alerting only on non-dimensional metrics; the newer metric alerts offer improved functionality.
+
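+A minimal sketch of a classic metric alert with the Az PowerShell module follows; the resource ID, names, and threshold are hypothetical, and classic cloud services live under the Microsoft.ClassicCompute provider:
+
+```powershell
+# Alert when average CPU stays above 80% over a five-minute window (hypothetical names).
+$targetId = "/subscriptions/<subscription-id>/resourceGroups/ContosoRG" +
+            "/providers/Microsoft.ClassicCompute/domainNames/ContosoService"
+Add-AzMetricAlertRule -Name "cpu-over-80" -Location "East US" `
+    -ResourceGroupName "ContosoRG" -TargetResourceId $targetId `
+    -MetricName "Percentage CPU" -Operator GreaterThan -Threshold 80 `
+    -WindowSize 00:05:00 -TimeAggregationOperator Average
+```
+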
+Additionally, Application Insights can monitor Azure Cloud Services apps for availability, performance, failures, and usage by combining data from Application Insights SDKs with Azure Diagnostics data from your Azure Cloud Services.
+
+- [Create, view, and manage classic metric alerts using Azure Monitor](../azure-monitor/alerts/alerts-classic-portal.md)
+
+- [Metric Alerts Overview](../azure-monitor/alerts/alerts-metric-overview.md)
+
+- [Application Insights for Azure Cloud service (Classic)](../azure-monitor/app/cloudservices.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+## Vulnerability Management
+
+*For more information, see the [Azure Security Benchmark: Vulnerability Management](../security/benchmarks/security-control-vulnerability-management.md).*
+
+### 5.2: Deploy automated operating system patch management solution
+
+**Guidance**: Note that this information relates to the Azure guest operating system for Azure Cloud Services worker and web roles (platform as a service, or PaaS). It does not apply to virtual machines (infrastructure as a service, or IaaS).
+
+By default, Azure periodically updates a customer's guest operating system to the latest supported image within the operating system family that they have specified in their service configuration (.cscfg), such as Windows Server 2016.
+
+When a customer chooses a specific operating system version for their Azure Cloud Services deployment, automatic operating system updates are disabled and patching becomes their responsibility. The customer must ensure that their role instances receive updates, or they could expose their application to security vulnerabilities.
+
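+For example, the osFamily and osVersion attributes on the ServiceConfiguration element of the .cscfg file control this behavior; setting osVersion to `*` keeps automatic guest OS updates enabled (the service name is hypothetical, shown in a PowerShell here-string):
+
+```powershell
+# Illustrative .cscfg opening element: osFamily 6 (Windows Server 2016);
+# osVersion "*" keeps automatic guest OS updates enabled.
+$serviceConfiguration = @'
+<ServiceConfiguration serviceName="ContosoService"
+    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
+    osFamily="6" osVersion="*">
+'@
+$serviceConfiguration
+```
+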
+- [Azure Guest OS](cloud-services-guestos-msrc-releases.md)
+
+- [Azure Guest OS supportability and retirement policy](cloud-services-guestos-retirement-policy.md)
+
+- [How to Configure Cloud service (Classic)](cloud-services-how-to-configure-portal.md)
+
+- [Manage Guest OS version](./cloud-services-how-to-configure-portal.md#manage-guest-os-version)
+
+**Responsibility**: Shared
+
+**Azure Security Center monitoring**: None
+
+### 5.3: Deploy an automated patch management solution for third-party software titles
+
+**Guidance**: Use a third-party patch management solution. Customers already using Configuration Manager in their environment can also use System Center Updates Publisher, which allows them to publish custom updates into Windows Server Update Services.
+
+This allows Update Management to patch machines that use Configuration Manager as their update repository with third-party software.
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 5.5: Use a risk-rating process to prioritize the remediation of discovered vulnerabilities
+
+**Guidance**: It is recommended that customers understand the scope of their risk from a DDoS attack on an ongoing basis.
+
+We suggest thinking through these scenarios:
+
+- What new publicly available Azure resources need protection?
+- Is there a single point of failure in the service?
+- How can services be isolated to limit the impact of an attack while still making services available to valid customers?
+- Are there virtual networks where DDoS Protection Standard should be enabled but isn't?
+- Are my services active/active with failover across multiple regions?
+
+Supporting documentation:
+
+- [Risk evaluation of your Azure resources](../security/fundamentals/ddos-best-practices.md#risk-evaluation-of-your-azure-resources)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+## Inventory and Asset Management
+
+*For more information, see the [Azure Security Benchmark: Inventory and Asset Management](../security/benchmarks/security-control-inventory-asset-management.md).*
+
+### 6.1: Use automated asset discovery solution
+
+**Guidance**: Not applicable to Azure Cloud Services. This recommendation is applicable to IaaS compute resources.
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 6.3: Delete unauthorized Azure resources
+
+**Guidance**: It is recommended to reconcile inventory on a regular basis and ensure unauthorized resources are deleted from the subscription in a timely manner.
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 6.4: Define and maintain an inventory of approved Azure resources
+
+**Guidance**: The customer should define approved Azure resources and approved software for compute resources.
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 6.5: Monitor for unapproved Azure resources
+
+**Guidance**: Use the Adaptive Application Control feature, available in Azure Security Center. It is an intelligent, automated, end-to-end solution from Security Center which helps you control which applications can run on your Windows and Linux, Azure and non-Azure machines. It also helps harden your machines against malware.
+
+This feature is available for both Azure and non-Azure Windows (all versions, classic, or Azure Resource Manager) and Linux machines.
+
+Security Center uses machine learning to analyze the applications running on your machines and creates an allow list from this intelligence. This capability greatly simplifies the process of configuring and maintaining application allow list policies, enabling you to:
+- Block or alert on attempts to run malicious applications, including those that might otherwise be missed by antimalware solutions.
+
+- Comply with your organization's security policy that dictates the use of only licensed software.
+- Prevent unwanted software from being used in your environment.
+- Prevent old and unsupported apps from running.
+- Block specific software tools that are not allowed in your organization.
+- Enable IT to control access to sensitive data through app usage.
+
+More details are available at the referenced links.
+
+- [Adaptive application controls](../security-center/security-center-adaptive-application.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 6.6: Monitor for unapproved software applications within compute resources
+
+**Guidance**: Use the Adaptive Application Control feature, available in Azure Security Center. It is an intelligent, automated, end-to-end solution from Security Center which helps you control which applications can run on your Windows and Linux, Azure and non-Azure machines. It also helps harden your machines against malware.
+
+This feature is available for both Azure and non-Azure Windows (all versions, classic, or Azure Resource Manager) and Linux machines.
+
+Security Center uses machine learning to analyze the applications running on your machines and creates an allow list from this intelligence. This capability greatly simplifies the process of configuring and maintaining application allow list policies, enabling you to:
+
+- Block or alert on attempts to run malicious applications, including those that might otherwise be missed by antimalware solutions.
+
+- Comply with your organization's security policy that dictates the use of only licensed software.
+
+- Prevent unwanted software from being used in your environment.
+
+- Prevent old and unsupported apps from running.
+
+- Block specific software tools that are not allowed in your organization.
+
+- Enable IT to control access to sensitive data through app usage.
+
+More details are available at the referenced links.
+
+- [Adaptive application controls](../security-center/security-center-adaptive-application.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 6.7: Remove unapproved Azure resources and software applications
+
+**Guidance**: Use the Adaptive Application Control feature, available in Azure Security Center. It is an intelligent, automated, end-to-end solution from Security Center which helps you control which applications can run on your Windows and Linux, Azure and non-Azure machines. It also helps harden your machines against malware.
+
+This feature is available for both Azure and non-Azure Windows (all versions, classic, or Azure Resource Manager) and Linux machines.
+
+Security Center uses machine learning to analyze the applications running on your machines and creates an allow list from this intelligence. This capability greatly simplifies the process of configuring and maintaining application allow list policies, enabling you to:
+
+- Block or alert on attempts to run malicious applications, including those that might otherwise be missed by antimalware solutions.
+
+- Comply with your organization's security policy that dictates the use of only licensed software.
+
+- Prevent unwanted software from being used in your environment.
+
+- Prevent old and unsupported apps from running.
+
+- Block specific software tools that are not allowed in your organization.
+
+- Enable IT to control access to sensitive data through app usage.
+
+More details are available at the referenced links.
+
+- [Adaptive application controls](../security-center/security-center-adaptive-application.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 6.8: Use only approved applications
+
+**Guidance**: Use the Adaptive Application Control feature, available in Azure Security Center. It is an intelligent, automated, end-to-end solution from Security Center which helps you control which applications can run on your Windows and Linux, Azure and non-Azure machines. It also helps harden your machines against malware.
+
+This feature is available for both Azure and non-Azure Windows (all versions, classic, or Azure Resource Manager) and Linux machines.
+
+Security Center uses machine learning to analyze the applications running on your machines and creates an allow list from this intelligence. This capability greatly simplifies the process of configuring and maintaining application allow list policies, enabling you to:
+
+- Block or alert on attempts to run malicious applications, including those that might otherwise be missed by antimalware solutions.
+
+- Comply with your organization's security policy that dictates the use of only licensed software.
+
+- Prevent unwanted software from being used in your environment.
+
+- Prevent old and unsupported apps from running.
+
+- Block specific software tools that are not allowed in your organization.
+
+- Enable IT to control access to sensitive data through app usage.
+
+More details are available at the referenced links.
+
+- [Adaptive application controls](../security-center/security-center-adaptive-application.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 6.10: Maintain an inventory of approved software titles
+
+**Guidance**: Use the Adaptive Application Control feature, available in Azure Security Center. It is an intelligent, automated, end-to-end solution from Security Center which helps you control which applications can run on your Windows and Linux, Azure and non-Azure machines. It also helps harden your machines against malware.
+
+This feature is available for both Azure and non-Azure Windows (all versions, classic, or Azure Resource Manager) and Linux machines.
+
+Security Center uses machine learning to analyze the applications running on your machines and creates an allow list from this intelligence. This capability greatly simplifies the process of configuring and maintaining application allow list policies, enabling you to:
+- Block or alert on attempts to run malicious applications, including those that might otherwise be missed by antimalware solutions.
+
+- Comply with your organization's security policy that dictates the use of only licensed software.
+- Prevent unwanted software from being used in your environment.
+- Prevent old and unsupported apps from running.
+- Block specific software tools that are not allowed in your organization.
+- Enable IT to control access to sensitive data through app usage.
+
+More details are available at the referenced links.
+
+- [Adaptive application controls](../security-center/security-center-adaptive-application.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 6.12: Limit users' ability to execute scripts in compute resources
+
+**Guidance**: Use the Adaptive Application Control feature, available in Azure Security Center. It is an intelligent, automated, end-to-end solution from Security Center which helps you control which applications can run on your Windows and Linux, Azure and non-Azure machines. It also helps harden your machines against malware.
+
+This feature is available for both Azure and non-Azure Windows (all versions, classic, or Azure Resource Manager) and Linux machines.
+
+Security Center uses machine learning to analyze the applications running on your machines and creates an allow list from this intelligence. This capability greatly simplifies the process of configuring and maintaining application allow list policies, enabling you to:
+
+- Block or alert on attempts to run malicious applications, including those that might otherwise be missed by antimalware solutions.
+
+- Comply with your organization's security policy that dictates the use of only licensed software.
+
+- Prevent unwanted software from being used in your environment.
+
+- Prevent old and unsupported apps from running.
+
+- Block specific software tools that are not allowed in your organization.
+
+- Enable IT to control access to sensitive data through app usage.
+
+More details are available at the referenced links.
+
+- [Adaptive application controls](../security-center/security-center-adaptive-application.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 6.13: Physically or logically segregate high risk applications
+
+**Guidance**: For sensitive or high-risk applications with Azure Cloud Services, implement separate subscriptions, or management groups to provide isolation.
+
+Use a network security group: create an inbound security rule, choose a service such as HTTP, choose a custom port as well, and give the rule a priority and a name. The priority affects the order in which the rules are applied: the lower the numerical value, the earlier the rule is applied. You will need to associate your network security group with a subnet or a specific network interface to isolate or segment the network traffic based on your business needs.
+
+More details are available at the referenced links.
+
+- [Tutorial - Filter network traffic with a network security group using the Azure portal](../virtual-network/tutorial-filter-network-traffic.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+## Secure Configuration
+
+*For more information, see the [Azure Security Benchmark: Secure Configuration](../security/benchmarks/security-control-secure-configuration.md).*
+
+### 7.1: Establish secure configurations for all Azure resources
+
+**Guidance**: Use the recommendations from Azure Security Center as a secure configuration baseline for your Azure Cloud Services resources.
+
+On the Azure portal, choose Security Center, then Compute & apps, and Azure Cloud Services to see the recommendations applicable to your service resources.
+
+- [Security recommendations - a reference guide](../security-center/recommendations-reference.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 7.3: Maintain secure Azure resource configurations
+
+**Guidance**: Not applicable to Azure Cloud Services, which is based on the classic deployment model. It is recommended to use a third-party solution to maintain secure Azure resource configurations.
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 7.5: Securely store configuration of Azure resources
+
+**Guidance**: The Azure Cloud Services configuration file stores the operating attributes for a resource. You can store a copy of the configuration files in a secure storage account.
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 7.7: Deploy configuration management tools for Azure resources
+
+**Guidance**: Not applicable to Azure Cloud Services. It is based on the classic deployment model and cannot be managed by Azure Resource Manager deployment-based configuration tools.
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 7.8: Deploy configuration management tools for operating systems
+
+**Guidance**: Not applicable to Azure Cloud Services. This recommendation is applicable to Infrastructure as a service (IaaS) based compute resources.
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 7.9: Implement automated configuration monitoring for Azure resources
+
+**Guidance**: Use Azure Security Center to perform baseline scans for your Azure Resources.
+
+- [How to remediate recommendations in Azure Security Center](../security-center/security-center-remediate-recommendations.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 7.10: Implement automated configuration monitoring for operating systems
+
+**Guidance**: In Azure Security Center, choose the Compute & Apps feature, and follow the recommendations for virtual machines, servers, and containers.
+
+- [Understand Azure Security Center container recommendations](../security-center/container-security.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 7.11: Manage Azure secrets securely
+
+**Guidance**: Azure Cloud Services is based on a classic deployment model and does not integrate with Azure Key Vault.
+
+You can secure secrets such as credentials used in Azure Cloud Services so that you do not have to type in a password each time. To begin, specify a plain text password and convert it to a secure string using the ConvertTo-SecureString PowerShell command. Next, convert this secure string into an encrypted standard string using ConvertFrom-SecureString. You can then save this encrypted standard string to a file using Set-Content, as sketched below.
+
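+A minimal sketch of that sequence follows; note that ConvertFrom-SecureString encrypts with keys tied to the current user and machine, so the file can only be read back in that context (the file name is an example):
+
+```powershell
+# Capture, encrypt, persist, and later restore a credential.
+$plain     = Read-Host -Prompt "Password"                       # avoid hard-coding secrets
+$secure    = ConvertTo-SecureString -String $plain -AsPlainText -Force
+$encrypted = ConvertFrom-SecureString -SecureString $secure     # DPAPI-protected string
+Set-Content -Path ".\password.txt" -Value $encrypted
+
+# Later, read the file back into a SecureString:
+$restored = Get-Content -Path ".\password.txt" | ConvertTo-SecureString
+```
+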
+Additionally, it is recommended to store the private keys for certificates used in Azure Cloud Services in secured storage.
+
+- [Configure Remote Desktop from PowerShell](./cloud-services-role-enable-remote-desktop-powershell.md#configure-remote-desktop-from-powershell)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 7.13: Eliminate unintended credential exposure
+
+**Guidance**: Secure secrets such as credentials used in Azure Cloud Services so that you do not have to type in a password each time.
+
+To begin, specify a plain text password and convert it to a secure string using the ConvertTo-SecureString PowerShell command. Next, convert this secure string into an encrypted standard string using ConvertFrom-SecureString, and save the result to a file using the Set-Content command (see the sketch in control 7.11 above).
+
+Store the private keys for certificates used in Azure Cloud Services in a secured storage location.
+
+- [Configure Remote Desktop from PowerShell](./cloud-services-role-enable-remote-desktop-powershell.md#configure-remote-desktop-from-powershell)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+## Malware Defense
+
+*For more information, see the [Azure Security Benchmark: Malware Defense](../security/benchmarks/security-control-malware-defense.md).*
+
+### 8.1: Use centrally managed antimalware software
+
+**Guidance**: Microsoft Antimalware for Azure is available for Azure Cloud Services and virtual machines. It is free, real-time protection that helps identify and remove viruses, spyware, and other malicious software, and it generates alerts when known malicious or unwanted software tries to install itself or run on your Azure systems.
+
+Use the PowerShell-based Get-AzureServiceAntimalwareConfig cmdlet to retrieve the antimalware configuration, as sketched below.
+
+Enable the Antimalware extension with a PowerShell script in the Startup Task in Azure Cloud Services.
+
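+As a sketch, with the classic (Service Management) PowerShell module and hypothetical service and file names:
+
+```powershell
+# Inspect the current antimalware configuration for a cloud service.
+Get-AzureServiceAntimalwareConfig -ServiceName "ContosoService"
+
+# Apply an antimalware configuration from an XML file.
+Set-AzureServiceAntimalwareExtension -ServiceName "ContosoService" `
+    -AntimalwareConfigFile "C:\config\antimalware-config.xml"
+```
+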
+Choose the Adaptive application control feature in Azure Security Center, an intelligent, automated, end-to-end solution. It helps harden your machines against malware and enables you to block or alert on attempts to run malicious applications, including those that might otherwise be missed by antimalware solutions.
+
+- [How can I add an Antimalware extension for my Azure Cloud Services in an automated way](./cloud-services-configuration-and-management-faq.yml#how-can-i-add-an-antimalware-extension-for-my-cloud-services-in-an-automated-way-)
+
+- [Antimalware Deployment Scenarios](../security/fundamentals/antimalware.md#antimalware-deployment-scenarios)
+
+- [Adaptive application controls](../security-center/security-center-adaptive-application.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+## Incident Response
+
+*For more information, see the [Azure Security Benchmark: Incident Response](../security/benchmarks/security-control-incident-response.md).*
+
+### 10.1: Create an incident response guide
+
+**Guidance**: Build out an incident response guide for your organization. Ensure that there are written incident response plans that define all roles of personnel as well as phases of incident handling/management from detection to post-incident review.
+
+- [How to configure Workflow Automations within Azure Security Center](../security-center/security-center-planning-and-operations-guide.md)
+
+- [Guidance on building your own security incident response process](https://msrc-blog.microsoft.com/2019/07/01/inside-the-msrc-building-your-own-security-incident-response-process)
+
+- [Microsoft Security Response Center's Anatomy of an Incident](https://msrc-blog.microsoft.com/2019/07/01/inside-the-msrc-building-your-own-security-incident-response-process)
+
+- [Customer may also leverage NIST's Computer Security Incident Handling Guide to aid in the creation of their own incident response plan](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-61r2.pdf)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 10.2: Create an incident scoring and prioritization procedure
+
+**Guidance**: Azure Security Center assigns a severity to each alert to help you prioritize which alerts should be investigated first. The severity is based on how confident Security Center is in the finding or the analytics used to issue the alert as well as the confidence level that there was malicious intent behind the activity that led to the alert.
+
+Clearly mark subscriptions (for example, production, non-production) and create a naming system to clearly identify and categorize Azure resources.
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 10.3: Test security response procedures
+
+**Guidance**: Conduct exercises to test your systems' incident response capabilities on a regular cadence. Identify weak points and gaps, and revise your plan as needed.
+
+- [Refer to NIST's publication: Guide to Test, Training, and Exercise Programs for IT Plans and Capabilities](https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-84.pdf)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 10.4: Provide security incident contact details and configure alert notifications for security incidents
+
+**Guidance**: Security incident contact information will be used by Microsoft to contact you if the Microsoft Security Response Center (MSRC) discovers that the customer's data has been accessed by an unlawful or unauthorized party. Review incidents after the fact to ensure that issues are resolved.
+
+- [How to set the Azure Security Center Security Contact](../security-center/security-center-provide-security-contact-details.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 10.5: Incorporate security alerts into your incident response system
+
+**Guidance**: Export your Azure Security Center alerts and recommendations using the Continuous Export feature. Continuous Export allows you to export alerts and recommendations either manually or in an ongoing, continuous fashion. You may use the Security Center data connector to stream the alerts to Azure Sentinel.
+
+- [How to configure continuous export](../security-center/continuous-export.md)
+
+- [How to stream alerts into Azure Sentinel](../sentinel/connect-azure-security-center.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### 10.6: Automate the response to security alerts
+
+**Guidance**: Use the Workflow Automation feature in Azure Security Center to automatically trigger responses via "Logic Apps" on security alerts and recommendations.
+
+- [How to configure Workflow Automation and Logic Apps](../security-center/workflow-automation.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+## Penetration Tests and Red Team Exercises
+
+*For more information, see the [Azure Security Benchmark: Penetration Tests and Red Team Exercises](../security/benchmarks/security-control-penetration-tests-red-team-exercises.md).*
+
+### 11.1: Conduct regular penetration testing of your Azure resources and ensure remediation of all critical security findings
+
+**Guidance**: Follow the Microsoft Cloud Penetration Testing Rules of Engagement to ensure your penetration tests are not in violation of Microsoft policies. Use Microsoft's strategy and execution of Red Teaming and live site penetration testing against Microsoft-managed cloud infrastructure, services, and applications.
+
+- [Penetration Testing Rules of Engagement](https://www.microsoft.com/msrc/pentest-rules-of-engagement?rtc=1)
+
+- [Microsoft Cloud Red Teaming](https://gallery.technet.microsoft.com/Cloud-Red-Teaming-b837392e)
+
+**Responsibility**: Shared
+
+**Azure Security Center monitoring**: None
+
+## Next steps
+
+- See the [Azure Security Benchmark V2 overview](../security/benchmarks/overview.md)
+- Learn more about [Azure security baselines](../security/benchmarks/security-baselines-overview.md)
cognitive-services Custom Speech Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/custom-speech-overview.md
This diagram highlights the pieces that make up the [Custom Speech area of the S
You need to have an Azure account and Speech service subscription before you can use the [Speech Studio](https://speech.microsoft.com/customspeech) to create a custom model. If you don't have an account and subscription, [try the Speech service for free](overview.md#try-the-speech-service-for-free).
-> [!NOTE]
-> Please be sure to create a standard (S0) subscription. Free (F0) subscriptions aren't supported.
- If you plan to train a custom model with **audio data**, pick one of the following regions that have dedicated hardware available for training. This will reduce the time it takes to train a model and allow you to use more audio for training. In these regions, the Speech service will use up to 20 hours of audio for training; in other regions it will only use up to 8 hours. * Australia East
cognitive-services Create Sas Tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/create-sas-tokens.md
Last updated 03/05/2021
-# Create SAS tokens for Document Translation processing
+# Create SAS tokens for your storage containers
In this article, you'll learn how to create shared access signature (SAS) tokens using the Azure Storage Explorer or the Azure portal. An SAS token provides secure, delegated access to resources in your Azure storage account.
cognitive-services Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/managed-identity.md
+
+ Title: Create and use managed identities
+
+description: Understand how to create and use managed identities in the Azure portal
+++++ Last updated : 07/01/2021+++
+# Create and use managed identities
+
+> [!IMPORTANT]
+>
+> Managed identity for Document Translation is currently unavailable in the global region. If you intend to use managed identities for Document Translation operations, [create your Translator resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in a non-global Azure region.
+
+## What are managed identities?
+
+Azure managed identities are service principals that create an Azure Active Directory (Azure AD) identity and specific permissions for Azure managed resources. You can use managed identities to grant access to any resource that supports Azure AD authentication. To grant access, assign a role to a managed identity using [Azure role-based access control](/azure/role-based-access-control/overview) (Azure RBAC). There is no added cost to use managed identities in Azure.
+
+Managed Identities support both privately and publicly accessible Azure blob storage accounts. For storage accounts with public access, you can opt to use a shared access signature (SAS) to grant limited access. In this article, we will examine how to manage access to translation documents in your Azure blob storage account using system-assigned managed identities.
+
+> [!NOTE]
+>
+> For all operations using an Azure blob storage account available on the public Internet, you can provide a shared access signature (**SAS**) URL with restricted rights for a limited period, and pass it in your POST requests:
+>
+> * To retrieve your SAS URL, go to your storage resource in the Azure portal and select the **Storage Explorer** tab.
+> * Navigate to your container, right-click, and select **Get shared access signature**. It's important to get the SAS for your container, not for the storage account itself.
+> * Make sure the **Read**, **Write**, **Delete**, and **List** permissions are checked, and select **Create**.
+> * Then copy the value in the **URL** section to a temporary location. It should have the form: `https://<storage account>.blob.core.windows.net/<container name>?<SAS value>`.
+
+## Prerequisites
+
+To get started, you'll need:
+
+* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+
+* A [**single-service Translator**](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) resource (not a multi-service Cognitive Services resource) assigned to a **non-global** region. For detailed steps, _see_ [Create a Cognitive Services resource using the Azure portal](/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Cwindows).
+
+* An [**Azure blob storage account**](https://ms.portal.azure.com/#create/Microsoft.StorageAccount-ARM) in the same region as your Translator resource. You'll create containers to store and organize your blob data within your storage account. If the account has a firewall, you must have the [exception for trusted Microsoft services](/azure/storage/common/storage-network-security?tabs=azure-portal#manage-exceptions) checkbox enabled.
+
+ :::image type="content" source="../media/managed-identities/allow-trusted-services-checkbox-portal-view.png" alt-text="Screenshot: allow trusted services checkbox, portal view":::
+
+* A basic understanding of [**Azure role-based access control (Azure RBAC)**](/azure/role-based-access-control/role-assignments-portal) using the Azure portal.
+
+## Managed Identity assignments
+
+There are two types of managed identities: **system-assigned** and **user-assigned**. Currently, Document Translation does not support user-assigned managed identities. A system-assigned managed identity is **enabled** directly on a service instance. It isn't enabled by default; you must go to your resource and update the identity setting. The system-assigned managed identity is tied to your resource throughout its lifecycle. If you delete your resource, the managed identity is deleted as well.
+
+In the following steps, we'll enable a system-assigned managed identity and grant your Translator resource limited access to your Azure blob storage account.
+
+## Enable a system-assigned managed identity using the Azure portal
+
+>[!IMPORTANT]
+>
+> To enable a system-assigned managed identity, you need **Microsoft.Authorization/roleAssignments/write** permissions, such as [**Owner**](/azure/role-based-access-control/built-in-roles#owner) or [**User Access Administrator**](/azure/role-based-access-control/built-in-roles#user-access-administrator). You can specify a scope at four levels: management group, subscription, resource group, or resource.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) using an account associated with your Azure subscription.
+
+1. Navigate to your **Translator** resource page in the Azure portal.
+
+1. In the left rail, select **Identity** from the **Resource Management** list:
+
+ :::image type="content" source="../media/managed-identities/resource-management-identity-tab.png" alt-text="Screenshot: resource management identity tab in the Azure portal.":::
+
+1. In the main window, toggle the **System assigned Status** tab to **On**.
+
+1. Under **Permissions** select **Azure role assignments**:
+
+ :::image type="content" source="../media/managed-identities/enable-system-assigned-managed-identity-portal.png" alt-text="Screenshot: enable system-assigned managed identity in Azure portal.":::
+
+1. In the **Azure role assignments** page that opens, choose your subscription from the drop-down menu, then select **&plus; Add role assignment**.
+
+ :::image type="content" source="../media/managed-identities/azure-role-assignments-page-portal.png" alt-text="Screenshot: Azure role assignments page in the Azure portal.":::
+
+>[!NOTE]
+>
+> If you're unable to assign a role in the Azure portal because the **Add** > **Add role assignment** option is disabled, or if you get the permissions error "you do not have permissions to add role assignment at this scope", check that you're signed in as a user with a role that has `Microsoft.Authorization/roleAssignments/write` permissions, such as [**Owner**](/azure/role-based-access-control/built-in-roles#owner) or [**User Access Administrator**](/azure/role-based-access-control/built-in-roles#user-access-administrator), at the storage scope for the storage resource.
+
+7. In the **Add role assignment** pop-up window, complete the fields as follows and select **Save**:
+
+ | Field | Value|
+ ||--|
+ |**Scope**| ***Storage***.|
+ |**Subscription**| ***The subscription associated with your storage resource***.|
+ |**Resource**| ***The name of your storage resource***.|
+ |**Role** | ***Storage Blob Data Contributor***.|
+
+ :::image type="content" source="../media/managed-identities/add-role-assignment-window.png" alt-text="Screenshot: add role assignments page in the Azure portal.":::
+
+Great! You've completed the steps to enable a system-assigned managed identity. With this identity credential, you can grant specific access rights to a single Azure service.
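+
+To see the payoff, here's a minimal sketch of a Document Translation batch request once the **Storage Blob Data Contributor** role assignment is in place, assuming the v1.0 batch endpoint; the resource name, key, and storage URLs are placeholders. Because the Translator resource now reaches storage through its system-assigned identity, the container URLs carry no SAS tokens:
+
+```python
+import requests
+
+endpoint = "https://<your-translator-resource>.cognitiveservices.azure.com"  # placeholder
+key = "<your-translator-key>"  # placeholder
+
+body = {
+    "inputs": [
+        {
+            # Plain container URLs: no "?<SAS token>" suffix is needed, because the
+            # system-assigned managed identity authorizes the storage access.
+            "source": {"sourceUrl": "https://<storage-account>.blob.core.windows.net/source"},
+            "targets": [
+                {
+                    "targetUrl": "https://<storage-account>.blob.core.windows.net/target",
+                    "language": "es",
+                }
+            ],
+        }
+    ]
+}
+
+response = requests.post(
+    f"{endpoint}/translator/text/batch/v1.0/batches",
+    headers={"Ocp-Apim-Subscription-Key": key},
+    json=body,
+)
+response.raise_for_status()
+print("Batch accepted:", response.headers.get("Operation-Location"))
+```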
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Managed identities for Azure resources frequently asked questions](/azure/active-directory/managed-identities-azure-resources/managed-identities-faq)
+
+> [!div class="nextstepaction"]
+> [Use managed identities to acquire an access token](/azure/app-service/overview-managed-identity?tabs=dotnet#obtain-tokens-for-azure-resources)
cognitive-services Concept Business Cards https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-business-cards.md
Previously updated : 04/30/2021 Last updated : 07/01/2021
The data extracted with the Business Card API can be used to perform various tas
The Business Card API also powers the [AI Builder Business Card Processing feature](/ai-builder/prebuilt-business-card). - ## Try it out To try out the Form Recognizer Business Card service, go to the online Sample UI Tool:
The prebuilt Business Card API extracts key fields from business cards and retur
| WorkPhones | array of phone numbers | Work phone number extracted from business card | ["+1 (987) 213-5674"] | +19872135674 | | OtherPhones | array of phone numbers | Other phone number extracted from business card | ["+1 (987) 213-5673"] | +19872135673 | - The Business Card API can also return all recognized text from the Business Card. This OCR output is included in the JSON response. ### Input Requirements
The Business Card API can also return all recognized text from the Business Card
**Pre-built business cards v2.1** supports the following locales: **en-us**, **en-au**, **en-ca**, **en-gb**, **en-in**
-## The Analyze Business Card operation
+## Analyze Business Card
The [Analyze Business Card](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync) operation takes an image or PDF of a business card as the input and extracts the values of interest. The call returns a response header field called `Operation-Location`. The `Operation-Location` value is a URL that contains the Result ID to be used in the next step.
The [Analyze Business Card](https://westus.dev.cognitive.microsoft.com/docs/serv
|:--|:-| |Operation-Location | `https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync` |
-## The Get Analyze Business Card Result operation
+## Get Analyze Business Card Result
The second step is to call the [Get Analyze Business Card Result](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/GetAnalyzeBusinessCardResult) operation. This operation takes as input the Result ID that was created by the Analyze Business Card operation. It returns a JSON response that contains a **status** field with the following possible values. You call this operation iteratively until it returns with the **succeeded** value. Use an interval of 3 to 5 seconds to avoid exceeding the requests per second (RPS) rate.
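
Together, the two operations form a submit-then-poll pattern. The following is a minimal Python sketch of that pattern, assuming the v2.1 REST endpoint shape; the resource endpoint, key, and image URL are placeholders, not values from this article:

```python
import time

import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"  # placeholder

# Step 1: submit the business card and capture the Operation-Location header.
submit = requests.post(
    f"{endpoint}/formrecognizer/v2.1/prebuilt/businessCard/analyze",
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"source": "https://example.com/business-card.jpg"},  # placeholder image URL
)
submit.raise_for_status()
result_url = submit.headers["Operation-Location"]  # URL containing the Result ID

# Step 2: poll Get Analyze Business Card Result every 3-5 seconds until done.
while True:
    result = requests.get(result_url, headers={"Ocp-Apim-Subscription-Key": key}).json()
    if result["status"] in ("succeeded", "failed"):
        break
    time.sleep(3)
```

The same submit-then-poll shape applies to the other prebuilt models, with only the analyze URL changing.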
See the following example of a successful JSON response (the output has been sho
```json {
- "status": "succeeded",
- "createdDateTime": "2021-05-27T02:18:35Z",
- "lastUpdatedDateTime": "2021-05-27T02:18:37Z",
- "analyzeResult": {
- "version": "2.1.0",
- "readResults": [
- {
- "page": 1,
- "angle": 0.0255,
- "width": 2592,
- "height": 4608,
- "unit": "pixel",
- "lines": [
- {
- "text": "CONTOSO",
- "boundingBox": [
- 533,
- 1570,
- 1334,
- 1570,
- 1333,
- 1721,
- 533,
- 1720
- ],
- "words": [
- {
- "text": "CONTOSO",
- "boundingBox": [
- 535,
- 1571,
- 1278,
- 1571,
- 1279,
- 1722,
- 534,
- 1719
- ],
- "confidence": 0.994
- }
- ],
- "appearance": {
- "style": {
- "name": "other",
- "confidence": 0.878
- }
- }
- },
+ "status": "succeeded",
+ "createdDateTime": "2021-05-27T02:18:35Z",
+ "lastUpdatedDateTime": "2021-05-27T02:18:37Z",
+ "analyzeResult": {
+ "version": "2.1.0",
+ "readResults": [
+ {
+ "page": 1,
+ "angle": 0.0255,
+ "width": 2592,
+ "height": 4608,
+ "unit": "pixel",
+ "lines": [
+ {
+ "text": "CONTOSO",
+ "boundingBox": [
+ 533,
+ 1570,
+ 1334,
+ 1570,
+ 1333,
+ 1721,
+ 533,
+ 1720
+ ],
+ "words": [
+ {
+ "text": "CONTOSO",
+ "boundingBox": [
+ 535,
+ 1571,
+ 1278,
+ 1571,
+ 1279,
+ 1722,
+ 534,
+ 1719
+ ],
+ "confidence": 0.994
+ }
+ ],
+ "appearance": {
+ "style": {
+ "name": "other",
+ "confidence": 0.878
+ }
+ }
+ },
... ] } ],
- "documentResults": [
- {
- "docType": "prebuilt:businesscard",
- "pageRange": [
- 1,
- 1
- ],
- "fields": {
- "Addresses": {
- "type": "array",
- "valueArray": [
- {
- "type": "string",
- "valueString": "4001 1st Ave NE Redmond, WA 98052",
- "text": "4001 1st Ave NE Redmond, WA 98052",
- "boundingBox": [
- 400,
- 2789,
- 1514,
- 2789,
- 1514,
- 2857,
- 400,
- 2857
- ],
- "page": 1,
- "confidence": 0.986,
- "elements": [
- "#/readResults/0/lines/9/words/0",
- "#/readResults/0/lines/9/words/1",
- "#/readResults/0/lines/9/words/2",
- "#/readResults/0/lines/9/words/3",
- "#/readResults/0/lines/9/words/4",
- "#/readResults/0/lines/9/words/5",
- "#/readResults/0/lines/9/words/6"
- ]
- }
- ]
- },
- "CompanyNames": {
- "type": "array",
- "valueArray": [
- {
- "type": "string",
- "valueString": "CONTOSO",
- "text": "CONTOSO",
- "boundingBox": [
- 535,
- 1571,
- 1278,
- 1571,
- 1279,
- 1722,
- 534,
- 1719
- ],
- "page": 1,
- "confidence": 0.985,
- "elements": [
- "#/readResults/0/lines/0/words/0"
- ]
- }
- ]
- },
- "ContactNames": {
- "type": "array",
- "valueArray": [
- {
- "type": "object",
- "valueObject": {
- "FirstName": {
- "type": "string",
- "valueString": "Chris",
- "text": "Chris",
- "boundingBox": [
- 1556,
- 2018,
- 1915,
- 2021,
- 1915,
- 2156,
- 1558,
- 2154
- ],
- "page": 1,
- "elements": [
- "#/readResults/0/lines/1/words/0"
- ]
- },
- "LastName": {
- "type": "string",
- "valueString": "Smith",
- "text": "Smith",
- "boundingBox": [
- 1944,
- 2021,
- 2368,
- 2016,
- 2364,
- 2156,
- 1944,
- 2156
- ],
- "page": 1,
- "elements": [
- "#/readResults/0/lines/1/words/1"
- ]
- }
- },
- "text": "Chris Smith",
- "boundingBox": [
- 1556.1,
- 2010.3,
- 2368,
- 2016,
- 2367,
- 2159.6,
- 1555.1,
- 2154
- ],
- "page": 1,
- "confidence": 0.99,
- "elements": [
- "#/readResults/0/lines/1/words/0",
- "#/readResults/0/lines/1/words/1"
- ]
- }
- ]
- },
- "Departments": {
- "type": "array",
- "valueArray": [
- {
- "type": "string",
- "valueString": "Cloud & Al Department",
- "text": "Cloud & Al Department",
- "boundingBox": [
- 1578,
- 2288.8,
- 2277,
- 2295.1,
- 2276.3,
- 2367.8,
- 1577.3,
- 2361.5
- ],
- "page": 1,
- "confidence": 0.989,
- "elements": [
- "#/readResults/0/lines/3/words/0",
- "#/readResults/0/lines/3/words/1",
- "#/readResults/0/lines/3/words/2",
- "#/readResults/0/lines/3/words/3"
- ]
- }
- ]
- },
- "Emails": {
- "type": "array",
- "valueArray": [
- {
- "type": "string",
- "valueString": "chris.smith@contoso.com",
- "text": "chris.smith@contoso.com",
- "boundingBox": [
- 1583,
- 2381,
- 2309,
- 2382,
- 2308,
- 2445,
- 1584,
- 2447
- ],
- "page": 1,
- "confidence": 0.99,
- "elements": [
- "#/readResults/0/lines/4/words/0"
- ]
- }
- ]
- },
- "Faxes": {
- "type": "array",
- "valueArray": [
- {
- "type": "phoneNumber",
- "valuePhoneNumber": "+19873126745",
- "text": "+1 (987) 312-6745",
- "boundingBox": [
- 740,
- 2703.8,
- 1273,
- 2702.1,
- 1273.2,
- 2774.1,
- 740.2,
- 2775.8
- ],
- "page": 1,
- "confidence": 0.99,
- "elements": [
- "#/readResults/0/lines/8/words/1",
- "#/readResults/0/lines/8/words/2",
- "#/readResults/0/lines/8/words/3"
- ]
- }
- ]
- },
- "JobTitles": {
- "type": "array",
- "valueArray": [
- {
- "type": "string",
- "valueString": "Senior Researcher",
- "text": "Senior Researcher",
- "boundingBox": [
- 1578,
- 2206,
- 2117,
- 2207.6,
- 2116.8,
- 2272.6,
- 1577.8,
- 2271
- ],
- "page": 1,
- "confidence": 0.99,
- "elements": [
- "#/readResults/0/lines/2/words/0",
- "#/readResults/0/lines/2/words/1"
- ]
- }
- ]
- },
- "MobilePhones": {
- "type": "array",
- "valueArray": [
- {
- "type": "phoneNumber",
- "valuePhoneNumber": "+19871234567",
- "text": "+1 (987) 123-4567",
- "boundingBox": [
- 744,
- 2529,
- 1281,
- 2529,
- 1281,
- 2603,
- 744,
- 2603
- ],
- "page": 1,
- "confidence": 0.99,
- "elements": [
- "#/readResults/0/lines/5/words/1",
- "#/readResults/0/lines/5/words/2",
- "#/readResults/0/lines/5/words/3"
- ]
- }
- ]
- },
- "Websites": {
- "type": "array",
- "valueArray": [
- {
- "type": "string",
- "valueString": "https://www.contoso.com/",
- "text": "https://www.contoso.com/",
- "boundingBox": [
- 1576,
- 2462,
- 2383,
- 2462,
- 2380,
- 2535,
- 1576,
- 2535
- ],
- "page": 1,
- "confidence": 0.99,
- "elements": [
- "#/readResults/0/lines/6/words/0"
- ]
- }
- ]
- },
- "WorkPhones": {
- "type": "array",
- "valueArray": [
- {
- "type": "phoneNumber",
- "valuePhoneNumber": "+19872135674",
- "text": "+1 (987) 213-5674",
- "boundingBox": [
- 736,
- 2617.6,
- 1267.1,
- 2618.5,
- 1267,
- 2687.5,
- 735.9,
- 2686.6
- ],
- "page": 1,
- "confidence": 0.984,
- "elements": [
- "#/readResults/0/lines/7/words/1",
- "#/readResults/0/lines/7/words/2",
- "#/readResults/0/lines/7/words/3"
- ]
- }
- ]
- }
- }
- }
- ]
- }
+ "documentResults": [
+ {
+ "docType": "prebuilt:businesscard",
+ "pageRange": [
+ 1,
+ 1
+ ],
+ "fields": {
+ "Addresses": {
+ "type": "array",
+ "valueArray": [
+ {
+ "type": "string",
+ "valueString": "4001 1st Ave NE Redmond, WA 98052",
+ "text": "4001 1st Ave NE Redmond, WA 98052",
+ "boundingBox": [
+ 400,
+ 2789,
+ 1514,
+ 2789,
+ 1514,
+ 2857,
+ 400,
+ 2857
+ ],
+ "page": 1,
+ "confidence": 0.986,
+ "elements": [
+ "#/readResults/0/lines/9/words/0",
+ "#/readResults/0/lines/9/words/1",
+ "#/readResults/0/lines/9/words/2",
+ "#/readResults/0/lines/9/words/3",
+ "#/readResults/0/lines/9/words/4",
+ "#/readResults/0/lines/9/words/5",
+ "#/readResults/0/lines/9/words/6"
+ ]
+ }
+ ]
+ },
+ "CompanyNames": {
+ "type": "array",
+ "valueArray": [
+ {
+ "type": "string",
+ "valueString": "CONTOSO",
+ "text": "CONTOSO",
+ "boundingBox": [
+ 535,
+ 1571,
+ 1278,
+ 1571,
+ 1279,
+ 1722,
+ 534,
+ 1719
+ ],
+ "page": 1,
+ "confidence": 0.985,
+ "elements": [
+ "#/readResults/0/lines/0/words/0"
+ ]
+ }
+ ]
+ },
+ "ContactNames": {
+ "type": "array",
+ "valueArray": [
+ {
+ "type": "object",
+ "valueObject": {
+ "FirstName": {
+ "type": "string",
+ "valueString": "Chris",
+ "text": "Chris",
+ "boundingBox": [
+ 1556,
+ 2018,
+ 1915,
+ 2021,
+ 1915,
+ 2156,
+ 1558,
+ 2154
+ ],
+ "page": 1,
+ "elements": [
+ "#/readResults/0/lines/1/words/0"
+ ]
+ },
+ "LastName": {
+ "type": "string",
+ "valueString": "Smith",
+ "text": "Smith",
+ "boundingBox": [
+ 1944,
+ 2021,
+ 2368,
+ 2016,
+ 2364,
+ 2156,
+ 1944,
+ 2156
+ ],
+ "page": 1,
+ "elements": [
+ "#/readResults/0/lines/1/words/1"
+ ]
+ }
+ },
+ "text": "Chris Smith",
+ "boundingBox": [
+ 1556.1,
+ 2010.3,
+ 2368,
+ 2016,
+ 2367,
+ 2159.6,
+ 1555.1,
+ 2154
+ ],
+ "page": 1,
+ "confidence": 0.99,
+ "elements": [
+ "#/readResults/0/lines/1/words/0",
+ "#/readResults/0/lines/1/words/1"
+ ]
+ }
+ ]
+ },
+ "Departments": {
+ "type": "array",
+ "valueArray": [
+ {
+ "type": "string",
+ "valueString": "Cloud & Al Department",
+ "text": "Cloud & Al Department",
+ "boundingBox": [
+ 1578,
+ 2288.8,
+ 2277,
+ 2295.1,
+ 2276.3,
+ 2367.8,
+ 1577.3,
+ 2361.5
+ ],
+ "page": 1,
+ "confidence": 0.989,
+ "elements": [
+ "#/readResults/0/lines/3/words/0",
+ "#/readResults/0/lines/3/words/1",
+ "#/readResults/0/lines/3/words/2",
+ "#/readResults/0/lines/3/words/3"
+ ]
+ }
+ ]
+ },
+ "Emails": {
+ "type": "array",
+ "valueArray": [
+ {
+ "type": "string",
+ "valueString": "chris.smith@contoso.com",
+ "text": "chris.smith@contoso.com",
+ "boundingBox": [
+ 1583,
+ 2381,
+ 2309,
+ 2382,
+ 2308,
+ 2445,
+ 1584,
+ 2447
+ ],
+ "page": 1,
+ "confidence": 0.99,
+ "elements": [
+ "#/readResults/0/lines/4/words/0"
+ ]
+ }
+ ]
+ },
+ "Faxes": {
+ "type": "array",
+ "valueArray": [
+ {
+ "type": "phoneNumber",
+ "valuePhoneNumber": "+19873126745",
+ "text": "+1 (987) 312-6745",
+ "boundingBox": [
+ 740,
+ 2703.8,
+ 1273,
+ 2702.1,
+ 1273.2,
+ 2774.1,
+ 740.2,
+ 2775.8
+ ],
+ "page": 1,
+ "confidence": 0.99,
+ "elements": [
+ "#/readResults/0/lines/8/words/1",
+ "#/readResults/0/lines/8/words/2",
+ "#/readResults/0/lines/8/words/3"
+ ]
+ }
+ ]
+ },
+ "JobTitles": {
+ "type": "array",
+ "valueArray": [
+ {
+ "type": "string",
+ "valueString": "Senior Researcher",
+ "text": "Senior Researcher",
+ "boundingBox": [
+ 1578,
+ 2206,
+ 2117,
+ 2207.6,
+ 2116.8,
+ 2272.6,
+ 1577.8,
+ 2271
+ ],
+ "page": 1,
+ "confidence": 0.99,
+ "elements": [
+ "#/readResults/0/lines/2/words/0",
+ "#/readResults/0/lines/2/words/1"
+ ]
+ }
+ ]
+ },
+ "MobilePhones": {
+ "type": "array",
+ "valueArray": [
+ {
+ "type": "phoneNumber",
+ "valuePhoneNumber": "+19871234567",
+ "text": "+1 (987) 123-4567",
+ "boundingBox": [
+ 744,
+ 2529,
+ 1281,
+ 2529,
+ 1281,
+ 2603,
+ 744,
+ 2603
+ ],
+ "page": 1,
+ "confidence": 0.99,
+ "elements": [
+ "#/readResults/0/lines/5/words/1",
+ "#/readResults/0/lines/5/words/2",
+ "#/readResults/0/lines/5/words/3"
+ ]
+ }
+ ]
+ },
+ "Websites": {
+ "type": "array",
+ "valueArray": [
+ {
+ "type": "string",
+ "valueString": "https://www.contoso.com/",
+ "text": "https://www.contoso.com/",
+ "boundingBox": [
+ 1576,
+ 2462,
+ 2383,
+ 2462,
+ 2380,
+ 2535,
+ 1576,
+ 2535
+ ],
+ "page": 1,
+ "confidence": 0.99,
+ "elements": [
+ "#/readResults/0/lines/6/words/0"
+ ]
+ }
+ ]
+ },
+ "WorkPhones": {
+ "type": "array",
+ "valueArray": [
+ {
+ "type": "phoneNumber",
+ "valuePhoneNumber": "+19872135674",
+ "text": "+1 (987) 213-5674",
+ "boundingBox": [
+ 736,
+ 2617.6,
+ 1267.1,
+ 2618.5,
+ 1267,
+ 2687.5,
+ 735.9,
+ 2686.6
+ ],
+ "page": 1,
+ "confidence": 0.984,
+ "elements": [
+ "#/readResults/0/lines/7/words/1",
+ "#/readResults/0/lines/7/words/2",
+ "#/readResults/0/lines/7/words/3"
+ ]
+ }
+ ]
+ }
+ }
+ }
+ ]
+ }
} ```
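
When the response above reaches **succeeded**, the extracted fields live under `analyzeResult.documentResults[0].fields`. A small sketch of reading them, assuming `result` holds the parsed JSON from the polling step:

```python
fields = result["analyzeResult"]["documentResults"][0]["fields"]

def first_value(field):
    """Return the first recognized entry of an array-typed business card field."""
    item = field["valueArray"][0]
    # String fields carry valueString; phone fields carry valuePhoneNumber.
    return item.get("valueString") or item.get("valuePhoneNumber") or item.get("text")

print("Company:   ", first_value(fields["CompanyNames"]))
print("Email:     ", first_value(fields["Emails"]))
print("Work phone:", first_value(fields["WorkPhones"]))
```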
cognitive-services Concept Identification Cards https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-identification-cards.md
Previously updated : 04/30/2021 Last updated : 07/01/2021
The IDs API also returns the following information:
> > Currently supported ID types include worldwide passports and U.S. driver's licenses. We're actively working to expand our ID support to other identity documents around the world.
-## The Analyze ID Document operation
+## Analyze ID Document
The [Analyze ID](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/5f74a7daad1f2612c46f5822) operation takes an image or PDF of an ID as the input and extracts the values of interest. The call returns a response header field called `Operation-Location`. The `Operation-Location` value is a URL that contains the Result ID to be used in the next step.
The [Analyze ID](https://westus.dev.cognitive.microsoft.com/docs/services/form-r
|:--|:-| |Operation-Location | `https://cognitiveservice/formrecognizer/v2.1/prebuilt/idDocument/analyzeResults/49a36324-fc4b-4387-aa06-090cfbf0064f` |
-## The Get Analyze ID Document Result operation
+## Get Analyze ID Document Result
cognitive-services Concept Invoices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-invoices.md
Previously updated : 04/30/2021 Last updated : 07/01/2021
You will need an Azure subscription ([create one for free](https://azure.microso
**Pre-built invoice v2.1** supports invoices in the **en-us** locale.
-## The Analyze Invoice operation
+## Analyze Invoice
The [Analyze Invoice](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/5ed8c9843c2794cbb1a96291) operation takes an image or PDF of an invoice as the input and extracts the values of interest. The call returns a response header field called `Operation-Location`. The `Operation-Location` value is a URL that contains the Result ID to be used in the next step.
The [Analyze Invoice](https://westus.dev.cognitive.microsoft.com/docs/services/f
|:--|:-| |Operation-Location | `https://cognitiveservice/formrecognizer/v2.1/prebuilt/invoice/analyzeResults/49a36324-fc4b-4387-aa06-090cfbf0064f` |
-## The Get Analyze Invoice Result operation
+## Get Analyze Invoice Result
The second step is to call the [Get Analyze Invoice Result](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/5ed8c9acb78c40a2533aee83) operation. This operation takes as input the Result ID that was created by the Analyze Invoice operation. It returns a JSON response that contains a **status** field with the following possible values. You call this operation iteratively until it returns with the **succeeded** value. Use an interval of 3 to 5 seconds to avoid exceeding the requests per second (RPS) rate.
cognitive-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-layout.md
Previously updated : 05/12/2021 Last updated : 07/01/2021
You will need an Azure subscription ([create one for free](https://azure.microso
[!INCLUDE [input requirements](./includes/input-requirements-receipts.md)]
-## The Analyze Layout operation
+## Analyze Layout
First, call the [Analyze Layout](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeLayoutAsync) operation. Analyze Layout takes a document (image, TIFF, or PDF file) as the input and extracts the text, tables, selection marks, and structure of the document. The call returns a response header field called `Operation-Location`. The `Operation-Location` value is a URL that contains the Result ID to be used in the next step.
First, call the [Analyze Layout](https://westcentralus.dev.cognitive.microsoft.c
|:--|:-| |Operation-Location | `https://cognitiveservice/formrecognizer/v2.1/layout/analyzeResults/{resultId}` |
-## The Get Analyze Layout Result operation
+## Get Analyze Layout Result
The second step is to call the [Get Analyze Layout Result](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/GetAnalyzeLayoutResult) operation. This operation takes as input the Result ID that was created by the Analyze Layout operation. It returns a JSON response that contains a **status** field with the following possible values.
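
In the layout result, reconstructed tables appear under `analyzeResult.pageResults`, with each cell carrying its row and column index. A rough sketch of rebuilding a table grid, assuming `result` holds the parsed JSON of a **succeeded** response:

```python
# Rebuild each detected table as a 2-D grid of cell text.
for page in result["analyzeResult"]["pageResults"]:
    for table in page.get("tables", []):
        grid = [["" for _ in range(table["columns"])] for _ in range(table["rows"])]
        for cell in table["cells"]:
            grid[cell["rowIndex"]][cell["columnIndex"]] = cell["text"]
        for row in grid:
            print(" | ".join(row))
```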
cognitive-services Concept Receipts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-receipts.md
Previously updated : 04/30/2021 Last updated : 07/01/2021
The Receipt API also returns the following information:
> > Prebuilt Receipt v2.1 has an optional request parameter to specify a receipt locale from additional English markets. For sales receipts in English from Australia (en-au), Canada (en-ca), Great Britain (en-gb), and India (en-in), you can specify the locale to get improved results. If no locale is specified in v2.1, the model will automatically detect the locale.
-## The Analyze Receipt operation
+## Analyze Receipt
The [Analyze Receipt](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeReceiptAsync) operation takes an image or PDF of a receipt as the input and extracts the values of interest and text. The call returns a response header field called `Operation-Location`. The `Operation-Location` value is a URL that contains the Result ID to be used in the next step.
The [Analyze Receipt](https://westus.dev.cognitive.microsoft.com/docs/services/f
|:--|:-| |Operation-Location | `https://cognitiveservice/formrecognizer/v2.0/prebuilt/receipt/analyzeResults/56a36454-fc4d-4354-aa07-880cfbf0064f` |
-## The Get Analyze Receipt Result operation
+## Get Analyze Receipt Result
The second step is to call the [Get Analyze Receipt Result](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/GetAnalyzeReceiptResult) operation. This operation takes as input the Result ID that was created by the Analyze Receipt operation. It returns a JSON response that contains a **status** field with the following possible values. You call this operation iteratively until it returns with the **succeeded** value. Use an interval of 3 to 5 seconds to avoid exceeding the requests per second (RPS) rate.
cognitive-services Form Recognizer Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/containers/form-recognizer-container-configuration.md
Title: How to configure a container for Form Recognizer
+ Title: Configure Form Recognizer containers
description: Learn how to configure the Form Recognizer container to parse form and table data.
Previously updated : 06/23/2021 Last updated : 07/01/2021 # Configure Form Recognizer containers
> > Form Recognizer containers are in gated preview. To use them, you must submit an [online request](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUNlpBU1lFSjJUMFhKNzVHUUVLN1NIOEZETiQlQCN0PWcu), and have it approved. See [**Request approval to run container**](form-recognizer-container-install-run.md#request-approval-to-run-the-container) below for more information.
-With Azure Form Recognizer containers, you can build an application architecture that's optimized to take advantage of both robust cloud capabilities and edge locality. Containers provide a minimalist, virtually isolated environment that can be easily deployed on-premises and in the cloud. In this article, you will learn to configure the Form Recognizer container run-time environment by using the `docker compose` command arguments. Form Recognizer features are supported by seven Form Recognizer containers: **Layout**, **Business Card**, **ID Document**, **Receipt**, **Invoice**, **Custom API**, and **Custom Supervised**, plus the **Read** OCR container. These containers have several required settings and a few optional settings. For a few examples, see the [Example docker-compose.yml file](#example-docker-composeyml-file) section.
+With Azure Form Recognizer containers, you can build an application architecture that's optimized to take advantage of both robust cloud capabilities and edge locality. Containers provide a minimalist, virtually isolated environment that can be easily deployed on-premises and in the cloud. In this article, you will learn to configure the Form Recognizer container run-time environment by using the `docker compose` command arguments. Form Recognizer features are supported by six Form Recognizer feature containers: **Layout**, **Business Card**, **ID Document**, **Receipt**, **Invoice**, and **Custom**. These containers have several required settings and a few optional settings. For a few examples, see the [Example docker-compose.yml file](#example-docker-composeyml-file) section.
## Configuration settings
The `ApiKey` setting specifies the Azure resource key that's used to track billi
The `Billing` setting specifies the endpoint URI of the resource on Azure that's used to meter billing information for the container. The value for this configuration setting must be a valid endpoint URI for a resource on Azure. The container reports usage about every 10 to 15 minutes.
- You can find these settings in the Azure portal on the **Keys and Endpoint* *page.
+ You can find these settings in the Azure portal on the **Keys and Endpoint** page.
:::image type="content" source="../media/containers/keys-and-endpoint.png" alt-text="Screenshot: Azure portal keys and endpoint page.":::
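
To illustrate where the `ApiKey` and `Billing` values typically land, here's a minimal `docker-compose.yml` sketch for a single container. The image path, service name, and port are illustrative assumptions; substitute the values from your **Keys and Endpoint** page:

```yaml
version: "3.9"
services:
  azure-form-recognizer-layout:
    # Illustrative image path; use the image you were granted access to.
    image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout
    environment:
      - EULA=accept                                  # required license acceptance
      - billing=<your-form-recognizer-endpoint-uri>  # the Billing setting
      - apiKey=<your-form-recognizer-key>            # the ApiKey setting
    ports:
      - "5000:5000"
```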
cognitive-services Form Recognizer Container Install Run https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/containers/form-recognizer-container-install-run.md
Previously updated : 06/23/2021 Last updated : 07/01/2021 keywords: on-premises, Docker, container, identify
keywords: on-premises, Docker, container, identify
> [!IMPORTANT] >
-> Form Recognizer containers are in gated preview. To use them, you must submit an [online request](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUNlpBU1lFSjJUMFhKNzVHUUVLN1NIOEZETiQlQCN0PWcu), and have it approved. See [**Request approval to run container**](#request-approval-to-run-the-container) below for more information.
+> Form Recognizer containers are in gated preview. To use them, you must submit an [online request](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUNlpBU1lFSjJUMFhKNzVHUUVLN1NIOEZETiQlQCN0PWcu), and receive approval. See [**Request approval to run container**](#request-approval-to-run-the-container) below for more information.
Azure Form Recognizer is an Azure Applied AI Service that lets you build automated data processing software using machine learning technology. Form Recognizer enables you to identify and extract text, key/value pairs, selection marks, table data, and more from your form documents and output structured data that includes the relationships in the original file.
-In this article you'll learn how to download, install, and run Form Recognizer containers. Containers enable you to run the Form Recognizer service in your own environment. Containers are great for specific security and data governance requirements. Form Recognizer features are supported by seven Form Recognizer containers: **Layout**, **Business Card**, **ID Document**, **Receipt**, **Invoice**, **Custom API**, and **Custom Supervised** (for the Receipt, Business Card, and ID Document containers you will also need the **Read** OCR container).
+In this article, you'll learn how to download, install, and run Form Recognizer containers. Containers enable you to run the Form Recognizer service in your own environment. Containers are great for specific security and data governance requirements. Form Recognizer features are supported by six Form Recognizer feature containers: **Layout**, **Business Card**, **ID Document**, **Receipt**, **Invoice**, and **Custom** (for the Receipt, Business Card, and ID Document containers, you'll also need the **Read** OCR container).
## Prerequisites
The following host machine requirements are applicable to **train and analyze**
| Container | Minimum | Recommended | |--||-|
-| Custom API| 0.3 cores, 0.5-GB memory| 0.6 cores, 1-GB memory |
+| Custom API| 0.5 cores, 0.5-GB memory| 1 core, 1-GB memory |
|Custom Supervised | 4 cores, 2-GB memory | 8 cores, 4-GB memory| If you are only making analyze calls, the host machine requirements are as follows:
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/whats-new.md
Previously updated : 05/25/2021 Last updated : 07/01/2021
<!-- markdownlint-disable MD036 --> # What's new in Azure Form Recognizer
-Form Recognizer service is updated on an ongoing basis. Bookmark this page to stay up-to-date with release notes, feature enhancements, and documentation updates.
+Form Recognizer service is updated on an ongoing basis. Bookmark this page to stay up-to-date with release notes, feature enhancements, and documentation updates.
+
+## June 2021
+
+### Form Recognizer containers v2.1 released in gated preview
+
+Form Recognizer features are now supported by six feature containers: **Layout**, **Business Card**, **ID Document**, **Receipt**, **Invoice**, and **Custom**. To use them, you must submit an [online request](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUNlpBU1lFSjJUMFhKNzVHUUVLN1NIOEZETiQlQCN0PWcu), and receive approval.
+
+*See* [**Install and run Docker containers for Form Recognizer**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout) and [**Configure Form Recognizer containers**](containers/form-recognizer-container-configuration.md?branch=main).
+
+### Form Recognizer connector released in preview
+
+ The [**Form Recognizer connector**](/connectors/formrecognizer) integrates with [Azure Logic Apps](/azure/logic-apps/logic-apps-overview), [Microsoft Power Automate](/power-automate/getting-started), and [Microsoft Power Apps](/powerapps/powerapps-overview). The connector supports workflow actions and triggers to extract and analyze document data and structure from custom and prebuilt forms, invoices, receipts, business cards and ID documents.
+
+### Form Recognizer SDK v3.1.0 patched to v3.1.1 for C#, Java, and Python
+
+The patch addresses invoices in which sub-line item fields aren't detected, such as a `FormField` with `Text` but no `BoundingBox` or `Page` information.
+
+### [**C#**](#tab/csharp)
+
+| [Reference documentation](/dotnet/api/azure.ai.formrecognizer?view=azure-dotnet&preserve-view=true) | [NuGet package version 3.1.1](https://www.nuget.org/packages/Azure.AI.FormRecognizer) |
+
+### [**Java**](#tab/java)
+
+ | [Reference documentation](/java/api/com.azure.ai.formrecognizer.models?view=azure-java-stable&preserve-view=true)| [Maven artifact package depend