Updates from: 08/31/2022 01:10:19
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Partner Deduce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-deduce.md
To collect the user_agent from the client side, create your own **ContentDefinition**.
To customize the user interface, you specify a URL in the `ContentDefinition` element with customized HTML content. In the self-asserted technical profile or orchestration step, you point to that ContentDefinition identifier.
-1. Open the `TrustFrameworksExtension.xml` and define a new **ContentDefinition** to customize the [self-asserted technical profile](https://docs.microsoft.com/azure/active-directory-b2c/self-asserted-technical-profile).
+1. Open the `TrustFrameworksExtension.xml` and define a new **ContentDefinition** to customize the [self-asserted technical profile](/azure/active-directory-b2c/self-asserted-technical-profile).
1. Find the `BuildingBlocks` element and add the **api.selfassertedDeduce** ContentDefinition:
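The following is a minimal sketch of this ContentDefinition; the `LoadUri`, `DataUri`, and metadata values are illustrative placeholders rather than the exact values from the Deduce sample, and `LoadUri` should point to the customized HTML page that captures the user agent:

```xml
<ContentDefinitions>
  <!-- Hypothetical ContentDefinition for the self-asserted page that collects user_agent -->
  <ContentDefinition Id="api.selfassertedDeduce">
    <!-- Replace with the URL of your customized HTML content -->
    <LoadUri>https://yourstorageaccount.blob.core.windows.net/templates/selfAsserted.html</LoadUri>
    <RecoveryUri>~/common/default_page_error.html</RecoveryUri>
    <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.7</DataUri>
    <Metadata>
      <Item Key="DisplayName">Collect user agent</Item>
    </Metadata>
  </ContentDefinition>
</ContentDefinitions>
```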
The **ClaimsSchema** element defines the claim types that can be referenced as part of the policy.
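For example, a claim to hold the collected user agent could be declared as in the following sketch; the claim name `user_agent` is taken from this article, while the display name and help text are assumptions:

```xml
<BuildingBlocks>
  <ClaimsSchema>
    <!-- Claim that stores the user agent string collected on the client side -->
    <ClaimType Id="user_agent">
      <DisplayName>User agent</DisplayName>
      <DataType>string</DataType>
      <UserHelpText>The browser user agent collected during sign-in.</UserHelpText>
    </ClaimType>
  </ClaimsSchema>
</BuildingBlocks>
```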
### Step 6: Add Deduce ClaimsProvider
-A **claims provider** is an interface to communicate with different types of parties via its [technical profiles](https://docs.microsoft.com/azure/active-directory-b2c/technicalprofiles).
+A **claims provider** is an interface to communicate with different types of parties via its [technical profiles](/azure/active-directory-b2c/technicalprofiles).
- `SelfAsserted-UserAgent` self-asserted technical profile is used to collect user_agent from client-side.
-- `deduce_insight_api` technical profile sends data to the Deduce RESTful service in an input claims collection and receives data back in an output claims collection. For more information, see [integrate REST API claims exchanges in your Azure AD B2C custom policy](https://docs.microsoft.com/azure/active-directory-b2c/api-connectors-overview?pivots=b2c-custom-policy)
+- `deduce_insight_api` technical profile sends data to the Deduce RESTful service in an input claims collection and receives data back in an output claims collection. For more information, see [integrate REST API claims exchanges in your Azure AD B2C custom policy](/azure/active-directory-b2c/api-connectors-overview?pivots=b2c-custom-policy)
You can define Deduce as a claims provider by adding it to the **ClaimsProvider** element in the extension file of your policy.
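A minimal sketch of such a ClaimsProvider follows; the `ServiceUrl`, the authentication settings, and the `riskScore` output claim are assumptions for illustration only, since the actual Deduce integration defines its own metadata, cryptographic keys, and claims:

```xml
<ClaimsProvider>
  <DisplayName>Deduce REST API</DisplayName>
  <TechnicalProfiles>
    <!-- Calls the Deduce RESTful service with input claims and maps the response to output claims -->
    <TechnicalProfile Id="deduce_insight_api">
      <DisplayName>Deduce Insights API</DisplayName>
      <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.RestfulProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
      <Metadata>
        <!-- Placeholder endpoint; use the endpoint from your Deduce onboarding -->
        <Item Key="ServiceUrl">https://example.deduce.com/insights</Item>
        <Item Key="SendClaimsIn">Body</Item>
        <Item Key="AuthenticationType">None</Item>
      </Metadata>
      <InputClaims>
        <InputClaim ClaimTypeReferenceId="user_agent" />
      </InputClaims>
      <OutputClaims>
        <!-- Hypothetical output claim returned by the service -->
        <OutputClaim ClaimTypeReferenceId="riskScore" />
      </OutputClaims>
    </TechnicalProfile>
  </TechnicalProfiles>
</ClaimsProvider>
```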
active-directory-b2c Tutorial Delete Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-delete-tenant.md
Previously updated : 09/20/2021 Last updated : 08/30/2022
# Clean up resources and delete the tenant
-When you've finished the Azure AD B2C tutorials, you can delete the tenant you used for testing or training. To delete the tenant, you'll first need to delete all tenant resources. In this article, you'll:
+When you've finished the Azure Active Directory B2C (Azure AD B2C) tutorials, you can delete the tenant you used for testing or training. To delete the tenant, you'll first need to delete all tenant resources. In this article, you'll:
> [!div class="checklist"]
> * Use the **Delete tenant** option to identify cleanup tasks
When you've finished the Azure AD B2C tutorials, you can delete the tenant you u
## Identify cleanup tasks
1. Sign in to the [Azure portal](https://portal.azure.com/) with a global administrator or subscription administrator role. Use the same work or school account or the same Microsoft account that you used to sign up for Azure.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
-1. Select the **Azure Active Directory** service.
-1. Under **Manage**, select **Properties**.
+1. Make sure you're using the directory that contains your Azure AD B2C tenant:
+ 1. Select the **Directories + subscriptions** icon in the portal toolbar.
+ 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select the **Switch** button next to it.
+1. In the Azure portal, search for and select the **Azure Active Directory** service.
+1. In the left menu, under **Manage**, select **Properties**.
1. Under **Access management for Azure resources**, select **Yes**, and then select **Save**.
-1. Sign out of the Azure portal and then sign back in to refresh your access. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
-1. Select the **Azure Active Directory** service.
-1. On the **Overview** page, select **Delete tenant**. The **Required action** column indicates the resources you'll need to remove before you can delete the tenant.
+1. Sign out of the Azure portal and then sign back in to refresh your access.
+1. Repeat step two to make sure you're using the directory that contains your Azure AD B2C tenant.
+1. In the Azure portal, search for and select the **Azure Active Directory** service.
+1. On the **Overview** page, select **Manage tenants**.
+1. On the **Manage tenants** page, select the checkbox next to the tenant you want to delete, and then, at the top of the page, select the **Delete** button. The **Required action** column indicates the resources you need to remove before you can delete the tenant.
![Delete tenant tasks](media/tutorial-delete-tenant/delete-tenant-tasks.png)
## Delete tenant resources
-If you have the confirmation page open from the previous section, you can use the links in the **Required action** column to open the Azure portal pages where you can remove these resources. Or, you can remove tenant resources from within the Azure AD B2C service using the following steps.
+If you have the confirmation page open from the previous section, you can use the links in the **Required action** column to open the Azure portal pages where you can remove these resources. Or, you can remove tenant resources from within the Azure AD B2C service using the following steps.
1. Sign in to the [Azure portal](https://portal.azure.com/) with a global administrator or subscription administrator role. Use the same work or school account or the same Microsoft account that you used to sign up for Azure.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
-1. Select the **Azure AD B2C** service. Or use the search box to find and select **Azure AD B2C**.
-1. Delete all users *except* the admin account you're currently signed in as: Under **Manage**, select **Users**. On the **All users** page, select the checkbox next to each user (except the admin account you're currently signed in as). Select **Delete**, and then select **Yes** when prompted.
+1. Make sure you're using the directory that contains your Azure AD B2C tenant:
+ 1. Select the **Directories + subscriptions** icon in the portal toolbar.
+ 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select the **Switch** button next to it.
+1. In the Azure portal, select the **Azure AD B2C** service, or search for and select **Azure AD B2C**.
+1. Delete all users *except* the admin account you're currently signed in as:
+ 1. Under **Manage**, select **Users**.
+ 1. On the **All users** page, select the checkbox next to each user (except the admin account you're currently signed in as).
+ 1. At the top of the page, select **Delete user**, and then select **Yes** when prompted.
![Delete users](media/tutorial-delete-tenant/delete-users.png)
-1. Delete app registrations and the *b2c-extensions-app*: Under **Manage**, select **App registrations**. Select the **All applications** tab. Select an application, and then select **Delete**. Repeat for all applications, including the **b2c-extensions-app** application.
+1. Delete app registrations and the *b2c-extensions-app*:
+ 1. Under **Manage**, select **App registrations**.
+ 1. Select the **All applications** tab.
+ 1. Select an application to open it, and then select the **Delete** button. Repeat for all applications, including the **b2c-extensions-app** application.
![Delete application](media/tutorial-delete-tenant/delete-applications.png)
-1. Delete any identity providers you configured: Under **Manage**, select **Identity providers**. Select an identity provider you configured, and then select **Remove**.
+1. Delete any identity providers you configured:
+ 1. Under **Manage**, select **Identity providers**.
+ 1. Select an identity provider you configured, and then select **Remove**.
![Delete identity provider](media/tutorial-delete-tenant/identity-providers.png)
-1. Delete user flows: Under **Policies**, select **User flows**. Next to each user flow, select the ellipses (...) and then select **Delete**.
+1. Delete user flows:
+ 1. Under **Policies**, select **User flows**.
+ 1. Next to each user flow, select the ellipses (...) and then select **Delete**.
![Delete user flows](media/tutorial-delete-tenant/user-flow.png)
-1. Delete policy keys: Under **Policies**, select **Identity Experience Framework**, and then select **Policy keys**. Next to each policy key, select the ellipses (...) and then select **Delete**.
+1. Delete policy keys:
+ 1. Under **Policies**, select **Identity Experience Framework**, and then select **Policy keys**.
+ 1. Next to each policy key, select the ellipses (...) and then select **Delete**.
-1. Delete custom policies: Under **Policies**, select **Identity Experience Framework**, select **Custom policies**, and then delete all policies.
+1. Delete custom policies:
+ 1. Under **Policies**, select **Identity Experience Framework**, and then select **Custom policies**.
+ 1. Next to each custom policy, select the ellipses (...) and then select **Delete**.
## Delete the tenant
+After you delete all the tenant resources, you can delete the tenant itself:
+ 1. Sign in to the [Azure portal](https://portal.azure.com/) with a global administrator or subscription administrator role. Use the same work or school account or the same Microsoft account that you used to sign up for Azure.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
-1. Select the **Azure Active Directory** service.
+1. Make sure you're using the directory that contains your Azure AD B2C tenant:
+ 1. Select the **Directories + subscriptions** icon in the portal toolbar.
+ 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select the **Switch** button next to it.
+1. In the Azure portal, search for and select the **Azure Active Directory** service.
1. If you haven't already granted yourself access management permissions, do the following:
- * Under **Manage**, select **Properties**.
- * Under **Access management for Azure resources**, select **Yes**, and then select **Save**.
- * Sign out of the Azure portal and then sign back in to refresh your access, and select the **Azure Active Directory** service.
+ 1. Under **Manage**, select **Properties**.
+ 1. Under **Access management for Azure resources**, select **Yes**, and then select **Save**.
+ 1. Sign out of the Azure portal and then sign back in to refresh your access, and select the **Azure Active Directory** service.
-1. On the **Overview** page, select **Delete tenant**.
+1. On the **Overview** page, select **Manage tenants**.
- ![Delete the tenant](media/tutorial-delete-tenant/delete-tenant.png)
+ :::image type="content" source="media/tutorial-delete-tenant/manage-tenant.png" alt-text="Screenshot of how to manage tenant for deletion.":::
+1. On the **Manage tenants** page, select the checkbox next to the tenant you want to delete, and then, at the top of the page, select the **Delete** button.
1. Follow the on-screen instructions to complete the process.
## Next steps
active-directory How To Mfa Additional Context https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-additional-context.md
description: Learn how to use additional context in MFA notifications
Previously updated : 08/08/2022 Last updated : 08/18/2022
# How to use additional context in Microsoft Authenticator app notifications (Preview) - Authentication Methods Policy
-This article covers how to improve the security of user sign-in by adding the application and location in Microsoft Authenticator app push notifications.
+This topic covers how to improve the security of user sign-in by adding the application name and geographic location of the sign-in to Microsoft Authenticator push and passwordless notifications. The schema for the API to enable application name and geographic location is currently being updated. **While the API is updated over the next two weeks, you should only use the Azure AD portal to enable application name and geographic location.**
## Prerequisites
-Your organization will need to enable Authenticator app push notifications for some users or groups using the new Authentication Methods Policy API.
+Your organization will need to enable Microsoft Authenticator push notifications for some users or groups by using the Azure AD portal. The new Authentication Methods Policy API will soon be ready as another configuration option.
>[!NOTE]
>Additional context can be targeted to only a single group, which can be dynamic or nested. On-premises synchronized security groups and cloud-only security groups are supported for the Authentication Method Policy.
## Passwordless phone sign-in and multifactor authentication
-When a user receives a Passwordless phone sign-in or MFA push notification in the Authenticator app, they'll see the name of the application that requests the approval and the location based on the IP address where the sign-in originated from.
+When a user receives a passwordless phone sign-in or MFA push notification in the Authenticator app, they'll see the name of the application that requests the approval and the location based on the IP address where the sign-in originated from.
:::image type="content" border="false" source="./media/howto-authentication-passwordless-phone/location.png" alt-text="Screenshot of additional context in the MFA push notification.":::
The additional context can be combined with [number matching](how-to-mfa-number-match.md).
:::image type="content" border="false" source="./media/howto-authentication-passwordless-phone/location-with-number-match.png" alt-text="Screenshot of additional context with number matching in the MFA push notification.":::
-### Policy schema changes
+## Enable additional context
->[!NOTE]
->In Graph Explorer, ensure you've consented to the **Policy.Read.All** and **Policy.ReadWrite.AuthenticationMethod** permissions.
+To enable application name or geographic location, complete the following steps:
-Identify a single target group for the schema configuration. Then use the following API endpoint to change the displayAppInformationRequiredState property to **enabled**:
+1. In the Azure AD portal, click **Security** > **Authentication methods** > **Microsoft Authenticator**.
+1. On the **Basics** tab, click **Yes** and **All users** to enable the policy for everyone, and change **Authentication mode** to **Any**.
+
+ Only users who are enabled for Microsoft Authenticator here can be included in the policy to show the application name or geographic location of the sign-in, or excluded from it. Users who aren't enabled for Microsoft Authenticator can't see application name or geographic location.
-https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
+ :::image type="content" border="true" source="./media/how-to-mfa-additional-context/enable-settings-additional-context.png" alt-text="Screenshot of how to enable Microsoft Authenticator settings for Any authentication mode.":::
->[!NOTE]
->For Passwordless phone sign-in, the Authenticator app does not retrieve policy information just in time for each sign-in request. Instead, the Authenticator app does a best effort retrieval of the policy once every 7 days. We understand this limitation is less than ideal and are working to optimize the behavior. In the meantime, if you want to force a policy update to test using additional context with Passwordless phone sign-in, you can remove and re-add the account in the Authenticator app.
-
-#### MicrosoftAuthenticatorAuthenticationMethodConfiguration properties
-
-**PROPERTIES**
-
-| Property | Type | Description |
-|||-|
-| ID | String | The authentication method policy identifier. |
-| state | authenticationMethodState | Possible values are: **enabled**<br>**disabled** |
-
-**RELATIONSHIPS**
-
-| Relationship | Type | Description |
-|--||-|
-| includeTargets | [microsoftAuthenticatorAuthenticationMethodTarget](/graph/api/resources/passwordlessmicrosoftauthenticatorauthenticationmethodtarget) collection | A collection of users or groups who are enabled to use the authentication method. |
-
-#### MicrosoftAuthenticator includeTarget properties
-
-**PROPERTIES**
-
-| Property | Type | Description |
-|-||-|
-| authenticationMode | String | Possible values are:<br>**any**: Both passwordless phone sign-in and traditional second factor notifications are allowed.<br>**deviceBasedPush**: Only passwordless phone sign-in notifications are allowed.<br>**push**: Only traditional second factor push notifications are allowed. |
-| ID | String | Object ID of an Azure AD user or group. |
-| targetType | authenticationMethodTargetType | Possible values are: **user**, **group**.<br>You can only set one group or user for additional context. |
-| displayAppInformationRequiredState | advancedConfigState | Possible values are:<br>**enabled** explicitly enables the feature for the selected group.<br>**disabled** explicitly disables the feature for the selected group.<br>**default** allows Azure AD to manage whether the feature is enabled or not for the selected group. |
+1. On the **Configure** tab, for **Show application name in push and passwordless notifications (Preview)**, change **Status** to **Enabled**, choose who to include or exclude from the policy, and click **Save**.
->[!NOTE]
->Additional context can only be enabled for a single group.
-
-#### Example of how to enable additional context for all users
-
-Change the **displayAppInformationRequiredState** from **default** to **enabled**.
-
-The value of Authentication Mode can be either **any** or **push**, depending on whether or not you also want to enable passwordless phone sign-in. In these examples, we'll use **any**, but if you don't want to allow passwordless, use **push**.
-
-You need to PATCH the entire includeTarget to prevent overwriting any previous configuration. In that case, do a GET first, update only the relevant fields, and then PATCH. The following example only shows the update to the **displayAppInformationRequiredState**.
-
-```json
-//Retrieve your existing policy via a GET.
-//Leverage the Response body to create the Request body section. Then update the Request body similar to the Request body as shown below.
-//Change the Query to PATCH and Run query
-
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
- "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
- "id": "MicrosoftAuthenticator",
- "state": "enabled",
- "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
- "includeTargets": [
- {
- "targetType": "group",
- "id": "all_users",
- "authenticationMode": "any",
- "displayAppInformationRequiredState": "enabled",
- "numberMatchingRequiredState": "enabled"
- }
- ]
-}
-
-```
-
-To confirm this update has applied, run the GET request below using the endpoint below.
-GET - https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
-
-
-#### Example of how to enable additional context for a single group
-
-Change the **displayAppInformationRequiredState** value from **default** to **enabled.**
-Change the **id** from **all_users** to the ObjectID of the group from the Azure AD portal.
-
-You need to PATCH the entire includeTarget to prevent overwriting any previous configuration. We recommend that you do a GET first, and then update only the relevant fields and then PATCH. The example below only shows the update to the **displayAppInformationRequiredState**.
-
-```json
-//Copy paste the below in the Request body section as shown below.
-//Leverage the Response body to create the Request body section. Then update the Request body similar to the Request body as shown below.
-//Change query to PATCH and run query
-
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
- "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
- "id": "MicrosoftAuthenticator",
- "state": "enabled",
- "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
- "includeTargets": [
- {
- "targetType": "group",
- "id": "1ca44590-e896-4dbe-98ed-b140b1e7a53a",
- "authenticationMode": "any",
- "displayAppInformationRequiredState": "enabled",
- "numberMatchingRequiredState": "enabled"
- }
- ]
-}
-```
-
-To verify, RUN GET again and verify the ObjectID
-GET https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
-
-
-#### Example of error when enabling additional context for multiple groups
-
-The PATCH request will fail with 400 Bad Request and the error will contain the following message:
-
-`Persistance of policy failed with error: You cannot enable multiple targets for feature 'Require Display App Information'. Choose only one of the following includeTargets to enable: aede0efe-c1b4-40dc-8ae7-2c402f23e312,aede0efe-c1b4-40dc-8ae7-2c402f23e317.`
-
-### Test the end-user experience
-Add the test user account to the Authenticator app. The account **doesn't** need to be enabled for phone sign-in.
-
-See the end-user experience of an Authenticator multifactor authentication push notification with additional context by signing into aka.ms/MFAsetup.
-
-### Turn off additional context
-
-To turn off additional context, you'll need to PATCH remove **displayAppInformationRequiredState** from **enabled** to **disabled**/**default**.
-
-```json
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
- "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
- "id": "MicrosoftAuthenticator",
- "state": "enabled",
- "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
- "includeTargets": [
- {
- "targetType": "group",
- "id": "all_users",
- "authenticationMode": "any",
- "displayAppInformationRequiredState": "enabled",
- "numberMatchingRequiredState": "default"
- }
- ]
-}
-```
-
-## Enable additional context in the portal
-
-To enable additional context in the Azure AD portal, complete the following steps:
-
-1. Sign in to the [Azure portal](https://portal.azure.com) using an account with *global administrator* permissions.
-1. Search for and select **Azure Active Directory**, then choose **Security** from the menu on the left-hand side.
-1. Under the **Manage** menu header, select **Authentication methods** > **Policies**.
-1. From the list of available authentication methods, select **Microsoft Authenticator**.
-
- ![Screenshot that shows how to select the Microsoft Authenticator policy.](./media/how-to-mfa-additional-context/select-microsoft-authenticator-policy.png)
-
-1. Select the target users, select the three dots on the right, and choose **Configure**.
-
- ![Screenshot of configuring Microsoft authenticator additional context.](./media/how-to-mfa-additional-context/configure-microsoft-authenticator.png)
+ :::image type="content" border="true" source="./media/how-to-mfa-additional-context/enable-app-name.png" alt-text="Screenshot of how to enable application name.":::
+
+ Then do the same for **Show geographic location in push and passwordless notifications (Preview)**.
+
+ :::image type="content" border="true" source="./media/how-to-mfa-additional-context/enable-geolocation.png" alt-text="Screenshot of how to enable geographic location.":::
-1. Select the **Authentication mode**, and then for **Show additional context in notifications (Preview)**, select **Enable**, and then select **Done**.
+ You can configure application name and geographic location separately. For example, the following policy enables application name and geographic location for all users but excludes the Operations group from seeing geographic location.
- ![Screenshot of enabling additional context.](media/howto-authentication-passwordless-phone/enable-additional-context.png)
+ :::image type="content" border="true" source="./media/how-to-mfa-additional-context/exclude.png" alt-text="Screenshot of how to enable application name and geographic location separately.":::
## Known issues
-Additional context isn't supported for Network Policy Server (NPS).
+Additional context is not supported for Network Policy Server (NPS) or Active Directory Federation Services (AD FS).
## Next steps
-[Authentication methods in Azure Active Directory - Microsoft Authenticator app](concept-authentication-authenticator-app.md)
+[Authentication methods in Azure Active Directory - Microsoft Authenticator app](concept-authentication-authenticator-app.md)
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
# How to use number matching in multifactor authentication (MFA) notifications (Preview) - Authentication Methods Policy
-This article covers how to enable number matching in Microsoft Authenticator push notifications to improve user sign-in security.
+This topic covers how to enable number matching in Microsoft Authenticator push notifications to improve user sign-in security. The schema for the API to enable number match is currently being updated. **While the API is updated over the next two weeks, you should only use the Azure AD portal to enable number match.**
>[!NOTE]
->Number matching is a key security upgrade to traditional second factor notifications in the Authenticator app that will be enabled by default for all tenants a few months after general availability (GA).<br>
+>Number matching is a key security upgrade to traditional second factor notifications in Microsoft Authenticator that will be enabled by default for all tenants a few months after general availability (GA).<br>
>We highly recommend enabling number matching in the near-term for improved sign-in security.
## Prerequisites
-Your organization will need to enable Authenticator (traditional second factor) push notifications for some users or groups using the new Authentication Methods Policy API. If your organization is using ADFS adapter or NPS extensions, please upgrade to the latest versions for a consistent experience.
+Your organization will need to enable Authenticator (traditional second factor) push notifications for some users or groups only by using the Azure AD portal. The new Authentication Methods Policy API will soon be ready as another configuration option. If your organization is using ADFS adapter or NPS extensions, please upgrade to the latest versions for a consistent experience.
## Number matching
Number matching is available for the following scenarios. When enabled, all scen
>[!NOTE] >For passwordless users, enabling or disabling number matching has no impact because it's already part of the passwordless experience.
+Number matching is not supported for Apple Watch notifications. Apple Watch users need to use their phone to approve notifications when number matching is enabled.
+
### Multifactor authentication
When a user responds to an MFA push notification using the Authenticator app, they'll be presented with a number. They need to type that number into the app to complete the approval.
To create the registry key that overrides push notifications:
Value = TRUE
1. Restart the NPS Service.
-### Policy schema changes
-
->[!NOTE]
->In Graph Explorer, ensure you've consented to the **Policy.Read.All** and **Policy.ReadWrite.AuthenticationMethod** permissions.
-
-Identify your single target group for the schema configuration. Then use the following API endpoint to change the numberMatchingRequiredState property to **enabled**:
-
-https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
--
-#### MicrosoftAuthenticatorAuthenticationMethodConfiguration properties
-
-**PROPERTIES**
+## Enable number matching
-| Property | Type | Description |
-|||-|
-| ID | String | The authentication method policy identifier. |
-| state | authenticationMethodState | Possible values are: **enabled**<br>**disabled** |
-
-**RELATIONSHIPS**
+To enable number matching, complete the following steps:
-| Relationship | Type | Description |
-|--||-|
-| includeTargets | [microsoftAuthenticatorAuthenticationMethodTarget](/graph/api/resources/passwordlessmicrosoftauthenticatorauthenticationmethodtarget) collection | A collection of users or groups who are enabled to use the authentication method. |
-
-#### MicrosoftAuthenticator includeTarget properties
-
-**PROPERTIES**
+1. In the Azure AD portal, click **Security** > **Authentication methods** > **Microsoft Authenticator**.
+1. On the **Basics** tab, click **Yes** and **All users** to enable the policy for everyone, and change **Authentication mode** to **Push**.
-| Property | Type | Description |
-|-||-|
-| authenticationMode | String | Possible values are:<br>**any**: Both passwordless phone sign-in and traditional second factor notifications are allowed.<br>**deviceBasedPush**: Only passwordless phone sign-in notifications are allowed.<br>**push**: Only traditional second factor push notifications are allowed. |
-| ID | String | Object ID of an Azure AD user or group. |
-| targetType | authenticationMethodTargetType | Possible values are: **user**, **group**.<br>Note: You'll be able to only set one group or user for number matching. |
-| numberMatchingRequiredState | advancedConfigState | Possible values are:<br>**enabled** explicitly enables the feature for the selected group.<br>**disabled** explicitly disables the feature for the selected group.<br>**default** allows Azure AD to manage whether the feature is enabled or not for the selected group. |
+ Only users who are enabled for Microsoft Authenticator here can be included in the policy to require number matching for sign-in, or excluded from it. Users who aren't enabled for Microsoft Authenticator can't see a number match.
->[!NOTE]
->Number matching can only be enabled for a single group.
-
-#### Example of how to enable number matching for all users
-
-You'll need to change the **numberMatchingRequiredState** from **default** to **enabled**.
-
-Note that the value of Authentication Mode can be either **any** or **push**, depending on whether or not you also want to enable passwordless phone sign-in. In these examples, we'll use **any**, but if you don't want to allow passwordless, use **push**.
+ :::image type="content" border="true" source="./media/how-to-mfa-number-match/enable-settings-number-match.png" alt-text="Screenshot of how to enable Microsoft Authenticator settings for Push authentication mode.":::
->[!NOTE]
->For passwordless users, enabling or disabling number matching has no impact because it's already part of the passwordless experience.
-
-You might need to patch the entire includeTarget to prevent overwriting any previous configuration. In that case, do a GET first, update only the relevant fields, and then PATCH. The following example only shows the update to the **numberMatchingRequiredState**.
-
-```json
-//Retrieve your existing policy via a GET.
-//Leverage the Response body to create the Request body section. Then update the Request body similar to the Request body as shown below.
-//Change the Query to PATCH and Run query
-
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
- "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
- "id": "MicrosoftAuthenticator",
- "state": "enabled",
- "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
- "includeTargets": [
- {
- "targetType": "group",
- "id": "all_users",
- "authenticationMode": "any",
- "displayAppInformationRequiredState": "enabled",
- "numberMatchingRequiredState": "enabled"
- }
- ]
-}
-
-```
-
-To confirm this update has applied, please run the GET request below using the endpoint below.
-GET - https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
-
-
-#### Example of how to enable number matching for a single group
-
-We'll need to change the **numberMatchingRequiredState** value from **default** to **enabled.**
-You'll need to change the **id** from **all_users** to the ObjectID of the group from the Azure AD portal.
-
-You need to PATCH the entire includeTarget to prevent overwriting any previous configuration. We recommend that you do a GET first, and then update only the relevant fields and then PATCH. The example below only shows the update to the **numberMatchingRequiredState**.
-
-```json
-//Copy paste the below in the Request body section as shown below.
-//Leverage the Response body to create the Request body section. Then update the Request body similar to the Request body as shown below.
-//Change query to PATCH and run query
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
- "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
- "id": "MicrosoftAuthenticator",
- "state": "enabled",
- "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
- "includeTargets": [
- {
- "targetType": "group",
- "id": "1ca44590-e896-4dbe-98ed-b140b1e7a53a",
- "authenticationMode": "any",
- "displayAppInformationRequiredState": "enabled",
- "numberMatchingRequiredState": "enabled"
- }
- ]
-}
-```
-
-To verify, RUN GET again and verify the ObjectID
-GET https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
-
-
-#### Example of error when enabling number matching for multiple groups
-
-The PATCH request will fail with 400 Bad Request and the error will contain the following message:
--
-`Persistance of policy failed with error: You cannot enable multiple targets for feature 'Require Number Matching'. Choose only one of the following includeTargets to enable: aede0efe-c1b4-40dc-8ae7-2c402f23e312,aede0efe-c1b4-40dc-8ae7-2c402f23e317.`
-
-### Test the end user experience
-Add the test user account to the Authenticator app. The account does **not** need to be enabled for phone sign-in.
-
-See the end user experience of an Authenticator MFA push notification with number matching by signing into aka.ms/MFAsetup.
-
-### Turn off number matching
-
-To turn number matching off, you'll need to PATCH remove **numberMatchingRequiredState** from **enabled** to **disabled**/**default**.
-
-```json
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
- "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
- "id": "MicrosoftAuthenticator",
- "state": "enabled",
- "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
- "includeTargets": [
- {
- "targetType": "group",
- "id": "all_users",
- "authenticationMode": "any",
- "displayAppInformationRequiredState": "enabled",
- "numberMatchingRequiredState": "default"
- }
- ]
-}
-```
-
-## Enable number matching in the portal
-
-To enable number matching in the Azure portal, complete the following steps:
-
-1. Sign-in to the [Azure portal](https://portal.azure.com) using an account with *global administrator* permissions.
-1. Search for and select **Azure Active Directory**, then choose **Security** from the menu on the left-hand side.
-1. Under the **Manage** menu header, select **Authentication methods** > **Policies**.
-1. From the list of available authentication methods, select **Microsoft Authenticator**.
-
- ![Screenshot that shows how to select the Microsoft Authenticator policy.](./media/how-to-mfa-number-match/select-microsoft-authenticator-policy.png)
-
-1. Select the target users, select the three dots on the right, and choose **Configure**.
-
- ![Screenshot of configuring number match.](./media/how-to-mfa-number-match/configure-microsoft-authenticator.png)
-
-1. Select the **Authentication mode**, and then for **Require number matching (Preview)**, select **Enable**, and then select **Done**.
-
- ![Screenshot of enabling number match configuration.](media/howto-authentication-passwordless-phone/enable-number-matching.png)
-
->[!NOTE]
->[Least privileged role in Azure Active Directory - Multifactor authentication](../roles/delegate-by-task.md#multi-factor-authentication)
+1. On the **Configure** tab, for **Require number matching for push notifications (Preview)**, change **Status** to **Enabled**, choose who to include or exclude from number matching, and click **Save**.
-Number matching isn't supported for Apple Watch notifications. Apple Watch need to use their phone to approve notifications when number matching is enabled.
+ :::image type="content" border="true" source="./media/how-to-mfa-number-match/number-match.png" alt-text="Screenshot of how to enable number matching.":::
## Next steps
active-directory Active Directory How To Integrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-how-to-integrate.md
Integration with the Microsoft identity platform comes with benefits that do not
### Advanced security features
-**Multi-factor authentication.** The Microsoft identity platform provides native multi-factor authentication. IT administrators can require multi-factor authentication to access your application, so that you do not have to code this support yourself. Learn more about [Multi-Factor Authentication](https://azure.microsoft.com/documentation/services/multi-factor-authentication/).
+**Multi-factor authentication.** The Microsoft identity platform provides native multi-factor authentication. IT administrators can require multi-factor authentication to access your application, so that you do not have to code this support yourself. Learn more about [Multi-Factor Authentication](/azure/multi-factor-authentication/).
**Anomalous sign in detection.** The Microsoft identity platform processes more than a billion sign-ins a day, while using machine learning algorithms to detect suspicious activity and notify IT administrators of possible problems. By supporting the Microsoft identity platform sign-in, your application gets the benefit of this protection. Learn more about [viewing Azure Active Directory access report](../reports-monitoring/overview-reports.md).
Integration with the Microsoft identity platform comes with benefits that do not
[Get started writing code](v2-overview.md#getting-started).
-[Sign users in using the Microsoft identity platform](./authentication-vs-authorization.md)
+[Sign users in using the Microsoft identity platform](./authentication-vs-authorization.md)
active-directory Howto Create Service Principal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-create-service-principal-portal.md
Previously updated : 10/11/2021 Last updated : 08/26/2022
# Use the portal to create an Azure AD application and service principal that can access resources
-This article shows you how to create a new Azure Active Directory (Azure AD) application and service principal that can be used with the role-based access control. When you have applications, hosted services, or automated tools that needs to access or modify resources, you can create an identity for the app. This identity is known as a service principal. Access to resources is restricted by the roles assigned to the service principal, giving you control over which resources can be accessed and at which level. For security reasons, it's always recommended to use service principals with automated tools rather than allowing them to log in with a user identity.
+This article shows you how to create a new Azure Active Directory (Azure AD) application and service principal that can be used with the role-based access control. When you have applications, hosted services, or automated tools that need to access or modify resources, you can create an identity for the app. This identity is known as a service principal. Access to resources is restricted by the roles assigned to the service principal, giving you control over which resources can be accessed and at which level. For security reasons, it's always recommended to use service principals with automated tools rather than allowing them to log in with a user identity.
This article shows you how to use the portal to create the service principal in the Azure portal. It focuses on a single-tenant application where the application is intended to run within only one organization. You typically use single-tenant applications for line-of-business applications that run within your organization. You can also [use Azure PowerShell to create a service principal](howto-authenticate-service-principal-powershell.md).
To check your subscription permissions:
1. Search for and select **Subscriptions**, or select **Subscriptions** on the **Home** page.
- ![Search](./media/howto-create-service-principal-portal/select-subscription.png)
+ :::image type="content" source="media/howto-create-service-principal-portal/select-subscription.png" alt-text="Screenshot of how to search subscription permissions.":::
1. Select the subscription you want to create the service principal in.
- ![Select subscription for assignment](./media/howto-create-service-principal-portal/select-one-subscription.png)
+ :::image type="content" source="media/howto-create-service-principal-portal/select-one-subscription.png" alt-text="Select subscription for assignment.":::
If you don't see the subscription you're looking for, select **global subscriptions filter**. Make sure the subscription you want is selected for the portal.
1. Select **My permissions**. Then, select **Click here to view complete access details for this subscription**.
- ![Select the subscription you want to create the service principal in](./media/howto-create-service-principal-portal/view-details.png)
+ :::image type="content" source="media/howto-create-service-principal-portal/view-details.png" alt-text="Select the subscription you want to create the service principal in.":::
1. Select **Role assignments** to view your assigned roles, and determine if you have adequate permissions to assign a role to an AD app. If not, ask your subscription administrator to add you to the User Access Administrator role. In the following image, the user is assigned the Owner role, which means that user has adequate permissions.
Let's jump straight into creating the identity. If you run into a problem, check
1. Select **Azure Active Directory**. 1. Select **App registrations**. 1. Select **New registration**.
-1. Name the application. Select a supported account type, which determines who can use the application. Under **Redirect URI**, select **Web** for the type of application you want to create. Enter the URI where the access token is sent to. You can't create credentials for a [Native application](../app-proxy/application-proxy-configure-native-client-application.md). You can't use that type for an automated application. After setting the values, select **Register**.
+1. Name the application, for example "example-app". Select a supported account type, which determines who can use the application. Under **Redirect URI**, select **Web** for the type of application you want to create. Enter the URI where the access token is sent to. You can't create credentials for a [Native application](../app-proxy/application-proxy-configure-native-client-application.md). You can't use that type for an automated application. After setting the values, select **Register**.
- ![Type a name for your application](./media/howto-create-service-principal-portal/create-app.png)
+ :::image type="content" source="media/howto-create-service-principal-portal/create-app.png" alt-text="Type a name for your application.":::
You've created your Azure AD application and service principal.
You can set the scope at the level of the subscription, resource group, or resource.
1. In the Azure portal, select the level of scope you wish to assign the application to. For example, to assign a role at the subscription scope, search for and select **Subscriptions**, or select **Subscriptions** on the **Home** page.
- ![For example, assign a role at the subscription scope](./media/howto-create-service-principal-portal/select-subscription.png)
+ :::image type="content" source="media/howto-create-service-principal-portal/select-subscription.png" alt-text="For example, assign a role at the subscription scope.":::
1. Select the particular subscription to assign the application to.
- ![Select subscription for assignment](./media/howto-create-service-principal-portal/select-one-subscription.png)
+ :::image type="content" source="media/howto-create-service-principal-portal/select-one-subscription.png" alt-text="Select subscription for assignment.":::
If you don't see the subscription you're looking for, select **global subscriptions filter**. Make sure the subscription you want is selected for the portal.
1. Select **Access control (IAM)**.
-1. Select Select **Add** > **Add role assignment** to open the **Add role assignment** page.
-1. Select the role you wish to assign to the application. For example, to allow the application to execute actions like **reboot**, **start** and **stop** instances, select the **Contributor** role. Read more about the [available roles](../../role-based-access-control/built-in-roles.md) By default, Azure AD applications aren't displayed in the available options. To find your application, search for the name and select it.
+1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
+1. In the **Role** tab, select the role you wish to assign to the application in the list. For example, to allow the application to execute actions like **reboot**, **start** and **stop** instances, select the **Contributor** role. Read more about the [available roles](../../role-based-access-control/built-in-roles.md).
- Assign the Contributor role to the application at the subscription scope. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+ Select the **Next** button to move to the **Members** tab. Select **Assign access to** > **User, group, or service principal**, and then select **Select members**. By default, Azure AD applications aren't displayed in the available options. To find your application, search by name (for example, "example-app") and select it from the returned list. Click the **Select** button, and then click the **Review + assign** button.
+ :::image type="content" source="media/howto-create-service-principal-portal/add-role-assignment.png" alt-text="Screenshot showing role assignment.":::
+
Your service principal is set up. You can start using it to run your scripts or apps. To manage your service principal (permissions, user consented permissions, see which users have consented, review permissions, see sign in information, and more), go to **Enterprise applications**. The next section shows how to get values that are needed when signing in programmatically.
When programmatically signing in, pass the tenant ID with your authentication re
1. From **App registrations** in Azure AD, select your application.
1. Copy the Directory (tenant) ID and store it in your application code.
- ![Copy the directory (tenant ID) and store it in your app code](./media/howto-create-service-principal-portal/copy-tenant-id.png)
+ :::image type="content" source="media/howto-create-service-principal-portal/copy-tenant-id.png" alt-text="Copy the directory (tenant ID) and store it in your app code.":::
The directory (tenant) ID can also be found in the default directory overview page.
1. Copy the **Application ID** and store it in your application code.
- ![Copy the application (client) ID](./media/howto-create-service-principal-portal/copy-app-id.png)
+ :::image type="content" source="media/howto-create-service-principal-portal/copy-app-id.png" alt-text="Copy the application (client) ID.":::
## Authentication: Two options
To upload the certificate:
1. Select **Certificates & secrets**.
1. Select **Certificates** > **Upload certificate** and select the certificate (an existing certificate or the self-signed certificate you exported).
- ![Select Upload certificate and select the one you want to add](./media/howto-create-service-principal-portal/upload-cert.png)
+ :::image type="content" source="media/howto-create-service-principal-portal/upload-cert.png" alt-text="Select Upload certificate and select the one you want to add.":::
1. Select **Add**.
If you choose not to use a certificate, you can create a new application secret.
After saving the client secret, the value of the client secret is displayed. Copy this value because you won't be able to retrieve the key later. You will provide the key value with the application ID to sign in as the application. Store the key value where your application can retrieve it.
- ![Copy the secret value because you can't retrieve this later](./media/howto-create-service-principal-portal/copy-secret.png)
+ :::image type="content" source="media/howto-create-service-principal-portal/copy-secret.png" alt-text="Copy the secret value because you can't retrieve this later.":::
## Configure access policies on resources
Keep in mind, you might need to configure additional permissions on resources that your application needs to access. For example, you must also [update a key vault's access policies](../../key-vault/general/security-features.md#privileged-access) to give your application access to keys, secrets, or certificates.
active-directory Groups Dynamic Rule Member Of https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-rule-member-of.md
# Group membership in a dynamic group (preview) in Azure Active Directory
-This feature preview in Azure Active Directory (Azure AD), part of Microsoft Entra, enables admins to create dynamic groups that populate by adding members of other groups using the memberOf attribute. Apps that couldn't read group-based membership previously in Azure AD can now read the entire membership of these new memberOf groups. Not only can these groups be used for apps, they can also be used for licensing assignments. The following diagram illustrates how you could create Dynamic-Group-A with members of Security-Group-X and Security-Group-Y. Members of the groups inside of Security-Group-X and Security-Group-Y don't become members of Dynamic-Group-A.
+This feature preview in Azure Active Directory (Azure AD), part of Microsoft Entra, enables admins to create dynamic groups and administrative units that populate by adding members of other groups using the memberOf attribute. Apps that couldn't read group-based membership previously in Azure AD can now read the entire membership of these new memberOf groups. Not only can these groups be used for apps, they can also be used for licensing assignments. The following diagram illustrates how you could create Dynamic-Group-A with members of Security-Group-X and Security-Group-Y. Members of the groups inside of Security-Group-X and Security-Group-Y don't become members of Dynamic-Group-A.
:::image type="content" source="./media/groups-dynamic-rule-member-of/member-of-diagram.png" alt-text="Diagram showing how the memberOf attribute works.":::
Only administrators in the Global Administrator, Intune Administrator, or User A
- MemberOf can't be used with other rules. For example, a rule that states dynamic group A should contain members of group B and also should contain only users located in Redmond will fail.
- Dynamic group rule builder and validate feature can't be used for memberOf at this time.
- MemberOf can't be used with other operators. For example, you can't create a rule that states "Members Of group A can't be in Dynamic group B."
+- The objects specified in the rule can't be administrative units.
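As an illustration of the membership rule syntax this feature uses, Dynamic-Group-A from the earlier diagram could be built with a rule like the following sketch, where the object IDs are placeholders for your actual group object IDs:

```
user.memberof -any (group.objectId -in ['<object-id-of-Security-Group-X>', '<object-id-of-Security-Group-Y>'])
```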
## Getting started
active-directory Plan Hybrid Identity Design Considerations Data Protection Strategy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-data-protection-strategy.md
Once authenticated, the user principal name (UPN) is read from the authenticatio
Moving data from your on-premises datacenter into Azure Storage over an Internet connection may not always be feasible due to data volume, bandwidth availability, or other considerations. The [Azure Storage Import/Export Service](../../import-export/storage-import-export-service.md) provides a hardware-based option for placing/retrieving large volumes of data in blob storage. It allows you to send [BitLocker-encrypted](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn306081(v=ws.11)#BKMK_BL2012R2) hard disk drives directly to an Azure datacenter where cloud operators upload the contents to your storage account, or they can download your Azure data to your drives to return to you. Only encrypted disks are accepted for this process (using a BitLocker key generated by the service itself during the job setup). The BitLocker key is provided to Azure separately, thus providing out of band key sharing.
-Since data in transit can take place in different scenarios, is also relevant to know that Microsoft Azure uses [virtual networking](https://azure.microsoft.com/documentation/services/virtual-network/) to isolate tenants' traffic from one another, employing measures such as host- and guest-level firewalls, IP packet filtering, port blocking, and HTTPS endpoints. However, most of Azure's internal communications, including infrastructure-to-infrastructure and infrastructure-to-customer (on-premises), are also encrypted. Another important scenario is the communications within Azure datacenters; Microsoft manages networks to assure that no VM can impersonate or eavesdrop on the IP address of another. TLS/SSL is used when accessing Azure Storage or SQL Databases, or when connecting to Cloud Services. In this case, the customer administrator is responsible for obtaining a TLS/SSL certificate and deploying it to their tenant infrastructure. Data traffic moving between Virtual Machines in the same deployment or between tenants in a single deployment via Microsoft Azure Virtual Network can be protected through encrypted communication protocols such as HTTPS, SSL/TLS, or others.
+Since data in transit can take place in different scenarios, it is also relevant to know that Microsoft Azure uses [virtual networking](/azure/virtual-network/) to isolate tenants' traffic from one another, employing measures such as host- and guest-level firewalls, IP packet filtering, port blocking, and HTTPS endpoints. However, most of Azure's internal communications, including infrastructure-to-infrastructure and infrastructure-to-customer (on-premises), are also encrypted. Another important scenario is the communications within Azure datacenters; Microsoft manages networks to assure that no VM can impersonate or eavesdrop on the IP address of another. TLS/SSL is used when accessing Azure Storage or SQL Databases, or when connecting to Cloud Services. In this case, the customer administrator is responsible for obtaining a TLS/SSL certificate and deploying it to their tenant infrastructure. Data traffic moving between Virtual Machines in the same deployment or between tenants in a single deployment via Microsoft Azure Virtual Network can be protected through encrypted communication protocols such as HTTPS, SSL/TLS, or others.
Depending on how you answered the questions in [Determine data protection requirements](plan-hybrid-identity-design-considerations-dataprotection-requirements.md), you should be able to determine how you want to protect your data and how the hybrid identity solution can assist you with that process. The following table shows the options supported by Azure that are available for each data protection scenario.
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-version-history.md
This article helps you keep track of the versions that have been released and un
You can upgrade your Azure AD Connect server to the latest version from any supported version:
+You can download the latest version of Azure AD Connect 2.0 from the [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=47594). See the [release notes for the latest V2.0 release](reference-connect-version-history.md#20280).
The following table lists related topics:
Required permissions | For permissions required to apply an update, see [Azure A
## Retiring Azure AD Connect 1.x versions > [!IMPORTANT]
-> *On August 31, 2022, all 1.x versions of Azure AD Connect will be retired because they include SQL Server 2012 components that will no longer be supported.* Upgrade to the most recent version of Azure AD Connect (2.x version) by that date or [evaluate and switch to Azure AD cloud sync](../cloud-sync/what-is-cloud-sync.md).
+> *As of August 31, 2022, all 1.x versions of Azure AD Connect are retired because they include SQL Server 2012 components that are no longer supported.* Upgrade to the most recent version of Azure AD Connect (2.x version) or [evaluate and switch to Azure AD cloud sync](../cloud-sync/what-is-cloud-sync.md).
+> Azure AD Connect V1.x will stop working on December 31st, due to the decommissioning of the ADAL library service on that date.
## Retiring Azure AD Connect 2.x versions > [!IMPORTANT]
Required permissions | For permissions required to apply an update, see [Azure A
> > The following versions will retire on 15 March 2023: >
+> - 2.0.91.0
> - 2.0.89.0 > - 2.0.88.0 > - 2.0.28.0
Required permissions | For permissions required to apply an update, see [Azure A
> > If you are not already using the latest release version of Azure AD Connect Sync, you should upgrade your Azure AD Connect Sync software before that date. >
-> This policy does not change the retirement of all 1.x versions of Azure AD Connect Sync on 31 August 2022, which is due to the retirement of the SQL Server 2012 and Azure AD Authentication Library (ADAL) components.
If you run a retired version of Azure AD Connect, it might unexpectedly stop working. You also might not have the latest security fixes, performance improvements, troubleshooting and diagnostic tools, and service enhancements. If you require support, we might not be able to provide you with the level of service your organization needs.
active-directory How To Assign Managed Identity Via Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-assign-managed-identity-via-azure-policy.md
For example, if the policy in this document is updating the managed identities o
## Next steps -- [Deploy Azure Monitor Agent](../../azure-monitor/agents/azure-monitor-agent-manage.md#using-azure-policy)
+- [Deploy Azure Monitor Agent](../../azure-monitor/agents/azure-monitor-agent-manage.md#use-azure-policy)
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
Users with this role can manage alerts and have global read-only access on secur
| [Office 365 Security & Compliance Center](https://support.office.com/article/About-Office-365-admin-roles-da585eea-f576-4f55-a1e0-87090b6aaa9d) | All permissions of the Security Reader role<br>View, investigate, and respond to security alerts | | [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/prepare-deployment) | All permissions of the Security Reader role<br>View, investigate, and respond to security alerts | | [Intune](/intune/role-based-access-control) | All permissions of the Security Reader role |
-| [Cloud App Security](/cloud-app-security/manage-admins) | All permissions of the Security Reader role |
+| [Microsoft Defender for Cloud Apps](/cloud-app-security/manage-admins) | All permissions of the Security Reader role<br>View, investigate, and respond to security alerts |
| [Microsoft 365 service health](/microsoft-365/enterprise/view-service-health) | View the health of Microsoft 365 services | > [!div class="mx-tableFixed"]
Identity Protection Center | Read all security reports and settings information
[Office 365 Security & Compliance Center](https://support.office.com/article/About-Office-365-admin-roles-da585eea-f576-4f55-a1e0-87090b6aaa9d) | View security policies<br>View and investigate security threats<br>View reports [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/prepare-deployment) | View and investigate alerts. When you turn on role-based access control in Microsoft Defender for Endpoint, users with read-only permissions such as the Azure AD Security Reader role lose access until they are assigned to a Microsoft Defender for Endpoint role. [Intune](/intune/role-based-access-control) | Views user, device, enrollment, configuration, and application information. Cannot make changes to Intune.
-[Cloud App Security](/cloud-app-security/manage-admins) | Has read permissions and can manage alerts
+[Microsoft Defender for Cloud Apps](/cloud-app-security/manage-admins) | Has read permissions.
[Microsoft 365 service health](/office365/enterprise/view-service-health) | View the health of Microsoft 365 services > [!div class="mx-tableFixed"]
active-directory Amazon Web Service Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/amazon-web-service-tutorial.md
To configure the integration of AWS Single-Account Access into Azure AD, you nee
1. In the **Add from the gallery** section, type **AWS Single-Account Access** in the search box. 1. Select **AWS Single-Account Access** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](https://docs.microsoft.com/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
## Configure and test Azure AD SSO for AWS Single-Account Access
active-directory Atlassian Cloud Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/atlassian-cloud-tutorial.md
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * Atlassian Cloud single sign-on (SSO) enabled subscription.
-* To enable Security Assertion Markup Language (SAML) single sign-on for Atlassian Cloud products, you need to set up Atlassian Access. Learn more about [Atlassian Access]( https://www.atlassian.com/enterprise/cloud/identity-manager).
+* To enable Security Assertion Markup Language (SAML) single sign-on for Atlassian Cloud products, you need to set up Atlassian Access. Learn more about [Atlassian Access](https://www.atlassian.com/enterprise/cloud/identity-manager).
> [!NOTE] > This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
To configure the integration of Atlassian Cloud into Azure AD, you need to add A
1. In the **Add from the gallery** section, type **Atlassian Cloud** in the search box. 1. Select **Atlassian Cloud** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](https://docs.microsoft.com/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
## Configure and test Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Select a Single sign-on method** page, select **SAML**.
- ![SAML in azure](./media/atlassian-cloud-tutorial/azure.png)
+ ![SAML in Azure](./media/atlassian-cloud-tutorial/azure.png)
1. On the **Set up Single Sign-On with SAML** page, scroll down to **Set Up Atlassian Cloud**.
active-directory Aws Single Sign On Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/aws-single-sign-on-tutorial.md
To configure the integration of AWS IAM Identity Center into Azure AD, you need
1. In the **Add from the gallery** section, type **AWS IAM Identity Center** in the search box. 1. Select **AWS IAM Identity Center** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](https://docs.microsoft.com/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
## Configure and test Azure AD SSO for AWS IAM Identity Center
You can also use Microsoft My Apps to test the application in any mode. When you
## Next steps
-Once you configure AWS IAM Identity Center, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure AWS IAM Identity Center, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Cisco Anyconnect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cisco-anyconnect.md
To configure the integration of Cisco AnyConnect into Azure AD, you need to add
1. In the **Add from the gallery** section, type **Cisco AnyConnect** in the search box. 1. Select **Cisco AnyConnect** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](https://docs.microsoft.com/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
## Configure and test Azure AD SSO for Cisco AnyConnect
active-directory Docusign Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/docusign-tutorial.md
To configure the integration of DocuSign into Azure AD, you must add DocuSign fr
1. In the **Add from the gallery** section, type **DocuSign** in the search box. 1. Select **DocuSign** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](https://docs.microsoft.com/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
## Configure and test Azure AD SSO for DocuSign
active-directory Fortigate Ssl Vpn Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fortigate-ssl-vpn-tutorial.md
To configure the integration of FortiGate SSL VPN into Azure AD, you need to add
1. In the **Add from the gallery** section, enter **FortiGate SSL VPN** in the search box. 1. Select **FortiGate SSL VPN** in the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](https://docs.microsoft.com/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
## Configure and test Azure AD SSO for FortiGate SSL VPN
active-directory Google Apps Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/google-apps-tutorial.md
To configure the integration of Google Cloud / G Suite Connector by Microsoft in
1. In the **Add from the gallery** section, type **Google Cloud / G Suite Connector by Microsoft** in the search box. 1. Select **Google Cloud / G Suite Connector by Microsoft** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](https://docs.microsoft.com/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
## Configure and test Azure AD single sign-on for Google Cloud / G Suite Connector by Microsoft
active-directory Saml Toolkit Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/saml-toolkit-tutorial.md
To configure the integration of Azure AD SAML Toolkit into Azure AD, you need to
1. In the **Add from the gallery** section, type **Azure AD SAML Toolkit** in the search box. 1. Select **Azure AD SAML Toolkit** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](https://docs.microsoft.com/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
## Configure and test Azure AD SSO for Azure AD SAML Toolkit
active-directory Servicenow Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/servicenow-tutorial.md
To configure the integration of ServiceNow into Azure AD, you need to add Servic
1. In the **Add from the gallery** section, enter **ServiceNow** in the search box. 1. Select **ServiceNow** from results panel, and then add the app. Wait a few seconds while the app is added to your tenant.
-Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](https://docs.microsoft.com/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
## Configure and test Azure AD SSO for ServiceNow
active-directory Slack Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/slack-tutorial.md
To configure the integration of Slack into Azure AD, you need to add Slack from
1. In the **Add from the gallery** section, type **Slack** in the search box. 1. Select **Slack** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](https://docs.microsoft.com/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
## Configure and test Azure AD SSO for Slack
api-management Api Management Howto Developer Portal Customize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-developer-portal-customize.md
Learn more about the developer portal:
- [Azure API Management developer portal overview](api-management-howto-developer-portal.md) - [Migrate to the new developer portal](developer-portal-deprecated-migration.md) from the deprecated legacy portal.
+- Learn more about [customizing and extending](developer-portal-extend-custom-functionality.md) the functionality of the developer portal.
api-management Api Management Howto Developer Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-developer-portal.md
Migration to the new developer portal is described in the [dedicated documentati
Your API Management service includes a built-in, always up-to-date, **managed** developer portal. You can access it from the Azure portal interface.
-Customize and style the managed portal through the built-in, drag-and-drop visual editor:
+[Customize and style](api-management-howto-developer-portal-customize.md) the managed portal through the built-in, drag-and-drop visual editor:
* Use the visual editor to modify pages, media, layouts, menus, styles, or website settings. * Take advantage of built-in widgets to add text, images, buttons, and other objects that the portal supports out-of-the-box.
-* [Add custom HTML](developer-portal-faq.md#how-do-i-add-custom-html-to-my-developer-portal) - for example, add HTML for a form or to embed a video player. The custom code is rendered in an inline frame (iframe).
-
-See [this tutorial](api-management-howto-developer-portal-customize.md) for example customizations.
- > [!NOTE] > The managed developer portal receives and applies updates automatically. Changes that you've saved but not published to the developer portal remain in that state during an update.
-## <a name="managed-vs-self-hosted"></a> Extensibility
-
-In some cases you might need functionality beyond the customization and styling options supported in the managed developer portal. If you need to implement custom logic, which isn't supported out-of-the-box, you can modify the portal's codebase, available on [GitHub](https://github.com/Azure/api-management-developer-portal). For example, you could create a new widget to integrate with a third-party support system. When you implement new functionality, you can choose one of the following options:
--- **Self-host** the resulting portal outside of your API Management service. When you self-host the portal, you become its maintainer and you are responsible for its upgrades. Azure Support's assistance is limited only to the [basic setup of self-hosted portals](developer-portal-self-host.md).-- Open a pull request for the API Management team to merge new functionality to the **managed** portal's codebase.-
-For extensibility details and instructions, refer to the [GitHub repository](https://github.com/Azure/api-management-developer-portal) and the tutorial to [implement a widget](developer-portal-implement-widgets.md). The tutorial to [customize the managed portal](api-management-howto-developer-portal-customize.md) walks you through the portal's administrative panel, which is common for **managed** and **self-hosted** versions.
+## <a name="managed-vs-self-hosted"></a> Options to extend portal functionality
+In some cases, you might need functionality beyond the customization and styling options provided in the managed developer portal. If you need to implement custom logic that isn't supported out of the box, you have [several options](developer-portal-extend-custom-functionality.md):
+* [Add custom HTML](developer-portal-extend-custom-functionality.md#use-custom-html-code-widget) directly through a developer portal widget designed for small customizations - for example, add HTML for a form or to embed a video player. The custom code is rendered in an inline frame (iframe).
+* [Create and upload a custom widget](developer-portal-extend-custom-functionality.md#create-and-upload-custom-widget) to develop and add more complex custom portal features.
+* [Self-host the portal](developer-portal-self-host.md) only if you need to make modifications to the core of the developer portal [codebase](https://github.com/Azure/api-management-developer-portal). This option requires advanced configuration. Azure Support's assistance is limited to the basic setup of self-hosted portals.
+> [!NOTE]
+> Because the API Management developer portal codebase is maintained on [GitHub](https://github.com/Azure/api-management-developer-portal), you can open issues and make pull requests for the API Management team to merge new functionality at any time.
+>
## Next steps Learn more about the developer portal: - [Access and customize the managed developer portal](api-management-howto-developer-portal-customize.md)
+- [Extend functionality of the managed developer portal](developer-portal-extend-custom-functionality.md)
- [Set up self-hosted version of the portal](developer-portal-self-host.md)-- [Implement your own widget](developer-portal-implement-widgets.md) Browse other resources:
api-management Developer Portal Extend Custom Functionality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-extend-custom-functionality.md
+
+ Title: Add custom functionality to the Azure API Management developer portal
+
+description: How to customize the managed API Management developer portal with custom functionality such as custom widgets.
++ Last updated : 08/29/2022++++
+# Extend the developer portal with custom features
+
+The API Management [developer portal](api-management-howto-developer-portal.md) features a visual editor and built-in widgets so that you can customize and style the portal's appearance. However, you may need to customize the developer portal further with custom functionality. For example, you might want to integrate your developer portal with a support system that involves adding a custom interface. This article explains ways to add custom functionality such as custom widgets to your API Management developer portal.
+
+The following table summarizes three options, with links to more detail.
++
+|Method |Description |
+|---------|---------|
+|[Custom HTML code widget](#use-custom-html-code-widget) | - Lightweight solution for API publishers to add custom logic for basic use cases<br/><br/>- Copy and paste custom HTML code into a form, and the developer portal renders it in an iframe |
+|[Create and upload custom widget](#create-and-upload-custom-widget) | - Developer solution for more advanced widget use cases<br/><br/>- Requires local implementation in React, Vue, or plain TypeScript<br/><br/>- Widget scaffold and tools provided to help developers create widget and upload to developer portal<br/><br/>- Supports workflows for source control, versioning, and code reuse<br/><br/> |
+|[Self-host developer portal](developer-portal-self-host.md) | - Legacy extensibility option for customers who need to customize source code of the entire portal core<br/><br/> - Gives complete flexibility for customizing portal experience<br/><br/>- Requires advanced configuration<br/><br/>- Customer responsible for managing complete code lifecycle: fork code base, develop, deploy, host, patch, and upgrade |
++
+## Use Custom HTML code widget
+
+The managed developer portal includes a **Custom HTML code** widget where you can insert HTML code for small portal customizations. For example, use custom HTML to embed a video or to add a form. The portal renders the custom widget in an inline frame (iframe).
+
+1. In the administrative interface for the developer portal, go to the page or section where you want to insert the widget.
+1. Select the grey "plus" (**+**) icon that appears when you hover the pointer over the page.
+1. In the **Add widget** window, select **Custom HTML code**.
+
+ :::image type="content" source="media/developer-portal-extend-custom-functionality/add-custom-html-code-widget.png" alt-text="Screenshot that shows how to add a widget for custom HTML code in the developer portal.":::
+1. Select the "pencil" icon to customize the widget.
+1. Enter a **Width** and **Height** (in pixels) for the widget.
+1. To inherit styles from the developer portal (recommended), select **Apply developer portal styling**.
+ > [!NOTE]
+ > If this setting isn't selected, the embedded elements will be plain HTML controls, without the styles of the developer portal.
+
+ :::image type="content" source="media/developer-portal-extend-custom-functionality/configure-html-custom-code.png" alt-text="Screenshot that shows how to configure HTML custom code in the developer portal.":::
+1. Replace the sample **HTML code** with your custom content.
+1. When configuration is complete, close the window.
+1. Save your changes, and [republish the portal](api-management-howto-developer-portal-customize.md#publish).
+
+> [!NOTE]
+> Microsoft does not support the HTML code you add in the Custom HTML Code widget.
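+
+For example, a minimal, hypothetical snippet for the **HTML code** field that embeds a video player (the URL and title are placeholders to replace with your own):
+
+```html
+<!-- Embedded video player; replace the src placeholder with your own video URL -->
+<iframe width="560" height="315" src="https://www.youtube.com/embed/<video-id>" title="Product overview" allowfullscreen></iframe>
+```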
+
+## Create and upload custom widget
+
+### Prerequisites
+
+* Install the [Node.js runtime](https://nodejs.org/en/) locally
+* Basic knowledge of programming and web development
+
+### Create widget
+
+1. In the administrative interface for the developer portal, select **Custom widgets** > **Create new custom widget**.
+1. Enter a widget name and choose a **Technology**. For more information, see [Widget templates](#widget-templates), later in this article.
+1. Select **Create widget**.
+1. Open a terminal, navigate to the location where you want to save the widget code, and run the following command to download the code scaffold:
+
+ ```
+ npx @azure/api-management-custom-widgets-scaffolder
+ ```
+1. Navigate to the newly created folder containing the widget's code scaffold.
+
+ ```
+ cd <name-of-widget>
+ ```
+
+1. Open the folder in your code editor of choice, such as VS Code.
+
+1. Install the dependencies and start the project:
+
+ ```
+ npm install
+ npm start
+ ```
+
+ Your browser should open a new tab with your developer portal connected to your widget in development mode.
+
+ > [!NOTE]
+ > If the tab doesn't open, do the following:
+ > 1. Make sure the development server started. To do that, check the output on the console where you started the server in the previous step. It should display the address the server is running on (for example, `http://127.0.0.1:3001`).
+ > 1. Go to your API Management service in the Azure portal and open your developer portal with the administrative interface.
+ > 1. Append `/?MS_APIM_CW_localhost_port=3001` to the URL. Change the port number if your server runs on a different port.
+
+1. Implement the code of the widget and test it locally. The code of the widget is located in the `src` folder, in the following subfolders:
+
+ * **`app`** - Code for the widget component that visitors to the published developer portal see and interact with
+ * **`editor`** - Code for the widget component that you use in the administrative interface of the developer portal to edit widget settings
+
+ The `values.ts` file contains the default values and types of the widget's custom properties you can enable for editing.
+
+ :::image type="content" source="media/developer-portal-extend-custom-functionality/widget-custom-properties.png" alt-text="Screenshot of custom properties page in developer portal.":::
+
+ Custom properties let you adjust values in the custom widget's instance from the administrative user interface of the developer portal, without changing the code or redeploying the custom widget. The values object defined in `values.ts` needs to be passed to some of the widget's helper functions.
+
+### Deploy the custom widget to the developer portal
+
+1. Specify the following values in the `deploy.js` file located in the root of your project:
+
+ * `resourceId` - Resource ID of your API Management service, in the following format: `subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ApiManagement/service/<api-management-service-name>`
+
+ * `managementApiEndpoint` - Azure Management API endpoint (depends on your environment, typically `management.azure.com`)
+
+ * `apiVersion` - Optional, use to override the default management API version
+
+1. Run the following command:
+
+ ```
+ npm run deploy
+ ```
+
+ If prompted, sign in to your Azure account.
++
+The custom widget is now deployed to your developer portal. Using the portal's administrative interface, you can add it on pages in the developer portal and set values for any custom properties configured in the widget.
+
+### Publish the developer portal
+
+After you configure the widget in the administrative interface, [republish the portal](api-management-howto-developer-portal-customize.md#publish) to make the widget available in production.
+
+> [!NOTE]
+> * If you deploy updated widget code at a later date, the widget used in production doesn't update until you republish the developer portal.
+> * The widget's compiled code is associated with a specific portal *revision*. If you make a previous portal revision current, the custom widget associated with that revision is used.
+
+### Widget templates
+
+We provide templates for the following technologies you can use for the widget:
+
+* **TypeScript** (pure implementation without any framework)
+* **React**
+* **Vue**
+
+All templates are based on the TypeScript programming language.
+
+The React template contains prepared custom hooks in the `hooks.ts` file and established providers for sharing context through the component tree with dedicated `useSecrets`, `useValues`, and `useEditorValues` hooks.
+
+### Use the `@azure/api-management-custom-widgets-tools` package
+
+This [npm package](https://www.npmjs.com/package/@azure/api-management-custom-widgets-tools) contains the following functions to help you develop your custom widget and provides features including communication between the developer portal and your widget:
++
+|Function |Description |
+|||
+|[getValues](#azureapi-management-custom-widgets-toolsgetvalues) | Returns a JSON object containing values set in the widget editor combined with default values |
+|[getEditorValues](#azureapi-management-custom-widgets-toolsgeteditorvalues) | Returns a JSON object containing only values set in the widget editor |
+|[buildOnChange](#azureapi-management-custom-widgets-toolsbuildonchange) | Accepts a TypeScript type and returns a function to update the widget values. The returned function takes as parameter a JSON object with updated values and doesn't return anything.<br/><br/>Used internally in widget editor |
+|[askForSecrets](#azureapi-management-custom-widgets-toolsaskforsecrets) | Returns a JavaScript promise, which after resolution returns a JSON object of data needed to communicate with backend |
+|[deployNodeJs](#azureapi-management-custom-widgets-toolsdeploynodejs) | Deploys widget to blob storage |
+|[getWidgetData](#azureapi-management-custom-widgets-toolsgetwidgetdata) | Returns all data passed to your custom widget from the developer portal<br/><br/>Used internally in templates |
++
+#### `@azure/api-management-custom-widgets-tools/getValues`
+
+Function that returns a JSON object containing the values you've set in the widget editor combined with default values, passed as an argument.
+
+```JavaScript
+import {getValues} from "@azure/api-management-custom-widgets-tools/getValues"
+import {valuesDefault} from "./values"
+const values = getValues(valuesDefault)
+```
+
+It's intended to be used in the runtime (`app`) part of your widget.
+
+#### `@azure/api-management-custom-widgets-tools/getEditorValues`
+
+Function that works the same way as `getValues`, but returns only values you've set in the editor.
+
+It's intended to be used in the editor of your widget but also works in runtime.
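+
+A minimal sketch, assuming the same per-function import path as `getValues`:
+
+```JavaScript
+import {getEditorValues} from "@azure/api-management-custom-widgets-tools/getEditorValues"
+
+// Only the values explicitly set in the widget editor; default values are not merged in
+const editorValues = getEditorValues()
+```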
+
+#### `@azure/api-management-custom-widgets-tools/buildOnChange`
+
+> [!NOTE]
+> This function is intended to be used only in the widget editor.
+
+Accepts a TypeScript type and returns a function to update the widget values. The returned function takes as parameter a JSON object with updated values and doesn't return anything.
+
+```JavaScript
+// Import path assumed to follow the same per-function pattern as the other helpers
+import {buildOnChange} from "@azure/api-management-custom-widgets-tools/buildOnChange"
+import {Values} from "./values"
+const onChange = buildOnChange<Values>()
+onChange({fieldKey: 'newValue'})
+```
+
+#### `@azure/api-management-custom-widgets-tools/askForSecrets`
+
+This function returns a JavaScript promise, which after resolution returns a JSON object of data needed to communicate with backend. `token` is needed for authentication. `userId` is needed to query user-specific resources. Those values might be undefined when the portal is viewed by an anonymous user. The `Secrets` object also contains `managementApiUrl`, which is the URL of your portal's backend, and `apiVersion`, which is the apiVersion currently used by the developer portal.
+
+> [!CAUTION]
+> Manage and use the token carefully. Anyone who has it can access data in your API Management service.
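+
+A usage sketch, assuming `askForSecrets` takes no arguments and follows the same per-function import path as the other helpers:
+
+```JavaScript
+import {askForSecrets} from "@azure/api-management-custom-widgets-tools/askForSecrets"
+
+async function initialize() {
+  // token and userId can be undefined when an anonymous user views the portal
+  const {token, userId, managementApiUrl, apiVersion} = await askForSecrets()
+  if (token && userId) {
+    // Use the token to call your portal's backend on behalf of the signed-in user
+    console.log(`Calling ${managementApiUrl} (API version ${apiVersion}) for user ${userId}`)
+  }
+}
+```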
++
+#### `@azure/api-management-custom-widgets-tools/deployNodeJs`
+
+This function deploys your widget to your blob storage. In all templates, it's preconfigured in the `deploy.js` file.
+
+It accepts three arguments by default:
+
+* `serviceInformation` – Information about your Azure service:
+
+ * `resourceId` - Resource ID of your API Management service, in the following format: `subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ApiManagement/service/<api-management-service-name>`
+
+ * `managementApiEndpoint` - Azure management API endpoint (depends on your environment, typically `management.azure.com`)
+
+* ID of your widget – Name of your widget in "PC-friendly" format (Latin alphanumeric lowercase characters and dashes; `Contoso widget` becomes `contoso-widget`). You can find it in the `package.json` under the `name` key.
+
+* `fallbackConfigPath` – Path for the local `config.msapim.json` file, for example, `./static/config.msapim.json`
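+
+Putting the arguments together, a hypothetical `deploy.js` might look like the following sketch (the import path and placeholder values are assumptions to replace with your own):
+
+```JavaScript
+import {deployNodeJs} from "@azure/api-management-custom-widgets-tools/deployNodeJs"
+
+const serviceInformation = {
+  resourceId: "subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ApiManagement/service/<api-management-service-name>",
+  managementApiEndpoint: "management.azure.com",
+}
+
+// Widget ID ("PC-friendly" name from package.json) and the local fallback config path
+deployNodeJs(serviceInformation, "contoso-widget", "./static/config.msapim.json")
+```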
+
+#### `@azure/api-management-custom-widgets-tools/getWidgetData`
+
+> [!NOTE]
+> This function is used internally in templates. In most implementations you shouldn't need it otherwise.
+
+This function returns all data passed to your custom widget from the developer portal. It contains other data that might be useful in debugging or in more advanced scenarios. This API is expected to change and might introduce breaking changes. It returns a JSON object that contains the following keys:
+
+* `values` - All the values you've set in the editor, the same object that is returned by `getEditorValues`
+
+* `environment` - Current runtime environment for the widget
+
+* `origin` - Location origin of the developer portal
+
+* `instanceId` - ID of this instance of the widget
+
+### Add or remove custom properties
+
+Custom properties let you adjust values in the custom widget's code from the administrative user interface of the developer portal, without changing the code or redeploying the custom widget. By default, input fields for four custom properties are defined. You can add or remove other custom properties as needed.
+
+To add a custom property:
+
+1. In the file `src/values.ts`, add to the `Values` type the name of the property and type of the data it will save.
+1. In the same file, add a default value for it.
+1. Navigate to the `editor.html` or `editor/index` file (exact location depends on the framework you've chosen) and duplicate an existing input or add one yourself.
+1. Make sure the input field reports the changed value to the `onChange` function, which you can get from [`buildOnChange`](#azureapi-management-custom-widgets-toolsbuildonchange).
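+
+As a sketch of steps 1 and 2, adding a hypothetical `title` property in `src/values.ts` might look like this:
+
+```typescript
+export type Values = {
+  // ...existing properties...
+  title: string // new custom property
+}
+
+export const valuesDefault: Values = {
+  // ...existing defaults...
+  title: "Hello visitors!", // default value, editable in the widget editor
+}
+```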
+
+### (Optional) Use another framework
+
+To implement your widget using another JavaScript UI framework and libraries, you need to set up the project yourself with the following guidelines:
+
+* In most cases, we recommend that you start from the TypeScript template.
+* Install dependencies as in any other npm project.
+* If your framework of choice isn't compatible with [Vite build tool](https://vitejs.dev/), configure it so that it outputs compiled files to the `./dist` folder. Optionally, redefine where the compiled files are located by providing a relative path as the fourth argument for the [`deployNodeJs`](#azureapi-management-custom-widgets-toolsdeploynodejs) function.
+* For local development, the `config.msapim.json` file must be accessible at the URL `localhost:<port>/config.msapim.json` when the server is running.
+++
+## Next steps
+
+Learn more about the developer portal:
+
+- [Azure API Management developer portal overview](api-management-howto-developer-portal.md)
+- [Frequently asked questions](developer-portal-faq.md)
+- [Scaffolder of a custom widget for developer portal of Azure API Management service](https://www.npmjs.com/package/@azure/api-management-custom-widgets-scaffolder)
+- [Tools for working with custom widgets of developer portal of Azure API Management service](https://www.npmjs.com/package/@azure/api-management-custom-widgets-tools)
api-management Developer Portal Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-faq.md
You have the following options:
-* For certain situations, you can [add custom HTML](#how-do-i-add-custom-html-to-my-developer-portal) to add functionality to the portal.
+* For small customizations, use a built-in widget to [add custom HTML](developer-portal-extend-custom-functionality.md#use-custom-html-code-widget).
+
+* For larger customizations, [create and upload](developer-portal-extend-custom-functionality.md#create-and-upload-custom-widget) a custom widget to the managed developer portal.
+
+* [Self-host the developer portal](developer-portal-self-host.md) only if you need to make modifications to the core of the developer portal codebase.
* Open a feature request in the [GitHub repository](https://github.com/Azure/api-management-developer-portal).
-* [Implement the missing functionality yourself](developer-portal-implement-widgets.md).
+Learn more about [customizing and extending](developer-portal-extend-custom-functionality.md) the functionality of the developer portal.
-Learn more about developer portal [extensibility](api-management-howto-developer-portal.md#managed-vs-self-hosted).
## Can I have multiple developer portals in one API Management service?
You can generate *user-specific tokens* (including admin tokens) using the [Get
> [!NOTE] > The token must be URL-encoded.
-## How do I add custom HTML to my developer portal?
-
-The managed developer portal includes a **Custom HTML code** widget that enables you to insert HTML code for small portal customizations. For example, use custom HTML to embed a video or to add a form. The portal renders the custom widget in an inline frame (iframe).
-
-1. In the administrative interface for the developer portal, go to the page or section where you want to insert the widget.
-1. Select the grey "plus" (**+**) icon that appears when you hover the pointer over the page.
-1. In the **Add widget** window, select **Custom HTML code**.
-
- :::image type="content" source="media/developer-portal-faq/add-custom-html-code-widget.png" alt-text="Add widget for custom HTML code":::
-1. Select the "pencil" icon to customize the widget.
-1. Enter a **Width** and **Height** (in pixels) for the widget.
-1. To inherit styles from the developer portal (recommended), select **Apply developer portal styling**.
- > [!NOTE]
- > If this setting isn't selected, the embedded elements will be plain HTML controls, without the styles of the developer portal.
-
- :::image type="content" source="media/developer-portal-faq/configure-html-custom-code.png" alt-text="Configure HTML custom code":::
-1. Replace the sample **HTML code** with your custom content.
-1. When configuration is complete, close the window.
-1. Save your changes, and [republish the portal](api-management-howto-developer-portal-customize.md#publish).
-
-> [!NOTE]
-> Microsoft does not support the HTML code you add in the Custom HTML Code widget.
## Next steps Learn more about the developer portal: - [Access and customize the managed developer portal](api-management-howto-developer-portal-customize.md)
+- [Extend](developer-portal-extend-custom-functionality.md) the functionality of the developer portal.
- [Set up self-hosted version of the portal](developer-portal-self-host.md)-- [Implement your own widget](developer-portal-implement-widgets.md) Browse other resources:
api-management Developer Portal Implement Widgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-implement-widgets.md
-
Title: Implement widgets in the developer portal-
-description: Learn how to implement widgets that consume data from external APIs and display it on the API Management developer portal.
-- Previously updated : 04/15/2021----
-# Implement widgets in the developer portal
-
-In this tutorial, you implement a widget that consumes data from an external API and displays it on the API Management developer portal.
-
-The widget will retrieve session descriptions from the sample [Conference API](https://conferenceapi.azurewebsites.net/?format=json). The session identifier will be set through a designated widget editor.
-
-To help you in the development process, refer to the completed widget located in the `examples` folder of the API Management developer portal [GitHub repository](https://github.com/Azure/api-management-developer-portal/): `/examples/widgets/conference-session`.
--
-## Prerequisites
-
-* Set up a [local environment](developer-portal-self-host.md#step-1-set-up-local-environment) for the latest release of the developer portal.
-
-* You should understand the [Paperbits widget anatomy](https://paperbits.io/wiki/widget-anatomy).
--
-## Copy the scaffold
-
-Use a `widget` scaffold from the `/scaffolds` folder as a starting point to build the new widget.
-
-1. Copy the folder `/scaffolds/widget` to `/community/widgets`.
-1. Rename the folder to `conference-session`.
-
-## Rename exported module classes
-
-Rename the exported module classes by replacing the `Widget` prefix with `ConferenceSession` and change the binding name to avoid name collision, in these files:
--- `widget.design.module.ts`--- `widget.publish.module.ts`--- `widget.runtime.module.ts`
-
-For example, in the `widget.design.module.ts` file, change `WidgetDesignModule` to `ConferenceSessionDesignModule`:
-
-```typescript
-export class WidgetDesignModule implements IInjectorModule {
- public register(injector: IInjector): void {
- injector.bind("widget", WidgetViewModel);
- injector.bind("widgetEditor", WidgetEditorViewModel);
-```
-to
-
-```typescript
-export class ConferenceSessionDesignModule implements IInjectorModule {
- public register(injector: IInjector): void {
- injector.bind("conferenceSession", WidgetViewModel);
- injector.bind("conferenceSessionEditor", WidgetEditorViewModel);
-```
-
-
-## Register the widget
-
-Register the widget's modules in the portal's root modules by adding the following lines in the respective files:
-
-1. `src/apim.design.module.ts` - a module that registers design-time dependencies.
-
- ```typescript
- import { ConferenceSessionDesignModule } from "../community/widgets/conference-session/widget.design.module";
-
- ...
- injector.bindModule(new ConferenceSessionDesignModule());
- ```
-1. `src/apim.publish.module.ts` - a module that registers publish-time dependencies.
-
- ```typescript
- import { ConferenceSessionPublishModule } from "../community/widgets/conference-session/widget.publish.module";
-
- ...
-
- injector.bindModule(new ConferenceSessionPublishModule());
- ```
-
-1. `src/apim.runtime.module.ts` - runtime dependencies.
-
- ```typescript
- import { ConferenceSessionRuntimeModule } from "../community/widgets/conference-session/widget.runtime.module";
-
- ...
-
- injector.bindModule(new ConferenceSessionRuntimeModule());
- ```
-
-## Place the widget in the portal
-
-Now you're ready to plug in the duplicated scaffold and use it in developer portal.
-
-1. Run the `npm start` command.
-
-1. When the application loads, place the new widget on a page. You can find it under the name `Your widget` in the `Community` category in the widget selector.
-
- :::image type="content" source="media/developer-portal-implement-widgets/widget-selector.png" alt-text="Screenshot of widget selector":::
-
-1. Save the page by pressing **Ctrl**+**S** (or **⌘**+**S** on macOS).
-
- > [!NOTE]
- > In design-time, you can still interact with the website by holding the **Ctrl** (or **⌘**) key.
-
-## Add custom properties
-
-For the widget to fetch session descriptions, it needs to be aware of the session identifier. Add the `Session ID` property to the respective interfaces and classes:
-
-1. `widgetContract.ts` - data contract (data layer) defining how the widget configuration is persisted.
-
- ```typescript
- export interface WidgetContract extends Contract {
- sessionNumber: string;
- }
- ```
-
-1. `widgetModel.ts` - model (business layer) - a primary representation of the widget in the system. It's updated by editors and rendered by the presentation layer.
-
- ```typescript
- export class WidgetModel {
- public sessionNumber: string;
- }
- ```
-
-1. `ko/widgetViewModel.ts` - viewmodel (presentation layer) - a UI framework-specific object that developer portal renders with the HTML template.
-
- > [!NOTE]
- > You don't need to change anything in this file.
-
-## Configure binders
-
-Enable the flow of the `sessionNumber` from the data source to the widget presentation. Edit the `ModelBinder` and `ViewModelBinder` entities:
-
-1. `widgetModelBinder.ts` helps to prepare the model using data described in the contract.
-
- ```typescript
- export class WidgetModelBinder implements IModelBinder<WidgetModel> {
- public async contractToModel(contract: WidgetContract): Promise<WidgetModel> {
- model.sessionNumber = contract.sessionNumber || "107"; // 107 is the default session id
- ...
- }
-
- public modelToContract(model: WidgetModel): Contract {
- const contract: WidgetContract = {
- sessionNumber: model.sessionNumber
- ...
- };
- ...
- }
- }
- ```
-
-1. `ko/widgetViewModelBinder.ts` knows how developer portal needs to present the model (as a viewmodel) in a specific UI framework.
-
- ```typescript
- ...
- public async updateViewModel(model: WidgetModel, viewModel: WidgetViewModel): Promise<void> {
- viewModel.runtimeConfig(JSON.stringify({
- sessionNumber: model.sessionNumber
- }));
- }
- }
- ...
- ```
-
-## Adjust design-time widget template
-
-The components of each scope run independently. They have separate dependency injection containers, their own configuration, lifecycle, etc. They may even be powered by different UI frameworks (in this example it is Knockout JS).
-
-From the design-time perspective, any runtime component is just an HTML tag with certain attributes and/or content. Configuration if necessary is passed with plain markup. In simple cases, like in this example, the parameter is passed in the attribute. If the configuration is more complex, you could use an identifier of the required setting(s) fetched by a designated configuration provider (for example, `ISettingsProvider`).
-
-1. Update the `ko/widgetView.html` file:
-
- ```html
- <widget-runtime data-bind="attr: { params: runtimeConfig }"></widget-runtime>
- ```
-
- When developer portal runs the `attr` binding in *design-time* or *publish-time*, the resulting HTML is:
-
- ```html
- <widget-runtime params="{ sessionNumber: 107 }"></widget-runtime>
- ```
-
- Then, in runtime, `widget-runtime` component will read `sessionNumber` and use it in the initialization code (see below).
-
-1. Update the `widgetHandlers.ts` file to assign the session ID on creation:
-
- ```typescript
- ...
- createModel: async () => {
- var model = new WidgetModel();
- model.sessionNumber = "107";
- return model;
- }
- ...
- ```
-
-## Revise runtime view model
-
-Runtime components are the code running in the website itself. For example, in the API Management developer portal, they are all the scripts behind dynamic components (for example, *API details*, *API console*), handling operations such as code sample generation, sending requests, etc.
-
-Your runtime component's view model needs to have the following methods and properties:
--- The `sessionNumber` property (marked with `Param` decorator) used as a component input parameter passed from outside (the markup generated in design-time; see the previous step).-- The `sessionDescription` property bound to the widget template (see `widget-runtime.html` later in this article).-- The `initialize` method (with `OnMounted` decorator) invoked after the widget is created and all its parameters are assigned. It's a good place to read the `sessionNumber` and invoke the API using the `HttpClient`. The `HttpClient` is a dependency injected by the IoC (Inversion of Control) container.--- First, developer portal creates the widget and assigns all its parameters. Then it invokes the `initialize` method.-
- ```typescript
- ...
- import * as ko from "knockout";
- import { Component, RuntimeComponent, OnMounted, OnDestroyed, Param } from "@paperbits/common/ko/decorators";
- import { HttpClient, HttpRequest } from "@paperbits/common/http";
- ...
-
- export class WidgetRuntime {
- public readonly sessionDescription: ko.Observable<string>;
-
- constructor(private readonly httpClient: HttpClient) {
- ...
- this.sessionNumber = ko.observable();
- this.sessionDescription = ko.observable();
- ...
- }
-
- @Param()
- public readonly sessionNumber: ko.Observable<string>;
-
- @OnMounted()
- public async initialize(): Promise<void> {
- ...
- const sessionNumber = this.sessionNumber();
-
- const request: HttpRequest = {
- url: `https://conferenceapi.azurewebsites.net/session/${sessionNumber}`,
- method: "GET"
- };
-
- const response = await this.httpClient.send<string>(request);
- const sessionDescription = response.toText();
-
- this.sessionDescription(sessionDescription);
- ...
- }
- ...
- }
- ```
-
-## Tweak the widget template
-
-Update your widget to display the session description.
-
-Use a paragraph tag and a `markdown` (or `text`) binding in the `ko/runtime/widget-runtime.html` file to render the description:
-
-```html
-<p data-bind="markdown: sessionDescription"></p>
-```
-
-## Add the widget editor
-
-The widget is now configured to fetch the description of the session `107`. You specified `107` in the code as the default session. To check that you did everything right, run `npm start` and confirm that developer portal shows the description on the page.
-
-Now, carry out these steps to allow the user to set up the session ID through a widget editor:
-
-1. Update the `ko/widgetEditorViewModel.ts` file:
-
- ```typescript
- export class WidgetEditor implements WidgetEditor<WidgetModel> {
- public readonly sessionNumber: ko.Observable<string>;
-
- constructor() {
- this.sessionNumber = ko.observable();
- }
-
- @Param()
- public model: WidgetModel;
-
- @Event()
- public onChange: (model: WidgetModel) => void;
-
- @OnMounted()
- public async initialize(): Promise<void> {
- this.sessionNumber(this.model.sessionNumber);
- this.sessionNumber.subscribe(this.applyChanges);
- }
-
- private applyChanges(): void {
- this.model.sessionNumber = this.sessionNumber();
- this.onChange(this.model);
- }
- }
- ```
-
- The editor view model uses the same approach that you've seen previously, but there is a new property `onChange`, decorated with `@Event()`. It wires the callback to notify the listeners (in this case - a content editor) of changes to the model.
-
-1. Update the `ko/widgetEditorView.html` file:
-
- ```html
- <input type="text" class="form-control" data-bind="textInput: sessionNumber" />
- ```
-
-1. Run `npm start` again. You should be able to change `sessionNumber` in the widget editor. Change the ID to `108`, save the changes, and refresh the browser's tab. If you're experiencing problems, you may need to add the widget onto the page again.
-
- :::image type="content" source="media/developer-portal-implement-widgets/widget-editor.png" alt-text="Screenshot of widget editor":::
-
-## Rename the widget
-
-Change the widget name in the `constants.ts` file:
-
-```typescript
-...
-export const widgetName = "conference-session";
-export const widgetDisplayName = "Conference session";
-...
-```
-
-> [!NOTE]
-> If you're contributing the widget to the repository, the `widgetName` needs to be the same as its folder name and needs to be derived from the display name (lowercase and spaces replaced with dashes). The category should remain `Community`.
-
-## Next steps
--
-Learn more about the developer portal:
-
-- [Azure API Management developer portal overview](api-management-howto-developer-portal.md)
-
-- [Contribute widgets](developer-portal-widget-contribution-guidelines.md) - we welcome and encourage community contributions.
-
-- See [Use community widgets](developer-portal-use-community-widgets.md) to learn how to use widgets contributed by the community.
api-management Developer Portal Self Host https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-self-host.md
# Self-host the API Management developer portal
-This tutorial describes how to self-host the [API Management developer portal](api-management-howto-developer-portal.md). Self-hosting gives you flexibility to extend the developer portal with custom logic and widgets that dynamically customize pages on runtime. You can self-host multiple portals for your API Management instance, with different features. When you self-host a portal, you become its maintainer and you're responsible for its upgrades.
+This tutorial describes how to self-host the [API Management developer portal](api-management-howto-developer-portal.md). Self-hosting is one of several options to [extend the functionality](developer-portal-extend-custom-functionality.md) of the developer portal. For example, you can self-host multiple portals for your API Management instance, with different features. When you self-host a portal, you become its maintainer and you're responsible for its upgrades.
-The following steps show how to set up your local development environment, carry out changes in the developer portal, and publish and deploy them to an Azure storage account.
+> [!IMPORTANT]
+> Consider self-hosting the developer portal only when you need to modify the core of the developer portal's codebase. This option requires advanced configuration, including:
+> * Deployment to a hosting platform, optionally fronted by a solution such as a CDN for increased availability and performance
+> * Maintaining and managing hosting infrastructure
+> * Manual updates, including for security patches, which may require you to resolve code conflicts when upgrading the codebase
If you have already uploaded or modified media files in the managed portal, see [Move from managed to self-hosted](#move-from-managed-to-self-hosted-developer-portal), later in this article.
api-management Developer Portal Use Community Widgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-use-community-widgets.md
description: Learn about community widgets for the API Management developer portal and how to inject and use them in your code. Previously updated : 03/25/2021 Last updated : 08/18/2022 # Use community widgets in the developer portal
-All developers place their community-contributed widgets in the `/community/widgets/` folder of the API Management developer portal [GitHub repository](https://github.com/Azure/api-management-developer-portal). Each has been accepted by the developer portal team. You can use the widgets by injecting them into your [self-hosted version](developer-portal-self-host.md) of the portal. The managed version of the developer portal doesn't currently support community widgets.
+All developers place their community-contributed widgets in the `/community/widgets/` folder of the API Management developer portal [GitHub repository](https://github.com/Azure/api-management-developer-portal). Each has been accepted by the developer portal team. You can use the widgets by injecting them into your managed developer portal or a [self-hosted version](developer-portal-self-host.md) of the portal.
> [!NOTE]
> The developer portal team thoroughly inspects contributed widgets and their dependencies. However, the team can't guarantee it's safe to load the widgets. Use your own judgment when deciding to use a widget contributed by the community. Refer to our [widget contribution guidelines](developer-portal-widget-contribution-guidelines.md#contribution-guidelines) to learn about our preventive measures.
-## Inject and use external widgets
+## Inject and use external widget - managed portal
+
+For guidance on creating and using a development environment to scaffold and upload a custom widget, see [Create and upload custom widget](developer-portal-extend-custom-functionality.md#create-and-upload-custom-widget).
+
+## Inject and use external widget - self-hosted portal
1. Set up a [local environment](developer-portal-self-host.md#step-1-set-up-local-environment) for the latest release of the developer portal.
api-management Developer Portal Widget Contribution Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-widget-contribution-guidelines.md
description: Learn about recommended guidelines to follow when you contribute a widget to the API Management developer portal repository. Previously updated : 03/25/2021 Last updated : 08/18/2022
If you'd like to contribute a widget to the API Management developer portal [Git
1. Open a pull request to include your widget in the official repository.
-Your widget will inherit the repository's license. It will be available for [opt-in installation](developer-portal-use-community-widgets.md) in the self-hosted version of the portal. The developer portal team may decide to also include it in the managed version of the portal.
+Your widget will inherit the repository's license. It will be available for [opt-in installation](developer-portal-use-community-widgets.md) in either the managed developer portal or a [self-hosted version](developer-portal-self-host.md) of the portal.
-Refer to the [widget implementation](developer-portal-implement-widgets.md) tutorial for an example of how to develop your own widget.
+For an example of how to develop your own widget and upload it to your developer portal, see [Create and upload custom widget](developer-portal-extend-custom-functionality.md#create-and-upload-custom-widget).
## Contribution guidelines
This guidance is intended to ensure the safety and privacy of our customers and
- For more information about contributions, see the API Management developer portal [GitHub repository](https://github.com/Azure/api-management-developer-portal/).
-- See [Implement widgets](developer-portal-implement-widgets.md) to learn how to develop your own widget, step by step.
+- See [Extend the developer portal with custom features](developer-portal-extend-custom-functionality.md) to learn about options to add custom functionality to the developer portal.
- See [Use community widgets](developer-portal-use-community-widgets.md) to learn how to use widgets contributed by the community.
app-service App Service Configuration References https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-configuration-references.md
+
+ Title: Use App Configuration references (Preview)
+description: Learn how to set up Azure App Service and Azure Functions to use Azure App Configuration references. Make App Configuration key-values available to your application code without changing it.
+++ Last updated : 06/21/2022++++
+# Use App Configuration references for App Service and Azure Functions (preview)
+
+This topic shows you how to work with configuration data in your App Service or Azure Functions application without requiring any code changes. [Azure App Configuration](../azure-app-configuration/overview.md) is a service to centrally manage application configuration. Additionally, it's an effective tool for auditing your configuration values over time and across releases.
+
+## Granting your app access to App Configuration
+
+To get started with using App Configuration references in App Service, you'll first need an App Configuration store, and you'll need to give your app permission to access the configuration key-values in the store.
+
+1. Create an App Configuration store by following the [App Configuration quickstart](../azure-app-configuration/quickstart-dotnet-core-app.md#create-an-app-configuration-store).
+
+1. Create a [managed identity](overview-managed-identity.md) for your application.
+
+ App Configuration references will use the app's system assigned identity by default, but you can [specify a user-assigned identity](#access-app-configuration-store-with-a-user-assigned-identity).
+
+1. Enable the newly created identity to have the right set of access permissions on the App Configuration store. Update the [role assignments for your store](../azure-app-configuration/howto-integrate-azure-managed-service-identity.md#grant-access-to-app-configuration). You'll assign the `App Configuration Data Reader` role to this identity, scoped to the resource, as shown in the sketch below.
+
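A minimal Azure CLI sketch of that role assignment, assuming hypothetical resource names (`MyAppName`, `MyResourceGroupName`, `MyAppConfigStore`):

```azurecli-interactive
# Look up the app's system-assigned identity and the store's resource ID (placeholder names).
principalId=$(az webapp identity show --name MyAppName --resource-group MyResourceGroupName --query principalId -o tsv)
storeId=$(az appconfig show --name MyAppConfigStore --resource-group MyResourceGroupName --query id -o tsv)

# Grant the identity read access to the store's key-values.
az role assignment create --assignee $principalId --role "App Configuration Data Reader" --scope $storeId
```
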
+> [!NOTE]
+> App Configuration references do not yet support network-restricted configuration stores.
+
+### Access App Configuration Store with a user-assigned identity
+
+Some apps might need to reference configuration at creation time, when a system-assigned identity wouldn't yet be available. In these cases, a user-assigned identity can be created and given access to the App Configuration store in advance. Follow these steps to [create a user-assigned identity for the App Configuration store](../azure-app-configuration/overview-managed-identity.md#adding-a-user-assigned-identity).
+
+Once you have granted permissions to the user-assigned identity, follow these steps:
+
+1. [Assign the identity](./overview-managed-identity.md#add-a-user-assigned-identity) to your application if you haven't already.
+
+1. Configure the app to use this identity for App Configuration reference operations by setting the `keyVaultReferenceIdentity` property to the resource ID of the user-assigned identity. Though the property has `keyVault` in its name, the identity applies to App Configuration references as well.
+
+ ```azurecli-interactive
+ userAssignedIdentityResourceId=$(az identity show -g MyResourceGroupName -n MyUserAssignedIdentityName --query id -o tsv)
+ appResourceId=$(az webapp show -g MyResourceGroupName -n MyAppName --query id -o tsv)
+ az rest --method PATCH --uri "${appResourceId}?api-version=2021-01-01" --body "{'properties':{'keyVaultReferenceIdentity':'${userAssignedIdentityResourceId}'}}"
+ ```
+
+This configuration will apply to all references from this App.
+
+## Reference syntax
+
+An App Configuration reference is of the form `@Microsoft.AppConfiguration({referenceString})`, where `{referenceString}` is replaced as described below:
+
+> [!div class="mx-tdBreakAll"]
+> | Reference string parts | Description |
+> |-|-|
+> | Endpoint=_endpoint_; | **Endpoint** is a required part of the reference string. The value for **Endpoint** should be the URL of your App Configuration resource.|
+> | Key=_keyName_; | **Key** is a required part of the reference string. The value for **Key** should be the name of the key that you want to assign to the app setting.|
+> | Label=_label_ | The **Label** part is optional in the reference string. **Label** should be the value of the label for the key specified in **Key**.|
+
+For example, a complete reference with `Label` would look like the following:
+
+```
+@Microsoft.AppConfiguration(Endpoint=https://myAppConfigStore.azconfig.io; Key=myAppConfigKey; Label=myKeysLabel)
+```
+
+Alternatively without any `Label`:
+
+```
+@Microsoft.AppConfiguration(Endpoint=https://myAppConfigStore.azconfig.io; Key=myAppConfigKey)
+```
+
+Any configuration change to the app that results in a site restart causes an immediate refetch of all referenced key-values from the App Configuration store.
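
As a hedged example, assuming a manual restart triggers the same refetch, you can restart the app with the Azure CLI (names are placeholders):

```azurecli-interactive
az webapp restart --name MyAppName --resource-group MyResourceGroupName
```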
+
+## Source Application Settings from App Config
+
+App Configuration references can be used as values for [Application Settings](configure-common.md#configure-app-settings), allowing you to keep configuration data in App Configuration instead of the site config. Application Settings and App Configuration key-values are both securely encrypted at rest. If you need centralized configuration management capabilities, configuration data should go into App Configuration.
+
+To use an App Configuration reference for an [app setting](configure-common.md#configure-app-settings), set the reference as the value of the setting, as shown in the sketch below. Your app can then read the configuration value through its key as usual. No code changes are required.
+
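As a minimal sketch, assuming hypothetical app, store, and key names, you can set such a reference with the Azure CLI:

```azurecli-interactive
az webapp config appsettings set --name MyAppName --resource-group MyResourceGroupName --settings MySetting="@Microsoft.AppConfiguration(Endpoint=https://myAppConfigStore.azconfig.io; Key=myAppConfigKey)"
```
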
+> [!TIP]
+> Most application settings using App Configuration references should be marked as slot settings, as you should have separate stores or labels for each environment.
+
+> [!NOTE]
+> Azure App Configuration also supports its own format for storing [Key Vault references](../azure-app-configuration/use-key-vault-references-dotnet-core.md). If the value of an App Configuration reference is a Key Vault reference in the App Configuration store, the secret value won't yet be retrieved from Key Vault. To use secrets from Key Vault in App Service or Functions, see [Key Vault references in App Service](app-service-key-vault-references.md).
+
+### Considerations for Azure Files mounting
+
+Apps can use the `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` application setting to mount Azure Files as the file system. This setting has additional validation checks to ensure that the app can be properly started. The platform relies on having a content share within Azure Files, and it assumes a default name unless one is specified via the `WEBSITE_CONTENTSHARE` setting. For any requests that modify these settings, the platform will attempt to validate if this content share exists, and it will attempt to create it if not. If it can't locate or create the content share, the request is blocked.
+
+If you use App Configuration references for this setting, this validation check will fail by default, because the connection itself can't be resolved while processing the incoming request. To avoid this issue, you can skip the validation by setting `WEBSITE_SKIP_CONTENTSHARE_VALIDATION` to "1", as shown below. This setting will bypass all checks, and the content share won't be created for you. You should ensure it's created in advance.
+
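A hedged example of setting that flag with the Azure CLI (names are placeholders):

```azurecli-interactive
az webapp config appsettings set --name MyAppName --resource-group MyResourceGroupName --settings WEBSITE_SKIP_CONTENTSHARE_VALIDATION=1
```
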
+> [!CAUTION]
+> If you skip validation and either the connection string or content share are invalid, the app will be unable to start properly and will only serve HTTP 500 errors.
+
+As part of creating the site, it's also possible that attempted mounting of the content share could fail because managed identity permissions haven't propagated or the virtual network integration isn't set up. You can defer setting up Azure Files until later in the deployment template to accommodate the required setup. See [Azure Resource Manager deployment](#azure-resource-manager-deployment) to learn more. App Service will use a default file system until Azure Files is set up, and files aren't copied over, so make sure that no deployment attempts occur during the interim period before Azure Files is mounted.
+
+### Azure Resource Manager deployment
+
+When automating resource deployments through Azure Resource Manager templates, you may need to sequence your dependencies in a particular order to make this feature work. Of note, you'll need to define your application settings as their own resource, rather than using a `siteConfig` property in the site definition. This is because the site needs to be defined first so that the system-assigned identity is created with it and can be used in the access policy.
+
+Below is an example pseudo-template for a function app with App Configuration references:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "roleNameGuid": {
+ "type": "string",
+ "defaultValue": "[newGuid()]",
+ "metadata": {
+ "description": "A new GUID used to identify the role assignment"
+ }
+ }
+ },
+ "variables": {
+ "functionAppName": "DemoMBFunc",
+ "appConfigStoreName": "DemoMBAppConfig",
+ "resourcesRegion": "West US2",
+ "appConfigSku": "standard",
+ "FontNameKey": "FontName",
+ "FontColorKey": "FontColor",
+ "myLabel": "Test",
+ "App Configuration Data Reader": "[concat('/subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Authorization/roleDefinitions/', '516239f1-63e1-4d78-a4de-a74fb236a071')]"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Web/sites",
+ "name": "[variables('functionAppName')]",
+ "apiVersion": "2021-03-01",
+ "location": "[variables('resourcesRegion')]",
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ //...
+ "resources": [
+ {
+ "type": "config",
+ "name": "appsettings",
+ "apiVersion": "2021-03-01",
+ //...
+ "dependsOn": [
+ "[resourceId('Microsoft.Web/sites', variables('functionAppName'))]",
+ "[resourceId('Microsoft.AppConfiguration/configurationStores', variables('appConfigStoreName'))]"
+ ],
+ "properties": {
+ "WEBSITE_FONTNAME": "[concat('@Microsoft.AppConfiguration(Endpoint=', reference(resourceId('Microsoft.AppConfiguration/configurationStores', variables('appConfigStoreName'))).endpoint,'; Key=',variables('FontNameKey'),'; Label=',variables('myLabel'), ')')]",
+ "WEBSITE_FONTCOLOR": "[concat('@Microsoft.AppConfiguration(Endpoint=', reference(resourceId('Microsoft.AppConfiguration/configurationStores', variables('appConfigStoreName'))).endpoint,'; Key=',variables('FontColorKey'),'; Label=',variables('myLabel'), ')')]",
+ "WEBSITE_ENABLE_SYNC_UPDATE_SITE": "true"
+ //...
+ }
+ },
+ {
+ "type": "sourcecontrols",
+ "name": "web",
+ "apiVersion": "2021-03-01",
+ //...
+ "dependsOn": [
+ "[resourceId('Microsoft.Web/sites', variables('functionAppName'))]",
+ "[resourceId('Microsoft.Web/sites/config', variables('functionAppName'), 'appsettings')]"
+ ]
+ }
+ ]
+ },
+ {
+ "type": "Microsoft.AppConfiguration/configurationStores",
+ "name": "[variables('appConfigStoreName')]",
+ "apiVersion": "2019-10-01",
+ "location": "[variables('resourcesRegion')]",
+ "sku": {
+ "name": "[variables('appConfigSku')]"
+ },
+ //...
+ "dependsOn": [
+ "[resourceId('Microsoft.Web/sites', variables('functionAppName'))]"
+ ],
+ "properties": {
+ },
+ "resources": [
+ {
+ "type": "keyValues",
+ "name": "[variables('FontNameKey')]",
+ "apiVersion": "2021-10-01-preview",
+ //...
+ "dependsOn": [
+ "[resourceId('Microsoft.AppConfiguration/configurationStores', variables('appConfigStoreName'))]"
+
+ ],
+ "properties": {
+ "value": "Calibri",
+ "contentType": "application/json"
+ }
+ },
+ {
+ "type": "keyValues",
+ "name": "[variables('FontColorKey')]",
+ "apiVersion": "2021-10-01-preview",
+ //...
+ "dependsOn": [
+ "[resourceId('Microsoft.AppConfiguration/configurationStores', variables('appConfigStoreName'))]"
+
+ ],
+ "properties": {
+ "value": "Blue",
+ "contentType": "application/json"
+ }
+ }
+ ]
+ },
+ {
+ "scope": "[resourceId('Microsoft.AppConfiguration/configurationStores', variables('appConfigStoreName'))]",
+ "type": "Microsoft.Authorization/roleAssignments",
+ "apiVersion": "2020-04-01-preview",
+ "name": "[parameters('roleNameGuid')]",
+ "properties": {
+ "roleDefinitionId": "[variables('App Configuration Data Reader')]",
+ "principalId": "[reference(resourceId('Microsoft.Web/sites/', variables('functionAppName')), '2020-12-01', 'Full').identity.principalId]",
+ "principalType": "ServicePrincipal"
+ }
+ }
+ ]
+}
+```
+
+> [!NOTE]
+> In this example, the source control deployment depends on the application settings. This is normally unsafe behavior, as the app setting update behaves asynchronously. However, because we have included the `WEBSITE_ENABLE_SYNC_UPDATE_SITE` application setting, the update is synchronous. This means that the source control deployment will only begin once the application settings have been fully updated. For more information about app settings, see [Environment variables and app settings in Azure App Service](reference-app-settings.md).
+
+## Troubleshooting App Configuration References
+
+If a reference isn't resolved properly, the reference string itself is used instead. For application settings, this means an environment variable is created whose value has the literal `@Microsoft.AppConfiguration(...)` syntax, which may cause an error because the application expects a configuration value instead.
+
+Most commonly, this error is due to a misconfiguration of the [App Configuration access policy](#granting-your-app-access-to-app-configuration). However, it could also be due to a syntax error in the reference, or the configuration key-value may not exist in the store. You can check the latter with the CLI, as shown below.
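
For instance, one quick check is to confirm that the key-value actually exists in the store with the Azure CLI (store, key, and label names are placeholders):

```azurecli-interactive
az appconfig kv show -n MyAppConfigStore --key myAppConfigKey --label myKeysLabel
```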
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Reference Key Vault secrets from App Service](./app-service-key-vault-references.md)
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java.md
Community support for Java 7 will terminate on July 29th, 2022 and [Java 7 will
If a supported Java runtime will be retired, Azure developers using the affected runtime will be given a deprecation notice at least six months before the runtime is retired.
-- [Reasons to move to Java 11](/java/openjdk/reasons-to-move-to-java-11?bc=%2fazure%2fdeveloper%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fdeveloper%2fjava%2ffundamentals%2ftoc.json)
-- [Java 7 migration guide](/java/openjdk/transition-from-java-7-to-java-8?bc=%2fazure%2fdeveloper%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fdeveloper%2fjava%2ffundamentals%2ftoc.json)
+- [Reasons to move to Java 11](/java/openjdk/reasons-to-move-to-java-11?bc=/azure/developer/breadcrumb/toc.json&toc=/azure/developer/java/fundamentals/toc.json)
+- [Java 7 migration guide](/java/openjdk/transition-from-java-7-to-java-8?bc=/azure/developer/breadcrumb/toc.json&toc=/azure/developer/java/fundamentals/toc.json)
### Local development
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview.md
Azure App Service is a fully managed platform as a service (PaaS) offering for d
* **API and mobile features** - App Service provides turn-key CORS support for RESTful API scenarios, and simplifies mobile app scenarios by enabling authentication, offline data sync, push notifications, and more.
* **Serverless code** - Run a code snippet or script on-demand without having to explicitly provision or manage infrastructure, and pay only for the compute time your code actually uses (see [Azure Functions](../azure-functions/index.yml)).
-Besides App Service, Azure offers other services that can be used for hosting websites and web applications. For most scenarios, App Service is the best choice. For microservice architecture, consider [Azure Spring Apps](../spring-apps/index.yml) or [Service Fabric](https://azure.microsoft.com/documentation/services/service-fabric). If you need more control over the VMs on which your code runs, consider [Azure Virtual Machines](https://azure.microsoft.com/documentation/services/virtual-machines/). For more information about how to choose between these Azure services, see [Azure App Service, Virtual Machines, Service Fabric, and Cloud Services comparison](/azure/architecture/guide/technology-choices/compute-decision-tree).
+Besides App Service, Azure offers other services that can be used for hosting websites and web applications. For most scenarios, App Service is the best choice. For microservice architecture, consider [Azure Spring Apps](../spring-apps/index.yml) or [Service Fabric](/azure/service-fabric). If you need more control over the VMs on which your code runs, consider [Azure Virtual Machines](/azure/virtual-machines/). For more information about how to choose between these Azure services, see [Azure App Service, Virtual Machines, Service Fabric, and Cloud Services comparison](/azure/architecture/guide/technology-choices/compute-decision-tree).
## App Service on Linux
app-service Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/17/2022 Last updated : 08/29/2022
compliant with the specific standard.
## Release notes
+### August 2022
+- **App Service apps should only be accessible over HTTPS**
+ - Update scope of policy to remove slots
+ - Creation of "App Service app slots should only be accessible over HTTPS" to monitor slots
+ - Add "Deny" effect
+ - Creation of "Configure App Service apps to only be accessible over HTTPS" for enforcement of policy
+- **App Service app slots should only be accessible over HTTPS**
+ - New policy created
+- **Configure App Service apps to only be accessible over HTTPS**
+ - New policy created
+- **Configure App Service app slots to only be accessible over HTTPS**
+ - New policy created

### July 2022

- Deprecation of the following policies:
application-gateway Application Gateway Ilb Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-ilb-arm.md
If you want to configure SSL offload, see [Configure an application gateway for
If you want more information about load balancing options in general, see:
-* [Azure Load Balancer](https://azure.microsoft.com/documentation/services/load-balancer/)
-* [Azure Traffic Manager](https://azure.microsoft.com/documentation/services/traffic-manager/)
+* [Azure Load Balancer](/azure/load-balancer/)
+* [Azure Traffic Manager](/azure/traffic-manager/)
applied-ai-services Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/label-tool.md
keywords: document processing
>[!TIP] >
-> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio ](https://formrecognizer.appliedai.azure.com/studio).
+> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
> * The v3.0 Studio supports any model trained with v2.1 labeled data.
> * You can refer to the [API migration guide](v3-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
> * *See* our [**REST API**](quickstarts/get-started-v3-sdk-rest-api.md) or [**C#**](quickstarts/get-started-v3-sdk-rest-api.md), [**Java**](quickstarts/get-started-v3-sdk-rest-api.md), [**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md), or [Python](quickstarts/get-started-v3-sdk-rest-api.md) SDK quickstarts to get started with the V3.0.
automanage Tutorial Create Assignment Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/tutorial-create-assignment-python.md
In this tutorial, you'll create a resource group and a virtual machine. You'll t
## Prerequisites

- [Python](https://www.python.org/downloads/)
-- [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli-windows?tabs=azure-cli) or [Azure PowerShell](https://docs.microsoft.com/powershell/azure/install-az-ps)
+- [Azure CLI](/cli/azure/install-azure-cli-windows?tabs=azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps)
## Create resources
automation Change Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/change-tracking.md
If you don't see your machine in query results, it hasn't recently checked in. T
If your machine shows up in the query results, verify the scope configuration. See [Targeting monitoring solutions in Azure Monitor](../../azure-monitor/insights/solution-targeting.md).
-For more troubleshooting of this issue, see [Issue: You are not seeing any Linux data](../../azure-monitor/agents/agent-linux-troubleshoot.md#issue-you-are-not-seeing-any-linux-data).
+For more troubleshooting of this issue, see [Issue: You are not seeing any Linux data](../../azure-monitor/agents/agent-linux-troubleshoot.md#issue-you-arent-seeing-any-linux-data).
##### Log Analytics agent for Linux not configured correctly
azure-app-configuration Concept App Configuration Event https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-app-configuration-event.md
Title: Reacting to Azure App Configuration key-value events
description: Use Azure Event Grid to subscribe to App Configuration events, which allow applications to react to changes in key-values without the need for complicated code. -+ Previously updated : 02/20/2020 Last updated : 08/30/2022
# Reacting to Azure App Configuration events
-Azure App Configuration events enable applications to react to changes in key-values. This is done without the need for complicated code or expensive and inefficient polling services. Instead, events are pushed through [Azure Event Grid](https://azure.microsoft.com/services/event-grid/) to subscribers such as [Azure Functions](https://azure.microsoft.com/services/functions/), [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/), or even to your own custom http listener. Critically, you only pay for what you use.
+Azure App Configuration events enable applications to react to changes in key-values. This is done without the need for complicated code or expensive and inefficient polling services. Instead, events are pushed through [Azure Event Grid](https://azure.microsoft.com/services/event-grid/) to subscribers, such as [Azure Functions](https://azure.microsoft.com/services/functions/), [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/), or even to your own custom HTTP listener. Critically, you only pay for what you use.
-Azure App Configuration events are sent to the Azure Event Grid, which provides reliable delivery services to your applications through rich retry policies and dead-letter delivery. To learn more, see [Event Grid message delivery and retry](../event-grid/delivery-and-retry.md).
+Azure App Configuration events are sent to the Azure Event Grid, which provides reliable delivery services to your applications through rich retry policies and dead-letter delivery. For more information, see [Event Grid message delivery and retry](../event-grid/delivery-and-retry.md).
Common App Configuration event scenarios include refreshing application configuration, triggering deployments, or any configuration-oriented workflow. When changes are infrequent, but your scenario requires immediate responsiveness, event-based architecture can be especially efficient.
-Take a look at [Use Event Grid for data change notifications](./howto-app-configuration-event.md) for a quick example.
+Take a look at [Use Event Grid for data change notifications](./howto-app-configuration-event.md) for a quick example.
-![Event Grid Model](./media/event-grid-functional-model.png)
## Available Azure App Configuration events
-Event grid uses [event subscriptions](../event-grid/concepts.md#event-subscriptions) to route event messages to subscribers. Azure App Configuration event subscriptions can include two types of events:
+
+Event Grid uses [event subscriptions](../event-grid/concepts.md#event-subscriptions) to route event messages to subscribers. Azure App Configuration event subscriptions can include two types of events:
> |Event Name|Description|
> |-|--|
-> |`Microsoft.AppConfiguration.KeyValueModified`|Fired when a key-value is created or replaced|
-> |`Microsoft.AppConfiguration.KeyValueDeleted`|Fired when a key-value is deleted|
+> |`Microsoft.AppConfiguration.KeyValueModified`|Fired when a key-value is created or replaced.|
+> |`Microsoft.AppConfiguration.KeyValueDeleted`|Fired when a key-value is deleted.|
## Event schema
-Azure App Configuration events contain all the information you need to respond to changes in your data. You can identify an App Configuration event because the eventType property starts with "Microsoft.AppConfiguration". Additional information about the usage of Event Grid event properties is documented in [Event Grid event schema](../event-grid/event-schema.md).
+
+Azure App Configuration events contain all the information you need to respond to changes in your data. You can identify an App Configuration event because the `eventType` property starts with `Microsoft.AppConfiguration`. Additional information about the usage of Event Grid event properties is documented in the [Event Grid event schema](../event-grid/event-schema.md).
> |Property|Type|Description|
> |-|-|-|
-> |topic|string|Full Azure Resource Manager id of the App Configuration that emits the event.|
-> |subject|string|The URI of the key-value that is the subject of the event.|
-> |eventTime|string|The date/time that the event was generated, in ISO 8601 format.|
-> |eventType|string|"Microsoft.AppConfiguration.KeyValueModified" or "Microsoft.AppConfiguration.KeyValueDeleted".|
+> |topic|string|Full Azure Resource Manager ID of the App Configuration that emits the event.|
+> |subject|string|The URI of the key-value that's the subject of the event.|
+> |eventTime|string|The date/time that the event was generated in ISO 8601 format.|
+> |eventType|string|`Microsoft.AppConfiguration.KeyValueModified` or `Microsoft.AppConfiguration.KeyValueDeleted`.|
> |Id|string|A unique identifier of this event.|
> |dataVersion|string|The schema version of the data object.|
> |metadataVersion|string|The schema version of top-level properties.|
-> |data|object|Collection of Azure App Configuration specific event data|
+> |data|object|Collection of Azure App Configuration specific event data.|
> |data.key|string|The key of the key-value that was modified or deleted.| > |data.label|string|The label, if any, of the key-value that was modified or deleted.|
-> |data.etag|string|For `KeyValueModified` the etag of the new key-value. For `KeyValueDeleted` the etag of the key-value that was deleted.|
+> |data.etag|string|For `KeyValueModified`, the etag of the new key-value. For `KeyValueDeleted`, the etag of the key-value that was deleted.|
+
+Here's an example of a `KeyValueModified` event:
-Here is an example of a KeyValueModified event:
```json [{ "id": "84e17ea4-66db-4b54-8050-df8f7763f87b",
Here is an example of a KeyValueModified event:
For more information, see [Azure App Configuration events schema](../event-grid/event-schema-app-configuration.md).

## Practices for consuming events
+
Applications that handle App Configuration events should follow these recommended practices:

> [!div class="checklist"]
-> * Multiple subscriptions can be configured to route events to the same event handler, so do not assume events are from a particular source. Instead, check the topic of the message to ensure the App Configuration instance sending the event.
-> * Check the eventType and do not assume that all events you receive will be the types you expect.
-> * Use the etag fields to understand if your information about objects is still up-to-date.
+> * Multiple subscriptions can be configured to route events to the same event handler, so don't assume events are from a particular source. Instead, check the topic of the message to ensure that the App Configuration instance is sending the event.
+> * Check the `eventType`, and don't assume that all events you receive will be the types you expect.
+> * Use the `etag` fields to understand if your information about objects is still up-to-date.
> * Use the sequencer fields to understand the order of events on any particular object.
> * Use the subject field to access the key-value that was modified.
-
## Next steps
-Learn more about Event Grid and give Azure App Configuration events a try:
+To learn more about Event Grid and to give Azure App Configuration events a try, see:
+
+> [!div class="nextstepaction"]
+> [About Event Grid](../event-grid/overview.md)
-- [About Event Grid](../event-grid/overview.md)
-- [How to use Event Grid for data change notifications](./howto-app-configuration-event.md)
+> [!div class="nextstepaction"]
+> [How to use Event Grid for data change notifications](./howto-app-configuration-event.md)
azure-app-configuration Concept Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-customer-managed-keys.md
Title: Use customer-managed keys to encrypt your configuration data
description: Encrypt your configuration data using customer-managed keys Previously updated : 07/28/2020 Last updated : 08/30/2022+ # Use customer-managed keys to encrypt your App Configuration data
-Azure App Configuration [encrypts sensitive information at rest](../security/fundamentals/encryption-atrest.md). The use of customer-managed keys provides enhanced data protection by allowing you to manage your encryption keys. When managed key encryption is used, all sensitive information in App Configuration is encrypted with a user-provided Azure Key Vault key. This provides the ability to rotate the encryption key on demand. It also provides the ability to revoke Azure App Configuration's access to sensitive information by revoking the App Configuration instance's access to the key.
-## Overview
-Azure App Configuration encrypts sensitive information at rest using a 256-bit AES encryption key provided by Microsoft. Every App Configuration instance has its own encryption key managed by the service and used to encrypt sensitive information. Sensitive information includes the values found in key-value pairs. When customer-managed key capability is enabled, App Configuration uses a managed identity assigned to the App Configuration instance to authenticate with Azure Active Directory. The managed identity then calls Azure Key Vault and wraps the App Configuration instance's encryption key. The wrapped encryption key is then stored and the unwrapped encryption key is cached within App Configuration for one hour. App Configuration refreshes the unwrapped version of the App Configuration instance's encryption key hourly. This ensures availability under normal operating conditions.
+Azure App Configuration [encrypts sensitive information at rest](../security/fundamentals/encryption-atrest.md). The use of customer-managed keys provides enhanced data protection by allowing you to manage your encryption keys. When managed key encryption is used, all sensitive information in App Configuration is encrypted with a user-provided Azure Key Vault key. This provides the ability to rotate the encryption key on demand. It also provides the ability to revoke Azure App Configuration's access to sensitive information by revoking the App Configuration instance's access to the key.
->[!IMPORTANT]
-> If the identity assigned to the App Configuration instance is no longer authorized to unwrap the instance's encryption key, or if the managed key is permanently deleted, then it will no longer be possible to decrypt sensitive information stored in the App Configuration instance. Using Azure Key Vault's [soft delete](../key-vault/general/soft-delete-overview.md) function mitigates the chance of accidentally deleting your encryption key.
+## Overview
-When users enable the customer managed key capability on their Azure App Configuration instance, they control the serviceΓÇÖs ability to access their sensitive information. The managed key serves as a root encryption key. A user can revoke their App Configuration instanceΓÇÖs access to their managed key by changing their key vault access policy. When this access is revoked, App Configuration will lose the ability to decrypt user data within one hour. At this point, the App Configuration instance will forbid all access attempts. This situation is recoverable by granting the service access to the managed key once again. Within one hour, App Configuration will be able to decrypt user data and operate under normal conditions.
+Azure App Configuration encrypts sensitive information at rest by using a 256-bit AES encryption key provided by Microsoft. Every App Configuration instance has its own encryption key managed by the service and used to encrypt sensitive information. Sensitive information includes the values found in key-value pairs. When the customer-managed key capability is enabled, App Configuration uses a managed identity assigned to the App Configuration instance to authenticate with Azure Active Directory. The managed identity then calls Azure Key Vault and wraps the App Configuration instance's encryption key. The wrapped encryption key is then stored, and the unwrapped encryption key is cached within App Configuration for one hour. Every hour, the App Configuration refreshes the unwrapped version of the App Configuration instance's encryption key. This process ensures availability under normal operating conditions.
->[!NOTE]
->All Azure App Configuration data is stored for up to 24 hours in an isolated backup. This includes the unwrapped encryption key. This data is not immediately available to the service or service team. In the event of an emergency restore, Azure App Configuration will re-revoke itself from the managed key data.
+> [!IMPORTANT]
+> If the identity assigned to the App Configuration instance is no longer authorized to unwrap the instance's encryption key, or if the managed key is permanently deleted, then it will no longer be possible to decrypt sensitive information stored in the App Configuration instance. By using Azure Key Vault's [soft delete](../key-vault/general/soft-delete-overview.md) function, you mitigate the chance of accidentally deleting your encryption key.
+
+When users enable the customer managed key capability on their Azure App Configuration instance, they control the service's ability to access their sensitive information. The managed key serves as a root encryption key. Users can revoke their App Configuration instance's access to their managed key by changing their key vault access policy. When this access is revoked, App Configuration will lose the ability to decrypt user data within one hour. At this point, the App Configuration instance will forbid all access attempts. This situation is recoverable by granting the service access to the managed key once again. Within one hour, App Configuration will be able to decrypt user data and operate under normal conditions.
+
+> [!NOTE]
+> All Azure App Configuration data is stored for up to 24 hours in an isolated backup. This includes the unwrapped encryption key. This data isn't immediately available to the service or service team. In the event of an emergency restore, Azure App Configuration will revoke itself again from the managed key data.
## Requirements
+
The following components are required to successfully enable the customer-managed key capability for Azure App Configuration:
-- Standard tier Azure App Configuration instance
-- Azure Key Vault with soft-delete and purge-protection features enabled
-- An RSA or RSA-HSM key within the Key Vault
- - The key must not be expired, it must be enabled, and it must have both wrap and unwrap capabilities enabled
-Once these resources are configured, two steps remain to allow Azure App Configuration to use the Key Vault key:
-1. Assign a managed identity to the Azure App Configuration instance
-2. Grant the identity `GET`, `WRAP`, and `UNWRAP` permissions in the target Key Vault's access policy.
+- Standard tier Azure App Configuration instance.
+- Azure Key Vault with soft-delete and purge-protection features enabled.
+- An RSA or RSA-HSM key within the Key Vault.
+ - The key must not be expired, it must be enabled, and it must have both wrap and unwrap capabilities enabled.
+
+After these resources are configured, use the following steps so that the Azure App Configuration can use the Key Vault key:
+
+1. Assign a managed identity to the Azure App Configuration instance.
+1. Grant the identity `GET`, `WRAP`, and `UNWRAP` permissions in the target Key Vault's access policy.
## Enable customer-managed key encryption for your Azure App Configuration instance
-To begin, you will need a properly configured Azure App Configuration instance. If you do not yet have an App Configuration instance available, follow one of these quickstarts to set one up:
+
+To begin, you'll need a properly configured Azure App Configuration instance. If you don't yet have an App Configuration instance available, follow one of these quickstarts to set one up:
- [Create an ASP.NET Core app with Azure App Configuration](quickstart-aspnet-core-app.md)
- [Create a .NET Core app with Azure App Configuration](quickstart-dotnet-core-app.md)
- [Create a .NET Framework app with Azure App Configuration](quickstart-dotnet-app.md)
- [Create a Java Spring app with Azure App Configuration](quickstart-java-spring-app.md)
+- [Create a JavaScript app with Azure App Configuration](quickstart-javascript.md)
+- [Create a Python app with Azure App Configuration](quickstart-python.md)
->[!TIP]
-> The Azure Cloud Shell is a free interactive shell that you can use to run the command line instructions in this article. It has common Azure tools preinstalled, including the .NET Core SDK. If you are logged in to your Azure subscription, launch your [Azure Cloud Shell](https://shell.azure.com) from shell.azure.com. You can learn more about Azure Cloud Shell by [reading our documentation](../cloud-shell/overview.md)
+> [!TIP]
+> The Azure Cloud Shell is a free interactive shell that you can use to run the command line instructions in this article. It has common Azure tools preinstalled, including the .NET Core SDK. If you are logged in to your Azure subscription, launch your [Azure Cloud Shell](https://shell.azure.com) from shell.azure.com. You can learn more about Azure Cloud Shell by [reading our documentation](../cloud-shell/overview.md).
### Create and configure an Azure Key Vault
-1. Create an Azure Key Vault using the Azure CLI. Note that both `vault-name` and `resource-group-name` are user-provided and must be unique. We use `contoso-vault` and `contoso-resource-group` in these examples.
+
+1. Create an Azure Key Vault by using the Azure CLI. Both `vault-name` and `resource-group-name` are user-provided and must be unique. We use `contoso-vault` and `contoso-resource-group` in these examples.
```azurecli az keyvault create --name contoso-vault --resource-group contoso-resource-group ```
-
+ 1. Enable soft-delete and purge-protection for the Key Vault. Substitute the names of the Key Vault (`contoso-vault`) and Resource Group (`contoso-resource-group`) created in step 1. ```azurecli az keyvault update --name contoso-vault --resource-group contoso-resource-group --enable-purge-protection --enable-soft-delete ```
-
+ 1. Create a Key Vault key. Provide a unique `key-name` for this key, and substitute the names of the Key Vault (`contoso-vault`) created in step 1. Specify whether you prefer `RSA` or `RSA-HSM` encryption. ```azurecli az keyvault key create --name key-name --kty {RSA or RSA-HSM} --vault-name contoso-vault ```
-
- The output from this command shows the key ID ("kid") for the generated key. Make a note of the key ID to use later in this exercise. The key ID has the form: `https://{my key vault}.vault.azure.net/keys/{key-name}/{Key version}`. The key ID has three important components:
+
+ The output from this command shows the key ID ("kid") for the generated key. Make a note of the key ID to use later in this exercise. The key ID has the form: `https://{my key vault}.vault.azure.net/keys/{key-name}/{Key version}`. The key ID has three important components:
    1. Key Vault URI: `https://{my key vault}.vault.azure.net`
    1. Key Vault key name: {Key Name}
    1. Key Vault key version: {Key version}
-1. Create a system assigned managed identity using the Azure CLI, substituting the name of your App Configuration instance and resource group used in the previous steps. The managed identity will be used to access the managed key. We use `contoso-app-config` to illustrate the name of an App Configuration instance:
-
+1. Create a system-assigned managed identity by using the Azure CLI, substituting the name of your App Configuration instance and resource group used in the previous steps. The managed identity will be used to access the managed key. We use `contoso-app-config` to illustrate the name of an App Configuration instance:
+ ```azurecli az appconfig identity assign --name contoso-app-config --resource-group contoso-resource-group --identities [system] ```
-
- The output of this command includes the principal ID ("principalId") and tenant ID ("tenandId") of the system assigned identity. These IDs will be used to grant the identity access to the managed key.
+
+   The output of this command includes the principal ID (`"principalId"`) and tenant ID (`"tenantId"`) of the system-assigned identity. These IDs will be used to grant the identity access to the managed key.
```json {
To begin, you will need a properly configured Azure App Configuration instance.
} ```
-1. The managed identity of the Azure App Configuration instance needs access to the key to perform key validation, encryption, and decryption. The specific set of actions to which it needs access includes: `GET`, `WRAP`, and `UNWRAP` for keys. Granting the access requires the principal ID of the App Configuration instance's managed identity. This value was obtained in the previous step. It is shown below as `contoso-principalId`. Grant permission to the managed key using the command line:
+1. The managed identity of the Azure App Configuration instance needs access to the key to perform key validation, encryption, and decryption. The specific set of actions to which it needs access includes: `GET`, `WRAP`, and `UNWRAP` for keys. Granting access requires the principal ID of the App Configuration instance's managed identity. This value was obtained in the previous step. It's shown below as `contoso-principalId`. Grant permission to the managed key by using the command line:
```azurecli az keyvault set-policy -n contoso-vault --object-id contoso-principalId --key-permissions get wrapKey unwrapKey ```
-1. Once the Azure App Configuration instance can access the managed key, we can enable the customer-managed key capability in the service using the Azure CLI. Recall the following properties recorded during the key creation steps: `key name` `key vault URI`.
+1. After the Azure App Configuration instance can access the managed key, we can enable the customer-managed key capability in the service by using the Azure CLI. Recall the following properties recorded during the key creation steps: `key name` and `key vault URI`.
```azurecli az appconfig update -g contoso-resource-group -n contoso-app-config --encryption-key-name key-name --encryption-key-version key-version --encryption-key-vault key-vault-Uri
To begin, you will need a properly configured Azure App Configuration instance.
Your Azure App Configuration instance is now configured to use a customer-managed key stored in Azure Key Vault.

## Next Steps
-In this article, you configured your Azure App Configuration instance to use a customer-managed key for encryption. Learn how to [integrate your service with Azure Managed Identities](howto-integrate-azure-managed-service-identity.md).
+
+In this article, you configured your Azure App Configuration instance to use a customer-managed key for encryption. To learn more about how to integrate your app service with Azure managed identities, continue to the next step.
+
+> [!div class="nextstepaction"]
+> [Integrate your service with Azure Managed Identities](./howto-integrate-azure-managed-service-identity.md)
azure-app-configuration Howto Leverage Json Content Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-leverage-json-content-type.md
ms.assetid:
ms.devlang: azurecli Previously updated : 08/03/2020 Last updated : 08/24/2022+ #Customer intent: I want to store JSON key-values in App Configuration store without losing the data type of each setting.
-# Leverage content-type to store JSON key-values in App Configuration
-
-Data is stored in App Configuration as key-values, where values are treated as the string type by default. However, you can specify a custom type by leveraging the content-type property associated with each key-value, so that you can preserve the original type of your data or have your application behave differently based on content-type.
+# Use content type to store JSON key-values in App Configuration
+Data is stored in App Configuration as key-values, where values are treated as the string type by default. However, you can specify a custom type by using the content type property associated with each key-value. This process preserves the original type of your data or makes your application behave differently based on content type.
## Overview
-In App Configuration, you can use the JSON media type as the content-type of your key-values to avail benefits like:
+In App Configuration, you can use the JSON media type as the content type of your key-values to avail the following benefits:
+ - **Simpler data management**: Managing key-values, like arrays, will become a lot easier in the Azure portal. - **Enhanced data export**: Primitive types, arrays, and JSON objects will be preserved during data export.-- **Native support with App Configuration provider**: Key-values with JSON content-type will work fine when consumed by App Configuration provider libraries in your applications.
+- **Native support with App Configuration provider**: Key-values with JSON content type will work fine when consumed by App Configuration provider libraries in your applications.
-#### Valid JSON content-type
+### Valid JSON content type
-Media types, as defined [here](https://www.iana.org/assignments/media-types/media-types.xhtml), can be assigned to the content-type associated with each key-value.
-A media type consists of a type and a subtype. If the type is `application` and the subtype (or suffix) is `json`, the media type will be considered a valid JSON content-type.
-Some examples of valid JSON content-types are:
+Media types, as defined [here](https://www.iana.org/assignments/media-types/media-types.xhtml), can be assigned to the content type associated with each key-value.
+A media type consists of a type and a subtype. If the type is `application` and the subtype (or suffix) is `json`, the media type will be considered a valid JSON content type.
+Some examples of valid JSON content types are:
-- application/json
-- application/activity+json
-- application/vnd.foobar+json;charset=utf-8
+- `application/json`
+- `application/activity+json`
+- `application/vnd.foobar+json;charset=utf-8`
-#### Valid JSON values
+### Valid JSON values
-When a key-value has JSON content-type, its value must be in valid JSON format for clients to process it correctly. Otherwise, clients may fail or fall back and treat it as string format.
+When a key-value has a JSON content type, its value must be in valid JSON format for clients to process it correctly. Otherwise, clients might fail or fall back and treat it as string format.
Some examples of valid JSON values are:

-- "John Doe"
-- 723
-- false
-- null
-- "2020-01-01T12:34:56.789Z"
-- [1, 2, 3, 4]
-- {"ObjectSetting":{"Targeting":{"Default":true,"Level":"Information"}}}
+- `"John Doe"`
+- `723`
+- `false`
+- `null`
+- `"2020-01-01T12:34:56.789Z"`
+- `[1, 2, 3, 4]`
+- `{"ObjectSetting":{"Targeting":{"Default":true,"Level":"Information"}}}`
> [!NOTE]
-> For the rest of this article, any key-value in App Configuration that has a valid JSON content-type and a valid JSON value will be referred to as **JSON key-value**.
+> For the rest of this article, any key-value in App Configuration that has a valid JSON content type and a valid JSON value will be referred to as **JSON key-value**.
In this tutorial, you'll learn how to: > [!div class="checklist"]
In this tutorial, you'll learn how to:
> * Export JSON key-values to a JSON file. > * Consume JSON key-values in your applications. - [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] [!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)]
In this tutorial, you'll learn how to:
[!INCLUDE [azure-app-configuration-create](../../includes/azure-app-configuration-create.md)] - ## Create JSON key-values in App Configuration
-JSON key-values can be created using Azure portal, Azure CLI or by importing from a JSON file. In this section, you will find instructions on creating the same JSON key-values using all three methods.
+JSON key-values can be created using the Azure portal, the Azure CLI, or by importing from a JSON file. In this section, you'll find instructions on creating the same JSON key-values using all three methods.
### Create JSON key-values using Azure portal
az appconfig kv set -n $appConfigName --content-type application/json --key Sett
``` > [!IMPORTANT]
-> If you are using Azure CLI or Azure Cloud Shell to create JSON key-values, the value provided must be an escaped JSON string.
+> If you're using Azure CLI or Azure Cloud Shell to create JSON key-values, the value provided must be an escaped JSON string.
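For example, the following sketch stores a JSON string value under a hypothetical key name; the inner quotation marks are escaped so that the value remains valid JSON:

```azurecli
# Store an escaped JSON string value (hypothetical store and key names)
az appconfig kv set -n contoso-app-config --key Settings:Greeting --content-type application/json --value "\"Hello World\"" --yes
```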
### Import JSON key-values from a file
-Create a JSON file called `Import.json` with the following content and import as key-values into App Configuration:
+Create a JSON file called `Import.json` with the following content and import it as key-values into App Configuration:
```json {
Create a JSON file called `Import.json` with the following content and import as
az appconfig kv import -s file --format json --path "~/Import.json" --content-type "application/json" --separator : --depth 2 ```
-> [!Note]
-> The `--depth` argument is used for flattening hierarchical data from a file into key-value pairs. In this tutorial, depth is specified for demonstrating that you can also store JSON objects as values in App Configuration. If depth is not specified, JSON objects will be flattened to the deepest level by default.
+> [!NOTE]
+> The `--depth` argument is used for flattening hierarchical data from a file into key-value pairs. In this tutorial, depth is specified for demonstrating that you can also store JSON objects as values in App Configuration. If depth isn't specified, JSON objects will be flattened to the deepest level by default.
The JSON key-values you created should look like this in App Configuration:
-![Config store containing JSON key-values](./media/create-json-settings.png)
- ## Export JSON key-values to a file
-One of the major benefits of using JSON key-values is the ability to preserve the original data type of your values while exporting. If a key-value in App Configuration doesn't have JSON content-type, its value will be treated as string.
+One of the major benefits of using JSON key-values is the ability to preserve the original data type of your values while exporting. If a key-value in App Configuration doesn't have JSON content type, its value will be treated as a string.
-Consider these key-values without JSON content-type:
+Consider these key-values without JSON content type:
| Key | Value | Content Type |
|-----|-------|--------------|
When you export these key-values to a JSON file, the values will be exported as
} ```
-However, when you export JSON key-values to a file, all values will preserve their original data type. To verify this, export key-values from your App Configuration to a JSON file. You'll see that the exported file has the same contents as the `Import.json` file you previously imported.
+However, when you export JSON key-values to a file, all values will preserve their original data type. To verify this process, export key-values from your App Configuration to a JSON file. You'll see that the exported file has the same contents as the `Import.json` file you previously imported.
```azurecli-interactive az appconfig kv export -d file --format json --path "~/Export.json" --separator : ``` > [!NOTE]
-> If your App Configuration store has some key-values without JSON content-type, they will also be exported to the same file in string format.
-
+> If your App Configuration store has some key-values without JSON content type, they will also be exported to the same file in string format.
## Consuming JSON key-values in applications
-The easiest way to consume JSON key-values in your application is through App Configuration provider libraries. With the provider libraries, you don't need to implement special handling of JSON key-values in your application. They will be parsed and converted to match the native configuration of your application.
+The easiest way to consume JSON key-values in your application is through App Configuration provider libraries. With the provider libraries, you don't need to implement special handling of JSON key-values in your application. They'll be parsed and converted to match the native configuration of your application.
For example, if you have the following key-value in App Configuration:
Your .NET application configuration will have the following key-values:
| Settings:FontSize | 24 | | Settings:UseDefaultRouting | false |
-You may access the new keys directly or you may choose to [bind configuration values to instances of .NET objects](/aspnet/core/fundamentals/configuration/#bind-hierarchical-configuration-data-using-the-options-pattern).
-
+You might access the new keys directly or you might choose to [bind configuration values to instances of .NET objects](/aspnet/core/fundamentals/configuration/#bind-hierarchical-configuration-data-using-the-options-pattern).
-> [!Important]
-> Native support for JSON key-values is available in .NET configuration provider version 4.0.0 (or later). See [*Next steps*](#next-steps) section for more details.
-
-If you are using the SDK or REST API to read key-values from App Configuration, based on the content-type, your application is responsible for parsing the value of a JSON key-value.
+> [!IMPORTANT]
+> Native support for JSON key-values is available in .NET configuration provider version 4.0.0 (or later). For more information, see the [Next steps](#next-steps) section.
+If you're using the SDK or REST API to read key-values from App Configuration, based on the content type, your application is responsible for parsing the value of a JSON key-value.
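For example, reading a key-value with the Azure CLI returns both the value and its content type, which your code can inspect before deciding how to parse (a sketch; the store and key names are hypothetical):

```azurecli
# The output includes "contentType" and "value" fields
az appconfig kv show -n contoso-app-config --key Settings:Greeting
```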
## Clean up resources
If you are using the SDK or REST API to read key-values from App Configuration,
Now that you know how to work with JSON key-values in your App Configuration store, create an application for consuming these key-values:
-* [ASP.NET Core quickstart](./quickstart-aspnet-core-app.md)
- * Prerequisite: [Microsoft.Azure.AppConfiguration.AspNetCore](https://www.nuget.org/packages/Microsoft.Azure.AppConfiguration.AspNetCore) package v4.0.0 or later.
+> [!div class="nextstepaction"]
+> [ASP.NET Core quickstart](./quickstart-aspnet-core-app.md)
-* [.NET Core quickstart](./quickstart-dotnet-core-app.md)
- * Prerequisite: [Microsoft.Extensions.Configuration.AzureAppConfiguration](https://www.nuget.org/packages/Microsoft.Extensions.Configuration.AzureAppConfiguration) package v4.0.0 or later.
+> [!div class="nextstepaction"]
+> [.NET Core quickstart](./quickstart-dotnet-core-app.md)
azure-app-configuration Integrate Ci Cd Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/integrate-ci-cd-pipeline.md
- Previously updated : 04/19/2020-+ Last updated : 08/30/2022+ # Customer intent: I want to use Azure App Configuration data in my CI/CD pipeline.
If you have an Azure DevOps Pipeline, you can fetch key-values from App Configur
## Deploy App Configuration data with your application
-Your application may fail to run if it depends on Azure App Configuration and cannot reach it. Enhance the resiliency of your application by packaging configuration data into a file that's deployed with the application and loaded locally during application startup. This approach guarantees that your application has default setting values on startup. These values are overwritten by any newer changes in an App Configuration store when it's available.
+Your application might fail to run if it depends on Azure App Configuration and can't reach it. Enhance the resiliency of your application by packaging configuration data into a file that's deployed with the application and loaded locally during application startup. This approach guarantees that your application has default setting values on startup. These values are overwritten by any newer changes in an App Configuration store when it's available.
Using the [Export](./howto-import-export-data.md#export-data) function of Azure App Configuration, you can automate the process of retrieving current configuration data as a single file. You can then embed this file in a build or deployment step in your continuous integration and continuous deployment (CI/CD) pipeline.
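As a rough sketch, a build step could run an export like the following before packaging the application (the output path is an example; the project file later in this article uses the same options):

```azurecli
# Export current key-values to a local JSON file for bundling with the app
az appconfig kv export -d file --format json --path ./azureappconfig.json --separator : --connection-string $ConnectionString --yes
```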
You can use any code editor to do the steps in this tutorial. [Visual Studio Cod
If you build locally, download and install the [Azure CLI](/cli/azure/install-azure-cli) if you haven't already.
-To do a cloud build, with Azure DevOps for example, make sure the [Azure CLI](/cli/azure/install-azure-cli) is installed in your build system.
- ### Export an App Configuration store 1. Open your *.csproj* file, and add the following script:
To do a cloud build, with Azure DevOps for example, make sure the [Azure CLI](/c
<Exec WorkingDirectory="$(MSBuildProjectDirectory)" Condition="$(ConnectionString) != ''" Command="az appconfig kv export -d file --path $(OutDir)\azureappconfig.json --format json --separator : --connection-string $(ConnectionString)" /> </Target> ```
-1. Open *Program.cs*, and update the `CreateWebHostBuilder` method to use the exported JSON file by calling the `config.AddJsonFile()` method. Add the `System.Reflection` namespace as well.
+
+1. Open *Program.cs*, and update the `CreateWebHostBuilder` method to use the exported JSON file by calling the `config.AddJsonFile()` method. Add the `System.Reflection` namespace as well.
```csharp public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
To do a cloud build, with Azure DevOps for example, make sure the [Azure CLI](/c
### Build and run the app locally
-1. Set an environment variable named **ConnectionString**, and set it to the access key to your App Configuration store.
- If you use the Windows command prompt, run the following command and restart the command prompt to allow the change to take effect:
+1. Set an environment variable named *ConnectionString*, and set it to the access key to your App Configuration store.
+ ### [Windows command prompt](#tab/windowscommandprompt)
+
+ To build and run the app locally using the Windows command prompt, run the following command and restart the command prompt to allow the change to take effect:
+
```console
- setx ConnectionString "connection-string-of-your-app-configuration-store"
+ setx ConnectionString "connection-string-of-your-app-configuration-store"
```-
+
+ ### [PowerShell](#tab/powershell)
+
If you use Windows PowerShell, run the following command:-
+
```powershell
- $Env:ConnectionString = "connection-string-of-your-app-configuration-store"
+ $Env:ConnectionString = "connection-string-of-your-app-configuration-store"
```-
- If you use macOS or Linux, run the following command:
-
+
+ ### [macOS](#tab/unix)
+
+ If you use macOS, run the following command:
+
+ ```console
+ export ConnectionString='connection-string-of-your-app-configuration-store'
+ ```
+
+ ### [Linux](#tab/linux)
+
+ If you use Linux, run the following command:
+
```console
- export ConnectionString='connection-string-of-your-app-configuration-store'
+ export ConnectionString='connection-string-of-your-app-configuration-store'
```
+
+
-2. To build the app by using the .NET Core CLI, run the following command in the command shell:
+1. To build the app by using the .NET Core CLI, run the following command in the command shell:
```console dotnet build ```
-3. After the build successfully completes, run the following command to run the web app locally:
+1. After the build completes successfully, run the following command to run the web app locally:
```console dotnet run ```
-4. Open a browser window and go to `http://localhost:5000`, which is the default URL for the web app hosted locally.
+1. Open a browser window and go to `http://localhost:5000`, which is the default URL for the web app hosted locally.
- ![Quickstart app launch local](./media/quickstarts/aspnet-core-app-launch-local.png)
+ :::image type="content" source="./media/quickstarts/aspnet-core-app-launch-local.png" alt-text="Screenshot that shows Quickstart app launch local page.":::
## Next steps
azure-arc Managed Instance Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-disaster-recovery.md
The following image shows a properly configured distributed availability group:
2. Provision the managed instance in the secondary site and configure it as a disaster recovery instance. At this point, the system databases are not part of the contained availability group. ```azurecli
- az sql mi-arc create --name <secondaryinstance> --tier bc --replicas 3 ΓÇôlicense-type DisasterRecovery --k8s-namespace <namespace> --use-k8s
+ az sql mi-arc create --name <secondaryinstance> --tier bc --replicas 3 --license-type DisasterRecovery --k8s-namespace <namespace> --use-k8s
``` 3. Copy the mirroring certificates from each site to a location that's accessible to both the geo-primary and geo-secondary instances.
azure-arc Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/cluster-connect.md
Title: "Use the cluster connect to securely connect to Azure Arc-enabled Kubernetes clusters" Previously updated : 07/22/2022 Last updated : 08/30/2022 description: "Use cluster connect to securely connect to Azure Arc-enabled Kubernetes clusters"
A conceptual overview of this feature is available in [Cluster connect - Azure A
|`*.servicebus.windows.net` | 443 |
|`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com` | 443 |
+ > [!NOTE]
+ > To translate the `*.servicebus.windows.net` wildcard into specific endpoints, use the command `\GET https://guestnotificationservice.azure.com/urls/allowlist?api-version=2020-01-01&location=<location>`. Within this command, the region must be specified for the `<location>` placeholder.
+ - Replace the placeholders and run the below command to set the environment variables used in this document: ```azurecli
A conceptual overview of this feature is available in [Cluster connect - Azure A
|`*.servicebus.windows.net` | 443 |
|`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com` | 443 |
+ > [!NOTE]
+ > To translate the `*.servicebus.windows.net` wildcard into specific endpoints, use the command `\GET https://guestnotificationservice.azure.com/urls/allowlist?api-version=2020-01-01&location=<location>`. Within this command, the region must be specified for the `<location>` placeholder.
+ - Replace the placeholders and run the below command to set the environment variables used in this document: ```azurepowershell
azure-arc Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/private-link.md
Title: Private connectivity for Arc enabled Kubernetes clusters using private link (preview) Previously updated : 04/08/2021 Last updated : 08/28/2021 description: With Azure Arc, you can use a Private Link Scope model to allow multiple Kubernetes clusters to use a single private endpoint.
The rest of this document assumes you have already set up your ExpressRoute circ
## Network configuration
-Azure Arc-enabled Kubernetes integrates with several Azure services to bring cloud management and governance to your hybrid Kubernetes clusters. Most of these services already offer private endpoints, but you need to configure your firewall and routing rules to allow access to Azure Active Directory and Azure Resource Manager over the internet until these services offer private endpoints. You also need to allow access to Microsoft Container Registry (and Azure Front Door.First Party as a precursor for Microsoft Container Registry) to pull images & Helm charts to enable services like Azure Monitor, as well as for initial setup of Azure Arc agents on the Kubernetes clusters.
+Azure Arc-enabled Kubernetes integrates with several Azure services to bring cloud management and governance to your hybrid Kubernetes clusters. Most of these services already offer private endpoints, but you need to configure your firewall and routing rules to allow access to Azure Active Directory and Azure Resource Manager over the internet until these services offer private endpoints. You also need to allow access to Microsoft Container Registry (and AzureFrontDoor.FirstParty as a precursor for Microsoft Container Registry) to pull images & Helm charts to enable services like Azure Monitor, as well as for initial setup of Azure Arc agents on the Kubernetes clusters.
There are two ways you can achieve this:
-* If your network is configured to route all internet-bound traffic through the Azure VPN or ExpressRoute circuit, you can configure the network security group (NSG) associated with your subnet in Azure to allow outbound TCP 443 (HTTPS) access to Azure AD, Azure Resource Manager, Azure Frontdoor and Microsoft Container Registry using [service tags] (/azure/virtual-network/service-tags-overview). The NSG rules should look like the following:
+* If your network is configured to route all internet-bound traffic through the Azure VPN or ExpressRoute circuit, you can configure the network security group (NSG) associated with your subnet in Azure to allow outbound TCP 443 (HTTPS) access to Azure AD, Azure Resource Manager, Azure Front Door, and Microsoft Container Registry using [service tags](/azure/virtual-network/service-tags-overview). The NSG rules should look like the following (a sample Azure CLI command follows this list):
| Setting | Azure AD rule | Azure Resource Manager rule | AzureFrontDoorFirstParty rule | Microsoft Container Registry rule |
|-|-|-|-|-|
| Source | Virtual Network | Virtual Network | Virtual Network | Virtual Network |
| Source Port ranges | * | * | * | * |
| Destination | Service Tag | Service Tag | Service Tag | Service Tag |
- | Destination service tag | AzureActiveDirectory | AzureResourceManager | FrontDoor.FirstParty | MicrosoftContainerRegistry
+ | Destination service tag | AzureActiveDirectory | AzureResourceManager | AzureFrontDoor.FirstParty | MicrosoftContainerRegistry
| Destination port ranges | 443 | 443 | 443 | 443 |
| Protocol | TCP | TCP | TCP | TCP |
| Action | Allow | Allow | Allow (Both inbound and outbound) | Allow |
| Priority | 150 (must be lower than any rules that block internet access) | 151 (must be lower than any rules that block internet access) | 152 (must be lower than any rules that block internet access) | 153 (must be lower than any rules that block internet access) |
| Name | AllowAADOutboundAccess | AllowAzOutboundAccess | AllowAzureFrontDoorFirstPartyAccess | AllowMCROutboundAccess |
-* Configure the firewall on your local network to allow outbound TCP 443 (HTTPS) access to Azure AD, Azure Resource Manager, and Microsoft Container Registry, and inbound & outbound access to Azure FrontDoor.FirstParty using the downloadable service tag files. The JSON file contains all the public IP address ranges used by Azure AD, Azure Resource Manager, Azure FrontDoor.FirstParty, and Microsoft Container Registry and is updated monthly to reflect any changes. Azure Active Directory's service tag is AzureActiveDirectory, Azure Resource Manager's service tag is AzureResourceManager, Microsoft Container Registry's service tag is MicrosoftContainerRegistry, and Azure Front Door's service tag is FrontDoor.FirstParty. Consult with your network administrator and network firewall vendor to learn how to configure your firewall rules.
+* Configure the firewall on your local network to allow outbound TCP 443 (HTTPS) access to Azure AD, Azure Resource Manager, and Microsoft Container Registry, and inbound & outbound access to AzureFrontDoor.FirstParty using the downloadable service tag files. The JSON file contains all the public IP address ranges used by Azure AD, Azure Resource Manager, AzureFrontDoor.FirstParty, and Microsoft Container Registry and is updated monthly to reflect any changes. Azure Active Directory's service tag is AzureActiveDirectory, Azure Resource Manager's service tag is AzureResourceManager, Microsoft Container Registry's service tag is MicrosoftContainerRegistry, and Azure Front Door's service tag is AzureFrontDoor.FirstParty. Consult with your network administrator and network firewall vendor to learn how to configure your firewall rules.
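For example, the Azure AD rule from the table above might be created with the Azure CLI as follows (a sketch; the resource group and NSG names are hypothetical):

```azurecli
# Outbound rule allowing TCP 443 from the virtual network to the AzureActiveDirectory service tag
az network nsg rule create -g contoso-resource-group --nsg-name contoso-nsg \
    -n AllowAADOutboundAccess --priority 150 --direction Outbound --access Allow \
    --protocol Tcp --source-address-prefixes VirtualNetwork --source-port-ranges "*" \
    --destination-address-prefixes AzureActiveDirectory --destination-port-ranges 443
```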
## Create an Azure Arc Private Link Scope
The Private Endpoint on your virtual network allows it to reach Azure Arc-enable
1. On the **Configuration** page, perform the following: 1. Choose the virtual network and subnet from which you want to connect to Azure Arc-enabled Kubernetes clusters. 1. For **Integrate with private DNS zone**, select **Yes**. A new Private DNS Zone will be created. The actual DNS zones may be different from what is shown in the screenshot below.
-
+ :::image type="content" source="media/private-link/create-private-endpoint-2.png" alt-text="Screenshot of the Configuration step to create a private endpoint in the Azure portal."::: > [!NOTE]
The Private Endpoint on your virtual network allows it to reach Azure Arc-enable
1. Select **Review + create**. 1. Let validation pass. 1. Select **Create**.
-
+ :::image type="content" source="media/private-link/create-private-endpoint-2.png" alt-text="Screenshot of the Configuration step to create a private endpoint in the Azure portal."::: > [!NOTE]
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
Title: "Quickstart: Connect an existing Kubernetes cluster to Azure Arc" description: In this quickstart, you learn how to connect an Azure Arc-enabled Kubernetes cluster. Previously updated : 08/25/2022 Last updated : 08/30/2022 ms.devlang: azurecli
For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
| `https://mcr.microsoft.com`, `https://*.data.mcr.microsoft.com` | Required to pull container images for Azure Arc agents. |
| `https://gbl.his.arc.azure.com` (for Azure Cloud), `https://gbl.his.arc.azure.us` (for Azure US Government) | Required to get the regional endpoint for pulling system-assigned Managed Identity certificates. |
| `https://*.his.arc.azure.com` (for Azure Cloud), `https://usgv.his.arc.azure.us` (for Azure US Government) | Required to pull system-assigned Managed Identity certificates. |
-|`*.servicebus.windows.net`, `guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com`, `sts.windows.net`, `https://k8sconnectcsp.azureedge.net` | For [Cluster Connect](cluster-connect.md) and for [Custom Location](custom-locations.md) based scenarios. |
|`https://k8connecthelm.azureedge.net` | `az connectedk8s connect` uses Helm 3 to deploy Azure Arc agents on the Kubernetes cluster. This endpoint is needed for Helm client download to facilitate deployment of the agent helm chart. |
+|`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com`, `sts.windows.net`, `https://k8sconnectcsp.azureedge.net` | For [Cluster Connect](cluster-connect.md) and for [Custom Location](custom-locations.md) based scenarios. |
+|`*.servicebus.windows.net` | For [Cluster Connect](cluster-connect.md) and for [Custom Location](custom-locations.md) based scenarios. |
> [!NOTE]
-> To translate the `*.servicebus.windows.net` wildcard into specific endpoints, use the command `\GET /urls/allowlist?api-version=2020-01-01&location=<location>`. Within this command, the region must be specified for the `<location>` placeholder.
+> To translate the `*.servicebus.windows.net` wildcard into specific endpoints, use the command `\GET https://guestnotificationservice.azure.com/urls/allowlist?api-version=2020-01-01&location=<location>`. Within this command, the region must be specified for the `<location>` placeholder.
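For example, you might issue that request for the East US region with `az rest` (a sketch: the region value is an example, and `--skip-authorization-header` is passed on the assumption that this endpoint doesn't take an Azure Resource Manager token):

```azurecli
# List the specific endpoints behind *.servicebus.windows.net for an example region
az rest --method get --skip-authorization-header \
    --url "https://guestnotificationservice.azure.com/urls/allowlist?api-version=2020-01-01&location=eastus"
```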
## Create a resource group
azure-arc Onboard Windows Admin Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-windows-admin-center.md
You can enable Azure Arc-enabled servers for one or more Windows machines in you
* Azure Arc-enabled servers - Review the [prerequisites](prerequisites.md) and verify that your subscription, your Azure account, and resources meet the requirements.
-* Windows Admin Center - Review the requirements to [prepare your environment](/windows-server/manage/windows-admin-center/deploy/prepare-environment) to deploy and [configure Azure integration ](/windows-server/manage/windows-admin-center/azure/azure-integration).
+* Windows Admin Center - Review the requirements to [prepare your environment](/windows-server/manage/windows-admin-center/deploy/prepare-environment) to deploy and [configure Azure integration](/windows-server/manage/windows-admin-center/azure/azure-integration).
* An Azure subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
azure-fluid-relay Container Deletion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/container-deletion.md
-+ description: Learn how to delete individual containers using az-cli Title: Delete Fluid containers-+ Last updated 09/28/2021
azure-functions Functions Infrastructure As Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-infrastructure-as-code.md
Title: Automate function app resource deployment to Azure
-description: Learn how to build an Azure Resource Manager template that deploys your function app.
+description: Learn how to build a Bicep file or an Azure Resource Manager template that deploys your function app.
ms.assetid: d20743e3-aab6-442c-a836-9bcea09bfd32 Previously updated : 08/18/2022 Last updated : 08/30/2022 # Automate resource deployment for your function app in Azure Functions
-You can use an Azure Resource Manager template to deploy a function app. This article outlines the required resources and parameters for doing so. You might need to deploy other resources, depending on the [triggers and bindings](functions-triggers-bindings.md) in your function app.
+You can use a Bicep file or an Azure Resource Manager template to deploy a function app. This article outlines the required resources and parameters for doing so. You might need to deploy other resources, depending on the [triggers and bindings](functions-triggers-bindings.md) in your function app. For more information about creating Bicep files, see [Understand the structure and syntax of Bicep files](../azure-resource-manager/bicep/file.md). For more information about creating templates, see [Authoring Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md).
-For more information about creating templates, see [Authoring Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md).
-
-For sample templates, see:
+For sample Bicep files and ARM templates, see:
- [ARM templates for function app deployment](https://github.com/Azure-Samples/function-app-arm-templates) - [Function app on Consumption plan]
For sample templates, see:
An Azure Functions deployment typically consists of these resources:
+# [Bicep](#tab/bicep)
+
+| Resource | Requirement | Syntax and properties reference |
+||-|--|
+| A function app | Required | [Microsoft.Web/sites](/azure/templates/microsoft.web/sites?pivots=deployment-language-bicep) |
+| A [storage account](../storage/index.yml) | Required | [Microsoft.Storage/storageAccounts](/azure/templates/microsoft.storage/storageaccounts?pivots=deployment-language-bicep) |
+| An [Application Insights](../azure-monitor/app/app-insights-overview.md) component | Optional | [Microsoft.Insights/components](/azure/templates/microsoft.insights/components?pivots=deployment-language-bicep) |
+| A [hosting plan](./functions-scale.md) | Optional<sup>1</sup> | [Microsoft.Web/serverfarms](/azure/templates/microsoft.web/serverfarms?pivots=deployment-language-bicep)
+
+# [JSON](#tab/json)
+
| Resource | Requirement | Syntax and properties reference |
||-|--|
-| A function app | Required | [Microsoft.Web/sites](/azure/templates/microsoft.web/sites) |
-| An [Azure Storage](../storage/index.yml) account | Required | [Microsoft.Storage/storageAccounts](/azure/templates/microsoft.storage/storageaccounts) |
-| An [Application Insights](../azure-monitor/app/app-insights-overview.md) component | Optional | [Microsoft.Insights/components](/azure/templates/microsoft.insights/components) |
-| A [hosting plan](./functions-scale.md) | Optional<sup>1</sup> | [Microsoft.Web/serverfarms](/azure/templates/microsoft.web/serverfarms) |
+| A function app | Required | [Microsoft.Web/sites](/azure/templates/microsoft.web/sites?pivots=deployment-language-arm-template) |
+| A [storage account](../storage/index.yml) | Required | [Microsoft.Storage/storageAccounts](/azure/templates/microsoft.storage/storageaccounts?pivots=deployment-language-arm-template) |
+| An [Application Insights](../azure-monitor/app/app-insights-overview.md) component | Optional | [Microsoft.Insights/components](/azure/templates/microsoft.insights/components?pivots=deployment-language-arm-template) |
+| A [hosting plan](./functions-scale.md) | Optional<sup>1</sup> | [Microsoft.Web/serverfarms](/azure/templates/microsoft.web/serverfarms?pivots=deployment-language-arm-template) |
++ <sup>1</sup>A hosting plan is only required when you choose to run your function app on a [Premium plan](./functions-premium-plan.md) or on an [App Service plan](../app-service/overview-hosting-plans.md).
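Whichever format you choose, the same Azure CLI command deploys it to a resource group. This is a minimal sketch; the file and resource group names are examples:

```azurecli
# Works for both .bicep files and ARM template JSON files
az deployment group create --resource-group contoso-resource-group --template-file main.bicep
```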
An Azure Functions deployment typically consists of these resources:
<a name="storage"></a> ### Storage account
-An Azure storage account is required for a function app. You need a general purpose account that supports blobs, tables, queues, and files. For more information, see [Azure Functions storage account requirements](storage-considerations.md#storage-account-requirements).
+A storage account is required for a function app. You need a general purpose account that supports blobs, tables, queues, and files. For more information, see [Azure Functions storage account requirements](storage-considerations.md#storage-account-requirements).
-```json
-{
- "type": "Microsoft.Storage/storageAccounts",
- "name": "[variables('storageAccountName')]",
- "apiVersion": "2019-06-01",
- "location": "[resourceGroup().location]",
- "kind": "StorageV2",
- "sku": {
- "name": "[parameters('storageAccountType')]"
+# [Bicep](#tab/bicep)
+
+```bicep
+resource storageAccountName 'Microsoft.Storage/storageAccounts@2021-09-01' = {
+ name: storageAccountName
+ location: location
+ kind: 'StorageV2'
+ sku: {
+ name: storageAccountType
} } ```
+# [JSON](#tab/json)
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-09-01",
+ "name": "[parameters('storageAccountName')]",
+ "location": "[parameters('location')]",
+ "kind": "StorageV2",
+ "sku": {
+ "name": "[parameters('storageAccountType')]"
+ }
+ }
+]
+```
+++ You must also specify the `AzureWebJobsStorage` property as an app setting in the site configuration. If the function app doesn't use Application Insights for monitoring, it should also specify `AzureWebJobsDashboard` as an app setting.
-The Azure Functions runtime uses the `AzureWebJobsStorage` connection string to create internal queues. When Application Insights is not enabled, the runtime uses the `AzureWebJobsDashboard` connection string to log to Azure Table storage and power the **Monitor** tab in the portal.
+The Azure Functions runtime uses the `AzureWebJobsStorage` connection string to create internal queues. When Application Insights isn't enabled, the runtime uses the `AzureWebJobsDashboard` connection string to log to Azure Table storage and power the **Monitor** tab in the portal.
These properties are specified in the `appSettings` collection in the `siteConfig` object:
+# [Bicep](#tab/bicep)
+
+```bicep
+resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
+ ...
+ properties: {
+ ...
+ siteConfig: {
+ ...
+ appSettings: [
+ {
+ name: 'AzureWebJobsDashboard'
+ value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ }
+ {
+ name: 'AzureWebJobsStorage'
+ value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ }
+ ...
+ ]
+ }
+ }
+}
+```
+
+# [JSON](#tab/json)
+ ```json
-"appSettings": [
- {
- "name": "AzureWebJobsStorage",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-01').keys[0].value)]"
- },
+"resources": [
{
- "name": "AzureWebJobsDashboard",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-01').keys[0].value)]"
+ "type": "Microsoft.Web/sites",
+ ...
+ "properties": {
+ ...
+ "siteConfig": {
+ ...
+ "appSettings": [
+ {
+ "name": "AzureWebJobsDashboard",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}', parameters('storageAccountName'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
+ },
+ {
+ "name": "AzureWebJobsStorage",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}', parameters('storageAccountName'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
+ },
+ ...
+ ]
+ }
+ }
} ] ``` ++ ### Application Insights
-Application Insights is recommended for monitoring your function apps. The Application Insights resource is defined with the type **Microsoft.Insights/components** and the kind **web**:
+Application Insights is recommended for monitoring your function apps. The Application Insights resource is defined with the type `Microsoft.Insights/components` and the kind **web**:
+
+# [Bicep](#tab/bicep)
+
+```bicep
+resource applicationInsights 'Microsoft.Insights/components@2020-02-02' = {
+ name: applicationInsightsName
+ location: appInsightsLocation
+ kind: 'web'
+ properties: {
+ Application_Type: 'web'
+ Request_Source: 'IbizaWebAppExtensionCreate'
+ }
+}
+```
+
+# [JSON](#tab/json)
```json
-{
- "apiVersion": "2015-05-01",
- "name": "[variables('appInsightsName')]",
- "type": "Microsoft.Insights/components",
- "kind": "web",
- "location": "[resourceGroup().location]",
- "tags": {
- "[concat('hidden-link:', resourceGroup().id, '/providers/Microsoft.Web/sites/', variables('functionAppName'))]": "Resource"
- },
- "properties": {
- "Application_Type": "web",
- "ApplicationId": "[variables('appInsightsName')]"
+"resources": [
+ {
+ "type": "Microsoft.Insights/components",
+ "apiVersion": "2020-02-02",
+ "name": "[parameters('applicationInsightsName')]",
+ "location": "[parameters('appInsightsLocation')]",
+ "kind": "web",
+ "properties": {
+ "Application_Type": "web",
+ "Request_Source": "IbizaWebAppExtensionCreate"
+ }
}
-},
+]
``` ++ In addition, the instrumentation key needs to be provided to the function app using the `APPINSIGHTS_INSTRUMENTATIONKEY` application setting. This property is specified in the `appSettings` collection in the `siteConfig` object:
+# [Bicep](#tab/bicep)
+
+```bicep
+resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
+ ...
+ properties: {
+ ...
+ siteConfig: {
+ ...
+ appSettings: [
+ {
+ name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
+ value: appInsights.properties.InstrumentationKey
+ }
+ ...
+ ]
+ }
+ }
+}
+```
+
+# [JSON](#tab/json)
+ ```json
-"appSettings": [
+"resources": [
{
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference(resourceId('microsoft.insights/components/', variables('appInsightsName')), '2015-05-01').InstrumentationKey]"
+ "type": "Microsoft.Web/sites",
+ ...
+ "properties": {
+ ...
+ "siteConfig": {
+ ...
+ "appSettings": [
+ {
+ "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
+ "value": "[reference(resourceId('Microsoft.Insights/components', parameters('applicationInsightsName')), '2020-02-02').InstrumentationKey]"
+ },
+ ...
+ ]
+ }
+ }
} ] ``` ++ ### Hosting plan
-The definition of the hosting plan varies, and can be one of the following:
+The definition of the hosting plan varies, and can be one of the following plans:
- [Consumption plan](#consumption) (default) - [Premium plan](#premium)
The definition of the hosting plan varies, and can be one of the following:
The function app resource is defined by using a resource of type **Microsoft.Web/sites** and kind **functionapp**:
-```json
-{
- "apiVersion": "2015-08-01",
- "type": "Microsoft.Web/sites",
- "name": "[variables('functionAppName')]",
- "location": "[resourceGroup().location]",
- "kind": "functionapp",
- "dependsOn": [
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
- "[resourceId('Microsoft.Insights/components', variables('appInsightsName'))]"
+# [Bicep](#tab/bicep)
+
+```bicep
+resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
+ name: functionAppName
+ location: location
+ kind: 'functionapp'
+ identity:{
+ type:'SystemAssigned'
+ }
+ properties: {
+ serverFarmId: hostingPlan.id
+ clientAffinityEnabled: false
+ siteConfig: {
+ alwaysOn: true
+ }
+ httpsOnly: true
+ }
+ dependsOn: [
+ storageAccount
] } ```
+# [JSON](#tab/json)
+
+```json
+"resources:": [
+ {
+ "type": "Microsoft.Web/sites",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('functionAppName')]",
+ "location": "[parameters('location')]",
+ "kind": "functionapp",
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "properties": {
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "clientAffinityEnabled": false,
+ "siteConfig": {
+ "alwaysOn": true
+ },
+ "httpsOnly": true
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]",
+ "[resourceId('Microsoft.Insights/components', parameters('applicationInsightsName'))]"
+ ]
+ }
+]
+```
+++ > [!IMPORTANT]
-> If you are explicitly defining a hosting plan, an additional item would be needed in the dependsOn array: `"[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]"`
+> If you're explicitly defining a hosting plan, an additional item would be needed in the dependsOn array: `"[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]"`
A function app must include these application settings:
A function app must include these application settings:
These properties are specified in the `appSettings` collection in the `siteConfig` property:
+# [Bicep](#tab/bicep)
+
+```bicep
+resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
+ ...
+ properties: {
+ ...
+ siteConfig: {
+ ...
+ appSettings: [
+ {
+ name: 'AzureWebJobsStorage'
+ value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${listKeys(storageAccountName, '2021-09-01').keys[0].value}'
+ }
+ {
+ name: 'FUNCTIONS_WORKER_RUNTIME'
+ value: 'node'
+ }
+ {
+ name: 'WEBSITE_NODE_DEFAULT_VERSION'
+ value: '~14'
+ }
+ {
+ name: 'FUNCTIONS_EXTENSION_VERSION'
+ value: '~4'
+ }
+ ...
+ ]
+ }
+ }
+}
+
+```
+
+# [JSON](#tab/json)
+ ```json
-"properties": {
- "siteConfig": {
- "appSettings": [
- {
- "name": "AzureWebJobsStorage",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-01').keys[0].value)]"
- },
- {
- "name": "FUNCTIONS_WORKER_RUNTIME",
- "value": "node"
- },
- {
- "name": "WEBSITE_NODE_DEFAULT_VERSION",
- "value": "~14"
- },
- {
- "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~4"
+"resources": [
+ {
+ "type": "Microsoft.Web/sites",
+ ...
+ "properties": {
+ ...
+ "siteConfig": {
+ ...
+ "appSettings": [
+ {
+ "name": "AzureWebJobsStorage",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}', parameters('storageAccountName'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
+ },
+ {
+ "name": "FUNCTIONS_WORKER_RUNTIME",
+ "value": "node"
+ },
+ {
+ "name": "WEBSITE_NODE_DEFAULT_VERSION",
+ "value": "~14"
+ },
+ {
+ "name": "FUNCTIONS_EXTENSION_VERSION",
+ "value": "~4"
+ },
+ ...
+ ]
}
- ]
+ }
}
-}
+]
``` ++ <a name="consumption"></a> ## Deploy on Consumption plan
-The Consumption plan automatically allocates compute power when your code is running, scales out as necessary to handle load, and then scales in when code is not running. You don't have to pay for idle VMs, and you don't have to reserve capacity in advance. To learn more, see [Azure Functions scale and hosting](consumption-plan.md).
+The Consumption plan automatically allocates compute power when your code is running, scales out as necessary to handle load, and then scales in when code isn't running. You don't have to pay for idle VMs, and you don't have to reserve capacity in advance. To learn more, see [Azure Functions scale and hosting](consumption-plan.md).
-For a sample Azure Resource Manager template, see [Function app on Consumption plan].
+For a sample Bicep file/Azure Resource Manager template, see [Function app on Consumption plan].
### Create a Consumption plan
A Consumption plan doesn't need to be defined. When not defined, a plan is autom
The Consumption plan is a special type of `serverfarm` resource. You can specify it by using the `Dynamic` value for the `computeMode` and `sku` properties, as follows:
-# [Windows](#tab/windows)
+#### Windows
-```json
-{
- "type": "Microsoft.Web/serverfarms",
- "apiVersion": "2021-02-01",
- "name": "[variables('hostingPlanName')]",
- "location": "[parameters('location')]",
- "sku": {
- "name": "Y1",
- "tier": "Dynamic",
- "size": "Y1",
- "family": "Y",
- "capacity":0
- },
- "properties": {
- "name":"[variables('hostingPlanName')]",
- "computeMode": "Dynamic"
+# [Bicep](#tab/bicep)
+
+```bicep
+resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
+ name: hostingPlanName
+ location: location
+ sku: {
+ name: 'Y1'
+ tier: 'Dynamic'
+ size: 'Y1'
+ family: 'Y'
+ capacity: 0
+ }
+ properties: {
+ computeMode: 'Dynamic'
} } ```
-# [Linux](#tab/linux)
+# [JSON](#tab/json)
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('hostingPlanName')]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Y1",
+ "tier": "Dynamic",
+ "size": "Y1",
+ "family": "Y",
+ "capacity": 0
+ },
+ "properties": {
+ "computeMode": "Dynamic"
+ }
+ }
+]
+```
+++
+#### Linux
To run your app on Linux, you must also set the property `"reserved": true` for the `serverfarms` resource:
-```json
-{
- "type": "Microsoft.Web/serverfarms",
- "apiVersion": "2021-02-01",
- "name": "[variables('hostingPlanName')]",
- "location": "[parameters('location')]",
- "sku": {
- "name": "Y1",
- "tier": "Dynamic",
- "size": "Y1",
- "family": "Y",
- "capacity":0
- },
- "properties": {
- "name":"[variables('hostingPlanName')]",
- "computeMode": "Dynamic",
- "reserved": true
+# [Bicep](#tab/bicep)
+
+```bicep
+resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
+ name: hostingPlanName
+ location: location
+ sku: {
+ name: 'Y1'
+ tier: 'Dynamic'
+ size: 'Y1'
+ family: 'Y'
+ capacity: 0
+ }
+ properties: {
+ computeMode: 'Dynamic'
+ reserved: true
} } ```
+# [JSON](#tab/json)
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('hostingPlanName')]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Y1",
+ "tier": "Dynamic",
+ "size": "Y1",
+ "family": "Y",
+ "capacity":0
+ },
+ "properties": {
+ "computeMode": "Dynamic",
+ "reserved": true
+ }
+ }
+]
+```
+ ### Create a function app
When you explicitly define your Consumption plan, you must set the `serverFarmId
The settings required by a function app running in Consumption plan differ between Windows and Linux.
-# [Windows](#tab/windows)
+#### Windows
On Windows, a Consumption plan requires two other settings in the site configuration: [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](functions-app-settings.md#website_contentazurefileconnectionstring) and [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare). These settings configure the storage account where the function app code and configuration are stored.
-For a sample Azure Resource Manager template, see [Azure Function App Hosted on Windows Consumption Plan](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-windows-consumption).
+For a sample Bicep file/Azure Resource Manager template, see [Azure Function App Hosted on Windows Consumption Plan](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-windows-consumption).
-```json
-{
- "type": "Microsoft.Web/sites",
- "apiVersion": "2021-02-01",
- "name": "[parameters('functionAppName')]",
- "location": "[parameters('location')]",
- "kind": "functionapp",
- "dependsOn": [
- "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
- "[resourceId('Microsoft.Insights/components', variables('applicationInsightsName'))]"
- ],
- "properties": {
- "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "siteConfig": {
- "appSettings": [
- {
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference(resourceId('microsoft.insights/components', variables('applicationInsightsName')), '2015-05-01').InstrumentationKey]"
- },
- {
- "name": "AzureWebJobsStorage",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';EndpointSuffix=', environment().suffixes.storage, ';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2019-06-01').keys[0].value)]"
- },
- {
- "name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';EndpointSuffix=', environment().suffixes.storage, ';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2019-06-01').keys[0].value)]"
- },
- {
- "name": "WEBSITE_CONTENTSHARE",
- "value": "[toLower(parameters('functionAppName'))]"
- },
- {
- "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~4"
- },
- {
- "name": "FUNCTIONS_WORKER_RUNTIME",
- "value": "node"
- },
- {
- "name": "WEBSITE_NODE_DEFAULT_VERSION",
- "value": "~14"
+# [Bicep](#tab/bicep)
+
+```bicep
+resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
+ name: functionAppName
+ location: location
+ kind: 'functionapp'
+ properties: {
+ serverFarmId: hostingPlan.id
+ siteConfig: {
+ appSettings: [
+ {
+ name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
+ value: applicationInsights.properties.InstrumentationKey
+ }
+ {
+ name: 'AzureWebJobsStorage'
+ value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};EndpointSuffix=${environment().suffixes.storage};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ }
+ {
+ name: 'WEBSITE_CONTENTAZUREFILECONNECTIONSTRING'
+ value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};EndpointSuffix=${environment().suffixes.storage};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ }
+ {
+ name: 'WEBSITE_CONTENTSHARE'
+ value: toLower(functionAppName)
+ }
+ {
+ name: 'FUNCTIONS_EXTENSION_VERSION'
+ value: '~4'
+ }
+ {
+ name: 'FUNCTIONS_WORKER_RUNTIME'
+ value: 'node'
+ }
+ {
+ name: 'WEBSITE_NODE_DEFAULT_VERSION'
+ value: '~14'
} ] }
For a sample Azure Resource Manager template, see [Azure Function App Hosted on
} ```
+# [JSON](#tab/json)
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Web/sites",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('functionAppName')]",
+ "location": "[parameters('location')]",
+ "kind": "functionapp",
+ "dependsOn": [
+ "[resourceId('Microsoft.Insights/components', parameters('applicationInsightsName'))]",
+ "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
+ ],
+ "properties": {
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "siteConfig": {
+ "appSettings": [
+ {
+ "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
+ "value": "[reference(resourceId('Microsoft.Insights/components', parameters('applicationInsightsName')), '2020-02-02').InstrumentationKey]"
+ },
+ {
+ "name": "AzureWebJobsStorage",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};EndpointSuffix={1};AccountKey={2}', parameters('storageAccountName'), environment().suffixes.storage, listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
+ },
+ {
+ "name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};EndpointSuffix={1};AccountKey={2}', parameters('storageAccountName'), environment().suffixes.storage, listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
+ },
+ {
+ "name": "WEBSITE_CONTENTSHARE",
+ "value": "[toLower(parameters('functionAppName'))]"
+ },
+ {
+ "name": "FUNCTIONS_EXTENSION_VERSION",
+ "value": "~4"
+ },
+ {
+ "name": "FUNCTIONS_WORKER_RUNTIME",
+ "value": "node"
+ },
+ {
+ "name": "WEBSITE_NODE_DEFAULT_VERSION",
+ "value": "~14"
+ }
+ ]
+ }
+ }
+ }
+]
+```
+++ > [!IMPORTANT]
-> Do not need to set the [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare) setting in a deployment slot. This setting is generated for you when the app is created in the deployment slot.
+> Don't set the [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare) setting in a new deployment slot. This setting is generated for you when the app is created in the deployment slot.
-# [Linux](#tab/linux)
+#### Linux
-The function app must have set `"kind": "functionapp,linux"`, and it must have set property `"reserved": true`. Linux apps should also include a `linuxFxVersion` property under siteConfig. If you are just deploying code, the value for this is determined by your desired runtime stack in the format of runtime|runtimeVersion. For example: `python|3.7`, `node|14` and `dotnet|3.1`.
+The function app must set `"kind": "functionapp,linux"` and the property `"reserved": true`. Linux apps should also include a `linuxFxVersion` property under `siteConfig`. If you're just deploying code, the value for this property is determined by your desired runtime stack in the format `runtime|runtimeVersion`. For example: `python|3.7`, `node|14`, and `dotnet|3.1`.
The [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](functions-app-settings.md#website_contentazurefileconnectionstring) and [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare) settings aren't supported on Linux Consumption plan.
-For a sample Azure Resource Manager template, see [Azure Function App Hosted on Linux Consumption Plan](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-linux-consumption).
-
-```json
-{
- "type": "Microsoft.Web/sites",
- "apiVersion": "2021-02-01",
- "name": "[parameters('functionAppName')]",
- "location": "[parameters('location')]",
- "kind": "functionapp,linux",
- "dependsOn": [
- "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
- "[resourceId('Microsoft.Insights/components', variables('applicationInsightsName'))]"
- ],
- "properties": {
- "reserved": true,
- "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "siteConfig": {
- "linuxFxVersion": "node|14",
- "appSettings": [
- {
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference(resourceId('Microsoft.Insights/components', parameters('functionAppName')), '2015-05-01').InstrumentationKey]"
- },
- {
- "name": "AzureWebJobsStorage",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';EndpointSuffix=', environment().suffixes.storage, ';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2019-06-01').keys[0].value)]"
- },
- {
- "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~4"
- },
- {
- "name": "FUNCTIONS_WORKER_RUNTIME",
- "value": "node"
+For a sample Bicep file/Azure Resource Manager template, see [Azure Function App Hosted on Linux Consumption Plan](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-linux-consumption).
+
+# [Bicep](#tab/bicep)
+
+```bicep
+resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
+ name: functionAppName
+ location: location
+ kind: 'functionapp,linux'
+ properties: {
+ reserved: true
+ serverFarmId: hostingPlan.id
+ siteConfig: {
+ linuxFxVersion: 'node|14'
+ appSettings: [
+ {
+ name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
+ value: applicationInsights.properties.InstrumentationKey
+ }
+ {
+ name: 'AzureWebJobsStorage'
+ value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};EndpointSuffix=${environment().suffixes.storage};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ }
+ {
+ name: 'FUNCTIONS_EXTENSION_VERSION'
+ value: '~4'
+ }
+ {
+ name: 'FUNCTIONS_WORKER_RUNTIME'
+ value: 'node'
} ] }
For a sample Azure Resource Manager template, see [Azure Function App Hosted on
} ```
+# [JSON](#tab/json)
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Web/sites",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('functionAppName')]",
+ "location": "[parameters('location')]",
+ "kind": "functionapp,linux",
+ "dependsOn": [
+ "[resourceId('Microsoft.Insights/components', parameters('applicationInsightsName'))]",
+ "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
+ ],
+ "properties": {
+ "reserved": true,
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "siteConfig": {
+ "linuxFxVersion": "node|14",
+ "appSettings": [
+ {
+ "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
+ "value": "[reference(resourceId('Microsoft.Insights/components', parameters('applicationInsightsName')), '2020-02-02).InstrumentationKey]"
+ },
+ {
+ "name": "AzureWebJobsStorage",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};EndpointSuffix={1};AccountKey={2}', parameters('storageAccountName'), environment().suffixes.storage, listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
+ },
+ {
+ "name": "FUNCTIONS_EXTENSION_VERSION",
+ "value": "~4"
+ },
+ {
+ "name": "FUNCTIONS_WORKER_RUNTIME",
+ "value": "node"
+ }
+ ]
+ }
+ }
+ }
+]
+```
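
The Bicep samples in this article reference symbolic names such as `hostingPlan`, `storageAccount`, and `applicationInsights` (or `applicationInsightsName` in some snippets) that are declared elsewhere in the file. A minimal sketch of the supporting declarations these snippets assume, using illustrative names and the same API versions as the samples, might look like this:

```bicep
param functionAppName string
param hostingPlanName string
param storageAccountName string
param applicationInsightsName string
param location string = resourceGroup().location

// Storage account referenced by the AzureWebJobsStorage settings.
resource storageAccount 'Microsoft.Storage/storageAccounts@2021-09-01' = {
  name: storageAccountName
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}

// Application Insights component referenced by APPINSIGHTS_INSTRUMENTATIONKEY.
resource applicationInsights 'Microsoft.Insights/components@2020-02-02' = {
  name: applicationInsightsName
  location: location
  kind: 'web'
  properties: {
    Application_Type: 'web'
  }
}
```

The plan and function app definitions that follow can then reference these resources through their symbolic names (for example, `hostingPlan.id`).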
+ <a name="premium"></a>
The Premium plan offers the same scaling as the Consumption plan but includes de
### Create a Premium plan
-A Premium plan is a special type of "serverfarm" resource. You can specify it by using either `EP1`, `EP2`, or `EP3` for the `Name` property value in the `sku` as following:
+A Premium plan is a special type of `serverfarm` resource. You can specify it by using either `EP1`, `EP2`, or `EP3` for the `Name` property value in the `sku` as shown in the following samples:
-# [Windows](#tab/windows)
+#### Windows
-```json
-{
- "type": "Microsoft.Web/serverfarms",
- "apiVersion": "2021-02-01",
- "name": "[variables('hostingPlanName')]",
- "location": "[parameters('location')]",
- "sku": {
- "tier": "ElasticPremium",
- "name": "EP1",
- "family": "EP"
- },
- "properties": {
- "name": "[parameters('hostingPlanName')]",
- "maximumElasticWorkerCount": 20
- },
- "kind": "elastic"
+# [Bicep](#tab/bicep)
+
+```bicep
+resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
+ name: hostingPlanName
+ location: location
+ sku: {
+ name: 'EP1'
+ tier: 'ElasticPremium'
+ family: 'EP'
+ }
+ kind: 'elastic'
+ properties: {
+ maximumElasticWorkerCount: 20
+ }
}
```
-# [Linux](#tab/linux)
-
-To run your app on Linux, you must also set property `"reserved": true` for the serverfarms resource:
+# [JSON](#tab/json)
```json
-{
- "type": "Microsoft.Web/serverfarms",
- "apiVersion": "2021-02-01",
- "name": "[variables('hostingPlanName')]",
- "location": "[parameters('location')]",
- "sku": {
- "tier": "ElasticPremium",
- "name": "EP1",
- "family": "EP"
- },
- "properties": {
+"resources": [
+ {
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2022-03-01",
"name": "[parameters('hostingPlanName')]",
- "maximumElasticWorkerCount": 20,
- "reserved": true
- },
- "kind": "elastic"
-}
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "EP1",
+ "tier": "ElasticPremium",
+ "family": "EP"
+ },
+ "kind": "elastic",
+ "properties": {
+ "maximumElasticWorkerCount": 20
+ }
+ }
+]
```
-### Create a function app
-
-For function app on a Premium plan, you will need to set the `serverFarmId` property on the app so that it points to the resource ID of the plan. You should ensure that the function app has a `dependsOn` setting for the plan as well.
+#### Linux
-A Premium plan requires another settings in the site configuration: [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](functions-app-settings.md#website_contentazurefileconnectionstring) and [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare). This property configures the storage account where the function app code and configuration are stored, which are used for dynamic scale.
+To run your app on Linux, you must also set property `"reserved": true` for the serverfarms resource:
-For a sample Azure Resource Manager template, see [Azure Function App Hosted on Premium Plan](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-premium-plan).
+# [Bicep](#tab/bicep)
-The settings required by a function app running in Premium plan differ between Windows and Linux.
+```bicep
+resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
+ name: hostingPlanName
+ location: location
+ sku: {
+ name: 'EP1'
+ tier: 'ElasticPremium'
+ family: 'EP'
+ }
+ kind: 'elastic'
+ properties: {
+ maximumElasticWorkerCount: 20
+ reserved: true
+ }
+}
+```
-# [Windows](#tab/windows)
+# [JSON](#tab/json)
```json
-{
- "type": "Microsoft.Web/sites",
- "apiVersion": "2021-02-01",
- "name": "[parameters('functionAppName')]",
- "location": "[parameters('location')]",
- "kind": "functionapp",
- "dependsOn": [
- "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
- "[resourceId('Microsoft.Insights/components', variables('applicationInsightsName'))]"
- ],
- "properties": {
- "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "siteConfig": {
- "appSettings": [
- {
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference(resourceId('microsoft.insights/components', variables('applicationInsightsName')), '2015-05-01').InstrumentationKey]"
- },
- {
- "name": "AzureWebJobsStorage",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';EndpointSuffix=', environment().suffixes.storage, ';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2019-06-01').keys[0].value)]"
- },
- {
- "name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';EndpointSuffix=', environment().suffixes.storage, ';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2019-06-01').keys[0].value)]"
- },
- {
- "name": "WEBSITE_CONTENTSHARE",
- "value": "[toLower(parameters('functionAppName'))]"
- },
- {
- "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~4"
- },
- {
- "name": "FUNCTIONS_WORKER_RUNTIME",
- "value": "node"
- },
- {
- "name": "WEBSITE_NODE_DEFAULT_VERSION",
- "value": "~14"
- }
- ]
+"resources": [
+ {
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('hostingPlanName')]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "EP1",
+ "tier": "ElasticPremium",
+ "family": "EP",
+ },
+ "kind": "elastic",
+ "properties": {
+ "maximumElasticWorkerCount": 20,
+ "reserved": true
    }
  }
-}
+]
```
-> [!IMPORTANT]
-> You don't need to set the [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare) setting because it's generated for you when the site is first created.
+
+### Create a function app
-# [Linux](#tab/linux)
+For a function app on a Premium plan, you'll need to set the `serverFarmId` property on the app so that it points to the resource ID of the plan. You should also ensure that the function app has a `dependsOn` setting for the plan.
-The function app must have set `"kind": "functionapp,linux"`, and it must have set property `"reserved": true`. Linux apps should also include a `linuxFxVersion` property under siteConfig. If you are just deploying code, the value for this is determined by your desired runtime stack in the format of runtime|runtimeVersion. For example: `python|3.7`, `node|14` and `dotnet|3.1`.
+A Premium plan requires two other settings in the site configuration: [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](functions-app-settings.md#website_contentazurefileconnectionstring) and [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare). These settings configure the storage account where the function app code and configuration are stored, which is used for dynamic scale.
+
+For a sample Bicep file/Azure Resource Manager template, see [Azure Function App Hosted on Premium Plan](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-premium-plan).
+
+The settings required by a function app running in Premium plan differ between Windows and Linux.
+
+#### Windows
+
+# [Bicep](#tab/bicep)
+
+```bicep
+resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
+ name: functionAppName
+ location: location
+ kind: 'functionapp'
+ properties: {
+    serverFarmId: hostingPlan.id
+ siteConfig: {
+ appSettings: [
+ {
+ name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
+ value: applicationInsightsName.properties.InstrumentationKey
+ }
+ {
+ name: 'AzureWebJobsStorage'
+ value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};EndpointSuffix=${environment().suffixes.storage};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ }
+ {
+ name: 'WEBSITE_CONTENTAZUREFILECONNECTIONSTRING'
+ value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};EndpointSuffix=${environment().suffixes.storage};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ }
+ {
+ name: 'WEBSITE_CONTENTSHARE'
+ value: toLower(functionAppName)
+ }
+ {
+ name: 'FUNCTIONS_EXTENSION_VERSION'
+ value: '~4'
+ }
+ {
+ name: 'FUNCTIONS_WORKER_RUNTIME'
+ value: 'node'
+ }
+ {
+ name: 'WEBSITE_NODE_DEFAULT_VERSION'
+ value: '~14'
+ }
+ ]
+ }
+ }
+}
+```
+
+# [JSON](#tab/json)
```json
-{
- "type": "Microsoft.Web/sites",
- "apiVersion": "2021-02-01",
- "name": "[parameters('functionAppName')]",
- "location": "[parameters('location')]",
- "kind": "functionapp,linux",
- "dependsOn": [
- "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
- "[resourceId('Microsoft.Insights/components', variables('applicationInsightsName'))]"
- ],
- "properties": {
- "reserved": true,
- "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "siteConfig": {
- "linuxFxVersion": "node|14",
- "appSettings": [
- {
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference(resourceId('microsoft.insights/components', variables('applicationInsightsName')), '2015-05-01').InstrumentationKey]"
- },
- {
- "name": "AzureWebJobsStorage",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';EndpointSuffix=', environment().suffixes.storage, ';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2019-06-01').keys[0].value)]"
- },
- {
- "name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';EndpointSuffix=', environment().suffixes.storage, ';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2019-06-01').keys[0].value)]"
- },
- {
- "name": "WEBSITE_CONTENTSHARE",
- "value": "[toLower(parameters('functionAppName'))]"
- },
- {
- "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~4"
- },
- {
- "name": "FUNCTIONS_WORKER_RUNTIME",
- "value": "node"
+"resources": [
+ {
+ "type": "Microsoft.Web/sites",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('functionAppName')]",
+ "location": "[parameters('location')]",
+ "kind": "functionapp",
+ "dependsOn": [
+ "[resourceId('Microsoft.Insights/components', parameters('applicationInsightsName'))]",
+ "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
+ ],
+ "properties": {
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "siteConfig": {
+ "appSettings": [
+ {
+ "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
+ "value": "[reference(resourceId('Microsoft.Insights/components', parameters('applicationInsightsName')), '2020-02-02').InstrumentationKey]"
+ },
+ {
+ "name": "AzureWebJobsStorage",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};EndpointSuffix={1};AccountKey={2}', parameters('storageAccountName'), environment().suffixes.storage, listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
+ },
+ {
+ "name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};EndpointSuffix={1};AccountKey={2}', parameters('storageAccountName'), environment().suffixes.storage, listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
+ },
+ {
+ "name": "WEBSITE_CONTENTSHARE",
+ "value": "[toLower(parameters('functionAppName'))]"
+ },
+ {
+ "name": "FUNCTIONS_EXTENSION_VERSION",
+ "value": "~4"
+ },
+ {
+ "name": "FUNCTIONS_WORKER_RUNTIME",
+ "value": "node"
+ },
+ {
+ "name": "WEBSITE_NODE_DEFAULT_VERSION",
+ "value": "~14"
+ }
+ ]
+ }
+ }
+ }
+]
+```
+++
+> [!IMPORTANT]
+> You don't need to set the [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare) setting because it's generated for you when the site is first created.
+
+#### Linux
+
+The function app must have `"kind"` set to `"functionapp,linux"`, and it must have the `"reserved"` property set to `true`. Linux apps should also include a `linuxFxVersion` property under `siteConfig`. If you're just deploying code, the value for this property is determined by your desired runtime stack in the format `runtime|runtimeVersion`. For example: `python|3.7`, `node|14`, and `dotnet|3.1`.
+
+# [Bicep](#tab/bicep)
+
+```bicep
+resource functionApp 'Microsoft.Web/sites@2021-02-01' = {
+ name: functionAppName
+ location: location
+ kind: 'functionapp,linux'
+ properties: {
+ reserved: true
+ serverFarmId: hostingPlan.id
+ siteConfig: {
+ linuxFxVersion: 'node|14'
+ appSettings: [
+ {
+ name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
+ value: applicationInsightsName.properties.InstrumentationKey
+ }
+ {
+ name: 'AzureWebJobsStorage'
+ value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};EndpointSuffix=${environment().suffixes.storage};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ }
+ {
+ name: 'WEBSITE_CONTENTAZUREFILECONNECTIONSTRING'
+ value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};EndpointSuffix=${environment().suffixes.storage};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ }
+ {
+ name: 'WEBSITE_CONTENTSHARE'
+ value: toLower(functionAppName)
+ }
+ {
+ name: 'FUNCTIONS_EXTENSION_VERSION'
+ value: '~4'
+ }
+ {
+ name: 'FUNCTIONS_WORKER_RUNTIME'
+ value: 'node'
        }
      ]
    }
  }
}
```
+# [JSON](#tab/json)
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Web/sites",
+ "apiVersion": "2021-02-01",
+ "name": "[parameters('functionAppName')]",
+ "location": "[parameters('location')]",
+ "kind": "functionapp,linux",
+ "dependsOn": [
+ "[resourceId('Microsoft.Insights/components', parameters('applicationInsightsName'))]",
+ "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
+ ],
+ "properties": {
+ "reserved": true,
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "siteConfig": {
+ "linuxFxVersion": "node|14",
+ "appSettings": [
+ {
+ "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
+ "value": "[reference(resourceId('Microsoft.Insights/components', parameters('applicationInsightsName')), '2020-02-02').InstrumentationKey]"
+ },
+ {
+ "name": "AzureWebJobsStorage",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};EndpointSuffix={1};AccountKey={2}', parameters('storageAccountName'), environment().suffixes.storage, listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
+ },
+ {
+ "name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};EndpointSuffix={1};AccountKey={2}', parameters('storageAccountName'), environment().suffixes.storage, listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
+ },
+ {
+ "name": "WEBSITE_CONTENTSHARE",
+ "value": "[toLower(parameters('functionAppName'))]"
+ },
+ {
+ "name": "FUNCTIONS_EXTENSION_VERSION",
+ "value": "~4"
+ },
+ {
+ "name": "FUNCTIONS_WORKER_RUNTIME",
+ "value": "node"
+ }
+ ]
+ }
+ }
+ }
+]
+```
+ <a name="app-service-plan"></a>
The function app must have set `"kind": "functionapp,linux"`, and it must have s
In the App Service plan, your function app runs on dedicated VMs on Basic, Standard, and Premium SKUs, similar to web apps. For details about how the App Service plan works, see the [Azure App Service plans in-depth overview](../app-service/overview-hosting-plans.md).
-For a sample Azure Resource Manager template, see [Function app on Azure App Service plan].
+For a sample Bicep file/Azure Resource Manager template, see [Function app on Azure App Service plan].
-### Create an App Service plan
+### Create a Dedicated plan
-An App Service plan is defined by a "serverfarm" resource. You can specify the SKU as follows:
+In Functions, the Dedicated plan is just a regular App Service plan, which is defined by a `serverfarm` resource. You can specify the SKU as follows:
-# [Windows](#tab/windows)
+#### Windows
-```json
-{
- "type": "Microsoft.Web/serverfarms",
- "apiVersion": "2021-02-01",
- "name": "[variables('hostingPlanName')]",
- "location": "[parameters('location')]",
- "sku": {
- "tier": "Standard",
- "name": "S1",
- "size": "S1",
- "family": "S",
- "capacity": 1
+# [Bicep](#tab/bicep)
+
+```bicep
+resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
+ name: hostingPlanName
+ location: location
+ sku: {
+ tier: 'Standard'
+ name: 'S1'
+ size: 'S1'
+ family: 'S'
+ capacity: 1
  }
}
```
-# [Linux](#tab/linux)
+# [JSON](#tab/json)
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('hostingPlanName')]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "tier": "Standard",
+ "name": "S1",
+ "size": "S1",
+ "family": "S",
+ "capacity": 1
+ }
+ }
+]
+```
+++
+#### Linux
To run your app on Linux, you must also set property `"reserved": true` for the serverfarms resource:
-```json
-{
- "type": "Microsoft.Web/serverfarms",
- "apiVersion": "2021-02-01",
- "name": "[variables('hostingPlanName')]",
- "location": "[parameters('location')]",
- "sku": {
- "tier": "Standard",
- "name": "S1",
- "size": "S1",
- "family": "S",
- "capacity": 1
- },
- "properties": {
- "reserved": true
+# [Bicep](#tab/bicep)
+
+```bicep
+resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
+ name: hostingPlanName
+ location: location
+ sku: {
+ tier: 'Standard'
+ name: 'S1'
+ size: 'S1'
+ family: 'S'
+ capacity: 1
+ }
+ properties: {
+ reserved: true
  }
}
```
+# [JSON](#tab/json)
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('hostingPlanName')]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "tier": "Standard",
+ "name": "S1",
+ "size": "S1",
+ "family": "S",
+ "capacity": 1
+ },
+ "properties": {
+ "reserved": true
+ }
+ }
+]
+```
+ ### Create a function app
On App Service plan, you should enable the `"alwaysOn": true` setting under site
The [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](functions-app-settings.md#website_contentazurefileconnectionstring) and [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare) settings aren't supported on Dedicated plan.
-For a sample Azure Resource Manager template, see [Azure Function App Hosted on Dedicated Plan](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-dedicated-plan).
+For a sample Bicep file/Azure Resource Manager template, see [Azure Function App Hosted on Dedicated Plan](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-dedicated-plan).
The settings required by a function app running in Dedicated plan differ between Windows and Linux.
-# [Windows](#tab/windows)
+#### Windows
-```json
-{
- "type": "Microsoft.Web/sites",
- "apiVersion": "2021-02-01",
- "name": "[parameters('functionAppName')]",
- "location": "[parameters('location')]",
- "kind": "functionapp",
- "dependsOn": [
- "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
- "[resourceId('Microsoft.Insights/components', variables('applicationInsightsName'))]"
- ],
- "properties": {
- "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "siteConfig": {
- "alwaysOn": true,
- "appSettings": [
- {
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference(resourceId('microsoft.insights/components', variables('applicationInsightsName')), '2015-05-01').InstrumentationKey]"
- },
- {
- "name": "AzureWebJobsStorage",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';EndpointSuffix=', environment().suffixes.storage, ';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2019-06-01').keys[0].value)]"
- },
- {
- "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~4"
- },
- {
- "name": "FUNCTIONS_WORKER_RUNTIME",
- "value": "node"
- },
- {
- "name": "WEBSITE_NODE_DEFAULT_VERSION",
- "value": "~14"
+# [Bicep](#tab/bicep)
+
+```bicep
+resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
+ name: functionAppName
+ location: location
+ kind: 'functionapp'
+ properties: {
+ serverFarmId: hostingPlan.id
+ siteConfig: {
+ alwaysOn: true
+ appSettings: [
+ {
+ name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
+ value: applicationInsightsName.properties.InstrumentationKey
+ }
+ {
+ name: 'AzureWebJobsStorage'
+ value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};EndpointSuffix=${environment().suffixes.storage};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ }
+ {
+ name: 'FUNCTIONS_EXTENSION_VERSION'
+ value: '~4'
+ }
+ {
+ name: 'FUNCTIONS_WORKER_RUNTIME'
+ value: 'node'
+ }
+ {
+ name: 'WEBSITE_NODE_DEFAULT_VERSION'
+ value: '~14'
        }
      ]
    }
  }
}
```
-# [Linux](#tab/linux)
-
-The function app must have set `"kind": "functionapp,linux"`, and it must have set property `"reserved": true`. Linux apps should also include a `linuxFxVersion` property under siteConfig. If you are just deploying code, the value for this is determined by your desired runtime stack in the format of runtime|runtimeVersion. Examples of `linuxFxVersion` property include: `python|3.7`, `node|14` and `dotnet|3.1`.
+# [JSON](#tab/json)
```json
-{
- "type": "Microsoft.Web/sites",
- "apiVersion": "2021-02-01",
- "name": "[parameters('functionAppName')]",
- "location": "[parameters('location')]",
- "kind": "functionapp,linux",
- "dependsOn": [
- "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
- "[resourceId('Microsoft.Insights/components', variables('applicationInsightsName'))]"
- ],
- "properties": {
- "reserved": true,
- "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "siteConfig": {
- "alwaysOn": true,
- "linuxFxVersion": "node|14",
- "appSettings": [
- {
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference(resourceId('microsoft.insights/components', variables('applicationInsightsName')), '2015-05-01').InstrumentationKey]"
- },
- {
- "name": "AzureWebJobsStorage",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';EndpointSuffix=', environment().suffixes.storage, ';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2019-06-01').keys[0].value)]"
- },
- {
- "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~4"
- },
- {
- "name": "FUNCTIONS_WORKER_RUNTIME",
- "value": "node"
+"resources": [
+ {
+ "type": "Microsoft.Web/sites",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('functionAppName')]",
+ "location": "[parameters('location')]",
+ "kind": "functionapp",
+ "dependsOn": [
+ "[resourceId('Microsoft.Insights/components', parameters('applicationInsightsName'))]",
+ "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
+ ],
+ "properties": {
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "siteConfig": {
+ "alwaysOn": true,
+ "appSettings": [
+ {
+ "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
+ "value": "[reference(resourceId('Microsoft.Insights/components', parameters('applicationInsightsName')), '2020-02-02').InstrumentationKey]"
+ },
+ {
+ "name": "AzureWebJobsStorage",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};EndpointSuffix={1};AccountKey={2}', parameters('storageAccountName'), environment().suffixes.storage, listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
+ },
+ {
+ "name": "FUNCTIONS_EXTENSION_VERSION",
+ "value": "~4"
+ },
+ {
+ "name": "FUNCTIONS_WORKER_RUNTIME",
+ "value": "node"
+ },
+ {
+ "name": "WEBSITE_NODE_DEFAULT_VERSION",
+ "value": "~14"
+ }
+ ]
+ }
+ }
+ }
+]
+```
+++
+#### Linux
+
+The function app must have `"kind"` set to `"functionapp,linux"`, and it must have the `"reserved"` property set to `true`. Linux apps should also include a `linuxFxVersion` property under `siteConfig`. If you're just deploying code, the value for this property is determined by your desired runtime stack in the format `runtime|runtimeVersion`. Examples of the `linuxFxVersion` property include `python|3.7`, `node|14`, and `dotnet|3.1`.
+
+# [Bicep](#tab/bicep)
+
+```bicep
+resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
+ name: functionAppName
+ location: location
+ kind: 'functionapp,linux'
+ properties: {
+ reserved: true
+ serverFarmId: hostingPlan.id
+ siteConfig: {
+ alwaysOn: true
+ linuxFxVersion: 'node|14'
+ appSettings: [
+ {
+ name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
+ value: applicationInsightsName.properties.InstrumentationKey
+ }
+ {
+ name: 'AzureWebJobsStorage'
+ value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};EndpointSuffix=${environment().suffixes.storage};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ }
+ {
+ name: 'FUNCTIONS_EXTENSION_VERSION'
+ value: '~4'
+ }
+ {
+ name: 'FUNCTIONS_WORKER_RUNTIME'
+ value: 'node'
        }
      ]
    }
  }
}
```
+# [JSON](#tab/json)
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Web/sites",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('functionAppName')]",
+ "location": "[parameters('location')]",
+ "kind": "functionapp,linux",
+ "dependsOn": [
+ "[resourceId('Microsoft.Insights/components', parameters('applicationInsightsName'))]",
+ "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
+ ],
+ "properties": {
+ "reserved": true,
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "siteConfig": {
+ "alwaysOn": true,
+ "linuxFxVersion": "node|14",
+ "appSettings": [
+ {
+ "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
+ "value": "[reference(resourceId('Microsoft.Insights/components', parameters('applicationInsightsName')), '2020-02-02').InstrumentationKey]"
+ },
+ {
+ "name": "AzureWebJobsStorage",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};EndpointSuffix={1};AccountKey={2}', parameters('storageAccountName'), environment().suffixes.storage, listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
+ },
+ {
+ "name": "FUNCTIONS_EXTENSION_VERSION",
+ "value": "~4"
+ },
+ {
+ "name": "FUNCTIONS_WORKER_RUNTIME",
+ "value": "node"
+ }
+ ]
+ }
+ }
+ }
+]
+```
+ ### Custom Container Image
-If you are [deploying a custom container image](./functions-create-function-linux-custom-image.md), you must specify it with `linuxFxVersion` and include configuration that allows your image to be pulled, as in [Web App for Containers](../app-service/index.yml). Also, set `WEBSITES_ENABLE_APP_SERVICE_STORAGE` to `false`, since your app content is provided in the container itself:
+If you're [deploying a custom container image](./functions-create-function-linux-custom-image.md), you must specify it with `linuxFxVersion` and include configuration that allows your image to be pulled, as in [Web App for Containers](../app-service/index.yml). Also, set `WEBSITES_ENABLE_APP_SERVICE_STORAGE` to `false`, since your app content is provided in the container itself:
-```json
-{
- "apiVersion": "2016-03-01",
- "type": "Microsoft.Web/sites",
- "name": "[variables('functionAppName')]",
- "location": "[resourceGroup().location]",
- "kind": "functionapp",
- "dependsOn": [
- "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
- ],
- "properties": {
- "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "siteConfig": {
- "appSettings": [
+# [Bicep](#tab/bicep)
+
+```bicep
+resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
+ name: functionAppName
+ location: location
+ kind: 'functionapp'
+ properties: {
+ serverFarmId: hostingPlan.id
+ siteConfig: {
+ appSettings: [
{
- "name": "AzureWebJobsStorage",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-01').keys[0].value)]"
- },
+ name: 'AzureWebJobsStorage'
+ value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ }
{
- "name": "FUNCTIONS_WORKER_RUNTIME",
- "value": "node"
- },
+ name: 'FUNCTIONS_WORKER_RUNTIME'
+ value: 'node'
+ }
{
- "name": "WEBSITE_NODE_DEFAULT_VERSION",
- "value": "~14"
- },
+ name: 'WEBSITE_NODE_DEFAULT_VERSION'
+ value: '~14'
+ }
{
- "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~3"
- },
+ name: 'FUNCTIONS_EXTENSION_VERSION'
+ value: '~3'
+ }
{
- "name": "DOCKER_REGISTRY_SERVER_URL",
- "value": "[parameters('dockerRegistryUrl')]"
- },
+ name: 'DOCKER_REGISTRY_SERVER_URL'
+ value: dockerRegistryUrl
+ }
{
- "name": "DOCKER_REGISTRY_SERVER_USERNAME",
- "value": "[parameters('dockerRegistryUsername')]"
- },
+ name: 'DOCKER_REGISTRY_SERVER_USERNAME'
+ value: dockerRegistryUsername
+ }
{
- "name": "DOCKER_REGISTRY_SERVER_PASSWORD",
- "value": "[parameters('dockerRegistryPassword')]"
- },
+ name: 'DOCKER_REGISTRY_SERVER_PASSWORD'
+ value: dockerRegistryPassword
+ }
{
- "name": "WEBSITES_ENABLE_APP_SERVICE_STORAGE",
- "value": "false"
+ name: 'WEBSITES_ENABLE_APP_SERVICE_STORAGE'
+ value: 'false'
}
- ],
- "linuxFxVersion": "DOCKER|myacr.azurecr.io/myimage:mytag"
+ ]
+ linuxFxVersion: 'DOCKER|myacr.azurecr.io/myimage:mytag'
    }
  }
+ dependsOn: [
+ storageAccount
+ ]
}
```
+# [JSON](#tab/json)
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Web/sites",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('functionAppName')]",
+ "location": "[parameters('location')]",
+ "kind": "functionapp",
+ "dependsOn": [
+ "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
+ ],
+ "properties": {
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "siteConfig": {
+ "appSettings": [
+ {
+ "name": "AzureWebJobsStorage",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}', parameters('storageAccountName'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
+ },
+ {
+ "name": "FUNCTIONS_WORKER_RUNTIME",
+ "value": "node"
+ },
+ {
+ "name": "WEBSITE_NODE_DEFAULT_VERSION",
+ "value": "~14"
+ },
+ {
+ "name": "FUNCTIONS_EXTENSION_VERSION",
+ "value": "~3"
+ },
+ {
+ "name": "DOCKER_REGISTRY_SERVER_URL",
+ "value": "[parameters('dockerRegistryUrl')]"
+ },
+ {
+ "name": "DOCKER_REGISTRY_SERVER_USERNAME",
+ "value": "[parameters('dockerRegistryUsername')]"
+ },
+ {
+ "name": "DOCKER_REGISTRY_SERVER_PASSWORD",
+ "value": "[parameters('dockerRegistryPassword')]"
+ },
+ {
+ "name": "WEBSITES_ENABLE_APP_SERVICE_STORAGE",
+ "value": "false"
+ }
+ ],
+ "linuxFxVersion": "DOCKER|myacr.azurecr.io/myimage:mytag"
+ }
+ }
+ }
+]
+```
+++

## Deploy to Azure Arc

Azure Functions can be deployed to [Azure Arc-enabled Kubernetes](../app-service/overview-arc-integration.md). This process largely follows [deploying to an App Service plan](#deploy-on-app-service-plan), with a few differences to note.
-To create the app and plan resources, you must have already [created an App Service Kubernetes environment](../app-service/manage-create-arc-environment.md) for an Azure Arc-enabled Kubernetes cluster. These examples assume you have the resource ID of the custom location and App Service Kubernetes environment that you are deploying to. For most templates, you can supply these as parameters.
+To create the app and plan resources, you must have already [created an App Service Kubernetes environment](../app-service/manage-create-arc-environment.md) for an Azure Arc-enabled Kubernetes cluster. These examples assume you have the resource ID of the custom location and App Service Kubernetes environment that you're deploying to. For most Bicep files/ARM templates, you can supply these values as parameters.
+
+# [Bicep](#tab/bicep)
+
+```bicep
+param kubeEnvironmentId string
+param customLocationId string
+```
+
+# [JSON](#tab/json)
```json
-{
- "parameters": {
- "kubeEnvironmentId" : {
- "type": "string"
- },
- "customLocationId" : {
- "type": "string"
- }
+"parameters": {
+ "kubeEnvironmentId" : {
+ "type": "string"
+ },
+ "customLocationId" : {
+ "type": "string"
  }
}
```

++

Both sites and plans must reference the custom location through an `extendedLocation` field. This block sits outside of `properties`, peer to `kind` and `location`:
-```json
-{
- "extendedLocation": {
- "type": "customlocation",
- "name": "[parameters('customLocationId')]"
- },
+# [Bicep](#tab/bicep)
+
+```bicep
+resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
+ ...
+  extendedLocation: {
+    name: customLocationId
+  }
}
```
-The plan resource should use the Kubernetes (K1) SKU, and its `kind` field should be "linux,kubernetes". Within `properties`, `reserved` should be "true", and `kubeEnvironmentProfile.id` should be set to the App Service Kubernetes environment resource ID. An example plan might look like the following:
+# [JSON](#tab/json)
```json
{
  "type": "Microsoft.Web/serverfarms",
- "name": "[variables('hostingPlanName')]",
- "location": "[parameters('location')]",
- "apiVersion": "2020-12-01",
- "kind": "linux,kubernetes",
- "sku": {
- "name": "K1",
- "tier": "Kubernetes"
- },
- "extendedLocation": {
- "type": "customlocation",
- "name": "[parameters('customLocationId')]"
- },
- "properties": {
- "name": "[variables('hostingPlanName')]",
- "location": "[parameters('location')]",
- "workerSizeId": "0",
- "numberOfWorkers": "1",
- "kubeEnvironmentProfile": {
- "id": "[parameters('kubeEnvironmentId')]"
+ ...
+  "extendedLocation": {
+    "name": "[parameters('customLocationId')]"
+  }
-  "reserved": true
}
```
-The function app resource should have its `kind` field set to "functionapp,linux,kubernetes" or "functionapp,linux,kubernetes,container" depending on if you intend to deploy via code or container. An example function app might look like the following:
++
+The plan resource should use the Kubernetes (K1) SKU, and its `kind` field should be `linux,kubernetes`. Within `properties`, `reserved` should be `true`, and `kubeEnvironmentProfile.id` should be set to the App Service Kubernetes environment resource ID. An example plan might look like:
+
+# [Bicep](#tab/bicep)
+
+```bicep
+resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
+ name: hostingPlanName
+ location: location
+ kind: 'linux,kubernetes'
+ sku: {
+ name: 'K1'
+ tier: 'Kubernetes'
+ }
+ extendedLocation: {
+ name: customLocationId
+ }
+ properties: {
+ kubeEnvironmentProfile: {
+ id: kubeEnvironmentId
+ }
+ reserved: true
+ }
+}
+```
+
+# [JSON](#tab/json)
```json
- {
- "apiVersion": "2018-11-01",
- "type": "Microsoft.Web/sites",
- "name": "[variables('appName')]",
- "kind": "kubernetes,functionapp,linux,container",
- "location": "[parameters('location')]",
- "extendedLocation": {
- "type": "customlocation",
- "name": "[parameters('customLocationId')]"
- },
- "dependsOn": [
- "[resourceId('Microsoft.Insights/components', variables('appInsightsName'))]",
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
- "[variables('hostingPlanId')]"
- ],
- "properties": {
- "serverFarmId": "[variables('hostingPlanId')]",
- "siteConfig": {
- "linuxFxVersion": "DOCKER|mcr.microsoft.com/azure-functions/dotnet:3.0-appservice-quickstart",
- "appSettings": [
+"resources": [
+ {
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('hostingPlanName')]",
+ "location": "[parameters('location')]",
+ "kind": "linux,kubernetes",
+ "sku": {
+ "name": "K1",
+ "tier": "Kubernetes"
+ },
+ "extendedLocation": {
+ "name": "[parameters('customLocationId')]"
+ },
+ "properties": {
+ "kubeEnvironmentProfile": {
+ "id": "[parameters('kubeEnvironmentId')]"
+ },
+ "reserved": true
+ }
+ }
+]
+```
+++
+The function app resource should have its `kind` field set to **functionapp,linux,kubernetes** or **functionapp,linux,kubernetes,container**, depending on whether you intend to deploy via code or container. An example function app might look like:
+
+# [Bicep](#tab/bicep)
+
+```bicep
+resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
+ name: functionAppName
+ kind: 'kubernetes,functionapp,linux,container'
+ location: location
+ extendedLocation: {
+ name: customLocationId
+ }
+ properties: {
+    serverFarmId: hostingPlan.id
+ siteConfig: {
+ linuxFxVersion: 'DOCKER|mcr.microsoft.com/azure-functions/dotnet:3.0-appservice-quickstart'
+ appSettings: [
{
- "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~3"
- },
+ name: 'FUNCTIONS_EXTENSION_VERSION'
+ value: '~3'
+ }
{
- "name": "AzureWebJobsStorage",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2015-05-01-preview').key1)]"
-
- },
+ name: 'AzureWebJobsStorage'
+ value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ }
{
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference(resourceId('microsoft.insights/components/', variables('appInsightsName')), '2015-05-01').InstrumentationKey]"
+ name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
+ value: applicationInsightsName.properties.InstrumentationKey
}
- ],
- "alwaysOn": true
+ ]
+ alwaysOn: true
} }
+ dependsOn: [
+ storageAccount
+ hostingPlan
+ ]
}
```
+# [JSON](#tab/json)
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Web/sites",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('functionAppName')]",
+ "kind": "kubernetes,functionapp,linux,container",
+ "location": "[parameters('location')]",
+ "extendedLocation": {
+ "name": "[parameters('customLocationId')]"
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Insights/components', parameters('applicationInsightsName'))]",
+ "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
+ ],
+ "properties": {
+ "serverFarmId": "[parameters('hostingPlanName')]",
+ "siteConfig": {
+ "linuxFxVersion": "DOCKER|mcr.microsoft.com/azure-functions/dotnet:3.0-appservice-quickstart",
+ "appSettings": [
+ {
+ "name": "FUNCTIONS_EXTENSION_VERSION",
+ "value": "~3"
+ },
+ {
+ "name": "AzureWebJobsStorage",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}', parameters('storageAccountName'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
+ },
+ {
+ "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
+ "value": "[reference(resourceId('Microsoft.Insights/components', parameters('applicationInsightsName')), '2020-02-02').InstrumentationKey]"
+ }
+ ],
+ "alwaysOn": true
+ }
+ }
+ }
+]
+```
+++

## Customizing a deployment

A function app has many child resources that you can use in your deployment, including app settings and source control options. You also might choose to remove the **sourcecontrols** child resource, and use a different [deployment option](functions-continuous-deployment.md) instead.

> [!IMPORTANT]
-> To successfully deploy your application by using Azure Resource Manager, it's important to understand how resources are deployed in Azure. In the following example, top-level configurations are applied by using **siteConfig**. It's important to set these configurations at a top level, because they convey information to the Functions runtime and deployment engine. Top-level information is required before the child **sourcecontrols/web** resource is applied. Although it's possible to configure these settings in the child-level **config/appSettings** resource, in some cases your function app must be deployed *before* **config/appSettings** is applied. For example, when you are using functions with [Logic Apps](../logic-apps/index.yml), your functions are a dependency of another resource.
+> To successfully deploy your application by using Azure Resource Manager, it's important to understand how resources are deployed in Azure. In the following example, top-level configurations are applied by using `siteConfig`. It's important to set these configurations at a top level, because they convey information to the Functions runtime and deployment engine. Top-level information is required before the child **sourcecontrols/web** resource is applied. Although it's possible to configure these settings in the child-level **config/appSettings** resource, in some cases your function app must be deployed *before* **config/appSettings** is applied. For example, when you're using functions with [Logic Apps](../logic-apps/index.yml), your functions are a dependency of another resource.
+
+# [Bicep](#tab/bicep)
+
+```bicep
+resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
+ name: functionAppName
+ location: location
+ kind: 'functionapp'
+ properties: {
+ serverFarmId: hostingPlan.id
+ siteConfig: {
+ alwaysOn: true
+ appSettings: [
+ {
+ name: 'FUNCTIONS_EXTENSION_VERSION'
+ value: '~3'
+ }
+ {
+ name: 'Project'
+ value: 'src'
+ }
+ ]
+ }
+ }
+ dependsOn: [
+ storageAccount
+ ]
+}
+
+resource config 'Microsoft.Web/sites/config@2022-03-01' = {
+ parent: functionApp
+ name: 'appsettings'
+ properties: {
+ AzureWebJobsStorage: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ AzureWebJobsDashboard: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ FUNCTIONS_EXTENSION_VERSION: '~3'
+ FUNCTIONS_WORKER_RUNTIME: 'dotnet'
+ Project: 'src'
+ }
+ dependsOn: [
+ sourcecontrol
+ storageAccount
+ ]
+}
+
+resource sourcecontrol 'Microsoft.Web/sites/sourcecontrols@2022-03-01' = {
+ parent: functionApp
+ name: 'web'
+ properties: {
+ repoUrl: repoUrl
+ branch: branch
+ isManualIntegration: true
+ }
+}
+```
+
+# [JSON](#tab/json)
```json
-{
- "apiVersion": "2015-08-01",
- "name": "[parameters('appName')]",
- "type": "Microsoft.Web/sites",
- "kind": "functionapp",
- "location": "[parameters('location')]",
- "dependsOn": [
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
- "[resourceId('Microsoft.Web/serverfarms', parameters('appName'))]"
- ],
- "properties": {
- "serverFarmId": "[variables('appServicePlanName')]",
- "siteConfig": {
+"resources": [
+ {
+ "type": "Microsoft.Web/sites",
+ "apiVersion": "2022-03-01",
+ "name": "[variables('functionAppName')]",
+ "location": "[parameters('location')]",
+ "kind": "functionapp",
+ "properties": {
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
+ "siteConfig": {
"alwaysOn": true, "appSettings": [
- {
- "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~3"
- },
- {
- "name": "Project",
- "value": "src"
- }
+ {
+ "name": "FUNCTIONS_EXTENSION_VERSION",
+ "value": "~3"
+ },
+ {
+ "name": "Project",
+ "value": "src"
+ }
]
- }
- },
- "resources": [
- {
- "apiVersion": "2015-08-01",
- "name": "appsettings",
- "type": "config",
- "dependsOn": [
- "[resourceId('Microsoft.Web/Sites', parameters('appName'))]",
- "[resourceId('Microsoft.Web/Sites/sourcecontrols', parameters('appName'), 'web')]",
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
- ],
- "properties": {
- "AzureWebJobsStorage": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-01').keys[0].value)]",
- "AzureWebJobsDashboard": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-01').keys[0].value)]",
- "FUNCTIONS_EXTENSION_VERSION": "~3",
- "FUNCTIONS_WORKER_RUNTIME": "dotnet",
- "Project": "src"
      }
    },
- {
- "apiVersion": "2015-08-01",
- "name": "web",
- "type": "sourcecontrols",
- "dependsOn": [
- "[resourceId('Microsoft.Web/sites/', parameters('appName'))]"
- ],
- "properties": {
- "RepoUrl": "[parameters('sourceCodeRepositoryURL')]",
- "branch": "[parameters('sourceCodeBranch')]",
- "IsManualIntegration": "[parameters('sourceCodeManualIntegration')]"
- }
- }
- ]
-}
+ "dependsOn": [
+ "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
+ "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
+ ]
+ },
+ {
+ "type": "Microsoft.Web/sites/config",
+ "apiVersion": "2022-03-01",
+ "name": "[format('{0}/{1}', variables('functionAppName'), 'appsettings')]",
+ "properties": {
+ "AzureWebJobsStorage": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}', variables('storageAccountName'), listKeys(variables('storageAccountName'), '2021-09-01').keys[0].value)]",
+ "AzureWebJobsDashboard": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}', variables('storageAccountName'), listKeys(variables('storageAccountName'), '2021-09-01').keys[0].value)]",
+ "FUNCTIONS_EXTENSION_VERSION": "~3",
+ "FUNCTIONS_WORKER_RUNTIME": "dotnet",
+ "Project": "src"
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Web/sites', variables('functionAppName'))]",
+ "[resourceId('Microsoft.Web/sites/sourcecontrols', variables('functionAppName'), 'web')]",
+ "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
+ ]
+ },
+ {
+ "type": "Microsoft.Web/sites/sourcecontrols",
+ "apiVersion": "2022-03-01",
+ "name": "[format('{0}/{1}', variables('functionAppName'), 'web')]",
+ "properties": {
+ "repoUrl": "[parameters('repoURL')]",
+ "branch": "[parameters('branch')]",
+ "isManualIntegration": true
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Web/sites', variables('functionAppName'))]"
+ ]
+ }
+]
```

++

> [!TIP]
-> This template uses the [Project](https://github.com/projectkudu/kudu/wiki/Customizing-deployments#using-app-settings-instead-of-a-deployment-file) app settings value, which sets the base directory in which the Functions deployment engine (Kudu) looks for deployable code. In our repository, our functions are in a subfolder of the **src** folder. So, in the preceding example, we set the app settings value to `src`. If your functions are in the root of your repository, or if you are not deploying from source control, you can remove this app settings value.
+> This Bicep/ARM template uses the [Project](https://github.com/projectkudu/kudu/wiki/Customizing-deployments#using-app-settings-instead-of-a-deployment-file) app settings value, which sets the base directory in which the Functions deployment engine (Kudu) looks for deployable code. In our repository, our functions are in a subfolder of the **src** folder. So, in the preceding example, we set the app settings value to `src`. If your functions are in the root of your repository, or if you're not deploying from source control, you can remove this app settings value.
## Deploy your template
-You can use any of the following ways to deploy your template:
+You can use any of the following ways to deploy your Bicep file and template:
+
+# [Bicep](#tab/bicep)
+
+- [Azure CLI](../azure-resource-manager/bicep/deploy-cli.md)
+- [PowerShell](../azure-resource-manager/bicep/deploy-powershell.md)
+
+# [JSON](#tab/json)
-- [PowerShell](../azure-resource-manager/templates/deploy-powershell.md)
-- [Azure CLI](../azure-resource-manager/templates/deploy-cli.md)
- [Azure portal](../azure-resource-manager/templates/deploy-portal.md)
-- [REST API](../azure-resource-manager/templates/deploy-rest.md)
+- [Azure CLI](../azure-resource-manager/templates/deploy-cli.md)
+- [PowerShell](../azure-resource-manager/templates/deploy-powershell.md)
++

### Deploy to Azure button
+> [!NOTE]
+> This method doesn't support deploying Bicep files currently.
+ Replace ```<url-encoded-path-to-azuredeploy-json>``` with a [URL-encoded](https://www.bing.com/search?q=url+encode) version of the raw path of your `azuredeploy.json` file in GitHub.
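
If you'd rather not encode the path by hand, ARM templates and Bicep expose a `uriComponent()` function that performs the same percent-encoding. A minimal sketch, assuming a hypothetical repository path:

```bicep
// Hypothetical raw path to your template; substitute your own organization and repository.
var rawTemplatePath = 'https://raw.githubusercontent.com/<org>/<repo>/main/azuredeploy.json'

// uriComponent() percent-encodes the value, which you can then paste after
// .../Microsoft.Template/uri/ in the Deploy to Azure link.
output encodedTemplatePath string = uriComponent(rawTemplatePath)
```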
-Here is an example that uses markdown:
+Here's an example that uses markdown:
```markdown
[![Deploy to Azure](https://azuredeploy.net/deploybutton.png)](https://portal.azure.com/#create/Microsoft.Template/uri/<url-encoded-path-to-azuredeploy-json>)
```
-Here is an example that uses HTML:
+Here's an example that uses HTML:
```html
<a href="https://portal.azure.com/#create/Microsoft.Template/uri/<url-encoded-path-to-azuredeploy-json>" target="_blank"><img src="https://azuredeploy.net/deploybutton.png"></a>
```
Here is an example that uses HTML:
### Deploy using PowerShell
-The following PowerShell commands create a resource group and deploy a template that creates a function app with its required resources. To run locally, you must have [Azure PowerShell](/powershell/azure/install-az-ps) installed. Run [`Connect-AzAccount`](/powershell/module/az.accounts/connect-azaccount) to sign in.
+The following PowerShell commands create a resource group and deploy a Bicep file/ARM template that creates a function app with its required resources. To run locally, you must have [Azure PowerShell](/powershell/azure/install-az-ps) installed. Run [`Connect-AzAccount`](/powershell/module/az.accounts/connect-azaccount) to sign in.
+
+# [Bicep](#tab/bicep)
```powershell
# Register Resource Providers if they're not already registered
Register-AzResourceProvider -ProviderNamespace "microsoft.storage"
# Create a resource group for the function app
New-AzResourceGroup -Name "MyResourceGroup" -Location 'West Europe'
-# Create the parameters for the file, which for this template is the function app name.
-$TemplateParams = @{"appName" = "<function-app-name>"}
+# Deploy the template
+New-AzResourceGroupDeployment -ResourceGroupName "MyResourceGroup" -TemplateFile main.bicep -Verbose
+```
+
+# [JSON](#tab/json)
+
+```powershell
+# Register Resource Providers if they're not already registered
+Register-AzResourceProvider -ProviderNamespace "microsoft.web"
+Register-AzResourceProvider -ProviderNamespace "microsoft.storage"
+
+# Create a resource group for the function app
+New-AzResourceGroup -Name "MyResourceGroup" -Location 'West Europe'
# Deploy the template
-New-AzResourceGroupDeployment -ResourceGroupName "MyResourceGroup" -TemplateFile template.json -TemplateParameterObject $TemplateParams -Verbose
+New-AzResourceGroupDeployment -ResourceGroupName "MyResourceGroup" -TemplateFile azuredeploy.json -Verbose
```
-To test out this deployment, you can use a [template like this one](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.web/function-app-create-dynamic/azuredeploy.json) that creates a function app on Windows in a Consumption plan. Replace `<function-app-name>` with a unique name for your function app.
++
+To test out this deployment, you can use a [template like this one](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/function-app-create-dynamic) that creates a function app on Windows in a Consumption plan.
## Next steps
azure-functions Functions Premium Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-premium-plan.md
See the complete regional availability of Functions on the [Azure web site](http
|Brazil South| 100 | 20 |
|Canada Central| 100 | 20 |
|Central India| 100 | 20 |
-|Central US| 100 | 40 |
+|Central US| 100 | 80 |
|China East 2| 100 | 20 |
|China North 2| 100 | 20 |
|East Asia| 100 | 20 |
-|East US | 100 | 60 |
-|East US 2| 100 | 40 |
+|East US | 100 | 80 |
+|East US 2| 100 | 60 |
|France Central| 100 | 20 |
|Germany West Central| 100 | 20 |
|Japan East| 100 | 20 |
See the complete regional availability of Functions on the [Azure web site](http
|USGov Texas| 100 | Not Available |
|USGov Virginia| 100 | 20 |
|West Central US| 100 | 20 |
-|West Europe| 100 | 40 |
+|West Europe| 100 | 80 |
|West India| 100 | 20 |
|West US| 100 | 20 |
|West US 2| 100 | 20 |
azure-functions Language Support Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/language-support-policy.md
To learn more about specific language version support policy timeline, visit the
* .NET - [dotnet.microsoft.com](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)
* Node - [github.com](https://github.com/nodejs/Release#release-schedule)
* Java - [azul.com](https://www.azul.com/products/azul-support-roadmap/)
-* PowerShell - [docs.microsoft.com](/powershell/scripting/powershell-support-lifecycle#powershell-end-of-support-dates)
+* PowerShell - [Microsoft technical documentation](/powershell/scripting/powershell-support-lifecycle#powershell-end-of-support-dates)
* Python - [devguide.python.org](https://devguide.python.org/#status-of-python-branches)

## Configuring language versions
azure-functions Security Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/security-concepts.md
Connection strings and other credentials stored in application settings gives al
Managed identities can be used in place of secrets for connections from some triggers and bindings. See [Identity-based connections](#identity-based-connections).
-For more information, see [How to use managed identities for App Service and Azure Functions](../app-service/overview-managed-identity.md?toc=%2fazure%2fazure-functions%2ftoc.json).
+For more information, see [How to use managed identities for App Service and Azure Functions](../app-service/overview-managed-identity.md?toc=/azure/azure-functions/toc.json).
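
As a hedged illustration of what an identity-based connection can look like in a template, the documented `AzureWebJobsStorage__accountName` setting replaces the `AzureWebJobsStorage` connection string; the resource and symbol names here are assumptions:

```bicep
// Sketch: the function app authenticates to storage with its system-assigned
// managed identity instead of an account key. The identity still needs an
// appropriate role assignment (for example, Storage Blob Data Owner) on the
// account, which isn't shown here.
resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
  name: functionAppName
  location: location
  kind: 'functionapp'
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    serverFarmId: hostingPlan.id
    siteConfig: {
      appSettings: [
        {
          name: 'AzureWebJobsStorage__accountName'
          value: storageAccountName
        }
      ]
    }
  }
}
```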
#### Restrict CORS access
You can also encrypt settings by default in the local.settings.json file when de
While application settings are sufficient for most functions, you may want to share the same secrets across multiple services. In this case, redundant storage of secrets results in more potential vulnerabilities. A more secure approach is to use a central secret storage service and use references to this service instead of the secrets themselves.
-[Azure Key Vault](../key-vault/general/overview.md) is a service that provides centralized secrets management, with full control over access policies and audit history. You can use a Key Vault reference in the place of a connection string or key in your application settings. To learn more, see [Use Key Vault references for App Service and Azure Functions](../app-service/app-service-key-vault-references.md?toc=%2fazure%2fazure-functions%2ftoc.json).
+[Azure Key Vault](../key-vault/general/overview.md) is a service that provides centralized secrets management, with full control over access policies and audit history. You can use a Key Vault reference in the place of a connection string or key in your application settings. To learn more, see [Use Key Vault references for App Service and Azure Functions](../app-service/app-service-key-vault-references.md?toc=/azure/azure-functions/toc.json).
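
A Key Vault reference is an ordinary app setting whose value uses the documented `@Microsoft.KeyVault(...)` syntax. A minimal sketch, assuming a vault named `myvault` and a secret named `MyStorageConnection` (both placeholders):

```bicep
// Sketch: resolve AzureWebJobsStorage from Key Vault at runtime instead of
// storing the connection string in app settings. The app's managed identity
// must have permission to read secrets from the vault.
resource appSettings 'Microsoft.Web/sites/config@2022-03-01' = {
  parent: functionApp
  name: 'appsettings'
  properties: {
    AzureWebJobsStorage: '@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/MyStorageConnection/)'
  }
}
```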
### Identity-based connections
Restricting network access to your function app lets you control who can access
### Set access restrictions
-Access restrictions allow you to define lists of allow/deny rules to control traffic to your app. Rules are evaluated in priority order. If there are no rules defined, then your app will accept traffic from any address. To learn more, see [Azure App Service Access Restrictions](../app-service/app-service-ip-restrictions.md?toc=%2fazure%2fazure-functions%2ftoc.json).
+Access restrictions allow you to define lists of allow/deny rules to control traffic to your app. Rules are evaluated in priority order. If there are no rules defined, then your app will accept traffic from any address. To learn more, see [Azure App Service Access Restrictions](../app-service/app-service-ip-restrictions.md?toc=/azure/azure-functions/toc.json).
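As a sketch of one way to add such a rule, the Azure CLI exposes access restrictions for function apps (the resource names and IP range below are placeholders):

```
az functionapp config access-restriction add \
  --resource-group myResourceGroup \
  --name myFunctionApp \
  --rule-name allowCorpNet \
  --action Allow \
  --ip-address 203.0.113.0/24 \
  --priority 100
```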
### Private site access
azure-glossary-cloud-terminology Azure Glossary Cloud Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-glossary-cloud-terminology.md
The compute resources that [Azure App Service](app-service/overview.md) provides
## availability set

A collection of virtual machines that are managed together to provide application redundancy and reliability. The use of an availability set ensures that during either a planned or unplanned maintenance event at least one virtual machine is available.
-See [Manage the availability of Windows virtual machines](./virtual-machines/availability.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json) and [Manage the availability of Linux virtual machines](./virtual-machines/availability.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json)
+See [Manage the availability of Windows virtual machines](./virtual-machines/availability.md?toc=/azure/virtual-machines/windows/toc.json) and [Manage the availability of Linux virtual machines](./virtual-machines/availability.md?toc=/azure/virtual-machines/linux/toc.json)
## <a name="classic-model"></a>Azure classic deployment model One of two [deployment models](./azure-resource-manager/management/deployment-models.md) used to deploy resources in Azure (the new model is Azure Resource Manager). Some Azure services support only the Resource Manager deployment model, some support only the classic deployment model, and some support both. The documentation for each Azure service specifies which model(s) they support.
One of two [deployment models](./azure-resource-manager/management/deployment-mo
## fault domain

The collection of virtual machines in an availability set that can possibly fail at the same time. An example is a group of machines in a rack that share a common power source and network switch. In Azure, the virtual machines in an availability set are automatically separated across multiple fault domains.
-See [Manage the availability of Windows virtual machines](./virtual-machines/availability.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json) or [Manage the availability of Linux virtual machines](./virtual-machines/availability.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json)
+See [Manage the availability of Windows virtual machines](./virtual-machines/availability.md?toc=/azure/virtual-machines/windows/toc.json) or [Manage the availability of Linux virtual machines](./virtual-machines/availability.md?toc=/azure/virtual-machines/linux/toc.json)
## geo

A defined boundary for data residency that typically contains two or more regions. The boundaries may be within or beyond national borders and are influenced by tax regulation. Every geo has at least one region. Examples of geos are Asia Pacific and Japan. Also called *geography*.
See [Active Geo-Replication for Azure SQL Database](/azure/azure-sql/database/au
## image

A file that contains the operating system and application configuration that can be used to create any number of virtual machines. In Azure there are two types of images: VM image and OS image. A VM image includes an operating system and all disks attached to a virtual machine when the image is created. An OS image contains only a generalized operating system with no data disk configurations.
-See [Navigate and select Windows virtual machine images in Azure with PowerShell or the CLI](virtual-machines/windows/cli-ps-findimage.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json)
+See [Navigate and select Windows virtual machine images in Azure with PowerShell or the CLI](virtual-machines/windows/cli-ps-findimage.md?toc=/azure/virtual-machines/windows/toc.json)
## limits

The number of resources that can be created or the performance benchmark that can be achieved. Limits are typically associated with subscriptions, services, and offerings.
A tenant is a group of users or an organization that share access with specific
## update domain

The collection of virtual machines in an availability set that are updated at the same time. Virtual machines in the same update domain are restarted together during planned maintenance. Azure never restarts more than one update domain at a time. Also referred to as an upgrade domain.
-See [Manage the availability of Windows virtual machines](./virtual-machines/availability.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json) and [Manage the availability of Linux virtual machines](./virtual-machines/availability.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json)
+See [Manage the availability of Windows virtual machines](./virtual-machines/availability.md?toc=/azure/virtual-machines/windows/toc.json) and [Manage the availability of Linux virtual machines](./virtual-machines/availability.md?toc=/azure/virtual-machines/linux/toc.json)
## <a name="vm"></a>virtual machine
-The software implementation of a physical computer that runs an operating system. Multiple virtual machines can run simultaneously on the same hardware. In Azure, virtual machines are available in a variety of sizes.
-See [Virtual Machines documentation](https://azure.microsoft.com/documentation/services/virtual-machines/)
+The software implementation of a physical computer that runs an operating system. Multiple virtual machines can run simultaneously on the same hardware. In Azure, virtual machines are available in a variety of sizes. For more information, see [Virtual Machines documentation](/azure/virtual-machines/).
## <a name="vm-extension"></a>virtual machine extension A resource that implements behaviors or features that either help other programs work or provide the ability for you to interact with a running computer. For example, you could use the VM Access extension to reset or modify remote access values on an Azure virtual machine. <!-- This definition seems obscure to me; maybe a list of examples would work better than a conceptual definition? -->
-See [About virtual machine extensions and features (Windows)](./virtual-machines/extensions/features-windows.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json) or [About virtual machine extensions and features (Linux)](./virtual-machines/extensions/features-linux.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json)
+See [About virtual machine extensions and features (Windows)](./virtual-machines/extensions/features-windows.md?toc=/azure/virtual-machines/windows/toc.json) or [About virtual machine extensions and features (Linux)](./virtual-machines/extensions/features-linux.md?toc=/azure/virtual-machines/linux/toc.json)
## <a name="vnet"></a>virtual network A network that provides connectivity between your Azure resources that is isolated from all other Azure tenants. An [Azure VPN Gateway](vpn-gateway/vpn-gateway-about-vpngateways.md) lets you establish connections between virtual networks and between a virtual network and an on-premises network. You can fully control the IP address blocks, DNS settings, security policies, and route tables within a virtual network.
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Clients First Business Solutions LLC](https://www.clientsfirst-us.com)|
|[ClearShark](https://clearshark.com/)|
|[CloudFit Software, LLC](https://www.cloudfitsoftware.com/)|
-|[Cloud Navigator, Inc - formerly ISC](https://www.cloudnav.com )|
+|[Cloud Navigator, Inc - formerly ISC](https://www.cloudnav.com)|
|[CNSS - Cherokee Nation System Solutions LLC](https://cherokee-federal.com/about/cherokee-nation-system-solutions)|
|[CodeLynx, LLC](http://www.codelynx.com/)|
|[Columbus US, Inc.](https://www.columbusglobal.com)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Norseman, Inc](https://www.norseman.com)|
|[Nortec](https://www.nortec.com)|
|[Northrop Grumman](https://www.northropgrumman.com)|
-|[NTS Cloud](http://ntscloud.com/ )|
+|[NTS Cloud](http://ntscloud.com/)|
|[NTT America, Inc.](https://www.us.ntt.net)|
|[Nubelity LLC](http://www.nubelity.com)|
|[NuSoft Solutions (Atrio Systems, Inc.)](https://nusoftsolutions.com)|
azure-monitor Agent Linux Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux-troubleshoot.md
-# How to troubleshoot issues with the Log Analytics agent for Linux
+# Troubleshoot issues with the Log Analytics agent for Linux
-This article provides help troubleshooting errors you might experience with the Log Analytics agent for Linux in Azure Monitor and suggests possible solutions to resolve them.
+This article provides help in troubleshooting errors you might experience with the Log Analytics agent for Linux in Azure Monitor and suggests possible solutions to resolve them.
## Log Analytics Troubleshooting Tool
-The Log Analytics Agent Linux Troubleshooting Tool is a script designed to help find and diagnose issues with the Log Analytics Agent. It is automatically included with the agent upon installation. Running the tool should be the first step in diagnosing an issue.
+The Log Analytics agent for Linux Troubleshooting Tool is a script designed to help find and diagnose issues with the Log Analytics agent. It's automatically included with the agent upon installation. Running the tool should be the first step in diagnosing an issue.
-### How to Use
+### Use the Troubleshooting Tool
+
+To run the Troubleshooting Tool, paste the following command into a terminal window on a machine with the Log Analytics agent:
-The Troubleshooting Tool can be run by pasting the following command into a terminal window on a machine with the Log Analytics agent:
`sudo /opt/microsoft/omsagent/bin/troubleshooter`
-### Manual Installation
+### Manual installation
-The Troubleshooting Tool is automatically included upon installation of the Log Analytics Agent. However, if installation fails in any way, it can also be installed manually by following the steps below.
+The Troubleshooting Tool is automatically included when the Log Analytics agent is installed. If installation fails in any way, you can also install the tool manually:
-1. Ensure that the [GNU Project Debugger (GDB)](https://www.gnu.org/software/gdb/) is installed on the machine since the troubleshooter relies on it.
-2. Copy the troubleshooter bundle onto your machine: `wget https://raw.github.com/microsoft/OMS-Agent-for-Linux/master/source/code/troubleshooter/omsagent_tst.tar.gz`
-3. Unpack the bundle: `tar -xzvf omsagent_tst.tar.gz`
-4. Run the manual installation: `sudo ./install_tst`
+1. Ensure that the [GNU Project Debugger (GDB)](https://www.gnu.org/software/gdb/) is installed on the machine because the troubleshooter relies on it.
+1. Copy the troubleshooter bundle onto your machine: `wget https://raw.github.com/microsoft/OMS-Agent-for-Linux/master/source/code/troubleshooter/omsagent_tst.tar.gz`
+1. Unpack the bundle: `tar -xzvf omsagent_tst.tar.gz`
+1. Run the manual installation: `sudo ./install_tst`
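Taken together, the manual installation amounts to the following sequence (a sketch; the `apt-get` line assumes a Debian-based system, so substitute your own package manager for GDB):

```
# Install GDB, which the troubleshooter relies on (Debian/Ubuntu shown)
sudo apt-get install -y gdb
# Download, unpack, and install the troubleshooter bundle
wget https://raw.github.com/microsoft/OMS-Agent-for-Linux/master/source/code/troubleshooter/omsagent_tst.tar.gz
tar -xzvf omsagent_tst.tar.gz
sudo ./install_tst
```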
-### Scenarios Covered
+### Scenarios covered
-Below is a list of scenarios checked by the Troubleshooting Tool:
+The Troubleshooting Tool checks the following scenarios:
-1. Agent is unhealthy, heartbeat doesn't work properly
-2. Agent doesn't start, can't connect to Log Analytic Services
-3. Agent syslog isn't working
-4. Agent has high CPU / memory usage
-5. Agent having installation issues
-6. Agent custom logs aren't working
-7. Collect Agent logs
+- The agent is unhealthy; the heartbeat doesn't work properly.
+- The agent doesn't start or can't connect to Log Analytics.
+- The agent Syslog isn't working.
+- The agent has high CPU or memory usage.
+- The agent has installation issues.
+- The agent custom logs aren't working.
+- Agent logs can't be collected.
-For more details, please check out our [GitHub documentation](https://github.com/microsoft/OMS-Agent-for-Linux/blob/master/docs/Troubleshooting-Tool.md).
+For more information, see the [Troubleshooting Tool documentation on GitHub](https://github.com/microsoft/OMS-Agent-for-Linux/blob/master/docs/Troubleshooting-Tool.md).
> [!NOTE]
- > Please run the Log Collector tool when you experience an issue. Having the logs initially will greatly help our support team troubleshoot your issue quicker.
+ > Run the Log Collector tool when you experience an issue. Having the logs initially will help our support team troubleshoot your issue faster.
-## Purge and Re-Install the Linux Agent
+## Purge and reinstall the Linux agent
-We've seen that a clean re-install of the Agent will fix most issues. In fact this may be the first suggestion from Support to get the Agent into a uncorrupted state from our support team. Running the troubleshooter, log collect, and attempting a clean re-install will help solve issues more quickly.
+A clean reinstall of the agent fixes most issues. This task might be the first suggestion from our support team to get the agent into an uncorrupted state. Running the Troubleshooting Tool and Log Collector tool and attempting a clean reinstall helps to solve issues more quickly.
1. Download the purge script:
-- `$ wget https://raw.githubusercontent.com/microsoft/OMS-Agent-for-Linux/master/tools/purge_omsagent.sh`
-2. Run the purge script (with sudo permissions):
-- `$ sudo sh purge_omsagent.sh`
+
+ `$ wget https://raw.githubusercontent.com/microsoft/OMS-Agent-for-Linux/master/tools/purge_omsagent.sh`
+1. Run the purge script (with sudo permissions):
+
+ `$ sudo sh purge_omsagent.sh`
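To confirm the purge completed, you can check that the agent's standard install locations are gone (a sketch; these are the paths used elsewhere in this article):

```
# Both commands should report "No such file or directory" after a successful purge
ls /opt/microsoft/omsagent
ls /var/opt/microsoft/omsagent
```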
-## Important log locations and Log Collector tool
+## Important log locations and the Log Collector tool
File | Path
- | --
Log Analytics agent for Linux log file | `/var/opt/microsoft/omsagent/<workspace id>/log/omsagent.log`
Log Analytics agent configuration log file | `/var/opt/microsoft/omsconfig/omsconfig.log`
- We recommend you to use our log collector tool to retrieve important logs for troubleshooting or before submitting a GitHub issue. You can read more about the tool and how to run it [here](https://github.com/Microsoft/OMS-Agent-for-Linux/blob/master/tools/LogCollector/OMS_Linux_Agent_Log_Collector.md).
+ We recommend that you use the Log Collector tool to retrieve important logs for troubleshooting or before you submit a GitHub issue. For more information about the tool and how to run it, see [OMS Linux Agent Log Collector](https://github.com/Microsoft/OMS-Agent-for-Linux/blob/master/tools/LogCollector/OMS_Linux_Agent_Log_Collector.md).
## Important configuration files
- Category | File Location
+ Category | File location
-- | --
Syslog | `/etc/syslog-ng/syslog-ng.conf` or `/etc/rsyslog.conf` or `/etc/rsyslog.d/95-omsagent.conf`
Performance, Nagios, Zabbix, Log Analytics output and general agent | `/etc/opt/microsoft/omsagent/<workspace id>/conf/omsagent.conf`
- Additional configurations | `/etc/opt/microsoft/omsagent/<workspace id>/conf/omsagent.d/*.conf`
+ Extra configurations | `/etc/opt/microsoft/omsagent/<workspace id>/conf/omsagent.d/*.conf`
> [!NOTE]
- > Editing configuration files for performance counters and Syslog is overwritten if the collection is configured from the [Agents configuration](../agents/agent-data-sources.md#configuring-data-sources) in the Azure portal for your workspace. To disable configuration for all agents, disable collection from **Agents configuration** or for a single agent run the following:
+ > Editing configuration files for performance counters and Syslog is overwritten if the collection is configured from the [agent's configuration](../agents/agent-data-sources.md#configuring-data-sources) in the Azure portal for your workspace. To disable configuration for all agents, disable collection from **Agents configuration**. For a single agent, run the following script:
+>
> `sudo /opt/microsoft/omsconfig/Scripts/OMS_MetaConfigHelper.py --disable && sudo rm /etc/opt/omi/conf/omsconfig/configuration/Current.mof* /etc/opt/omi/conf/omsconfig/configuration/Pending.mof*`

## Installation error codes
-| Error Code | Meaning |
+| Error code | Meaning |
| | |
-| NOT_DEFINED | Because the necessary dependencies are not installed, the auoms auditd plugin will not be installed. Installation of auoms failed, install package auditd. |
-| 2 | Invalid option provided to the shell bundle. Run `sudo sh ./omsagent-*.universal*.sh --help` for usage |
+| NOT_DEFINED | Because the necessary dependencies aren't installed, the auoms auditd plug-in won't be installed. Installation of auoms failed. Install package auditd. |
+| 2 | Invalid option provided to the shell bundle. Run `sudo sh ./omsagent-*.universal*.sh --help` for usage. |
| 3 | No option provided to the shell bundle. Run `sudo sh ./omsagent-*.universal*.sh --help` for usage. |
-| 4 | Invalid package type OR invalid proxy settings; omsagent-*rpm*.sh packages can only be installed on RPM-based systems, and omsagent-*deb*.sh packages can only be installed on Debian-based systems. It is recommend you use the universal installer from the [latest release](../vm/monitor-virtual-machine.md#agents). Also review to verify your proxy settings. |
-| 5 | The shell bundle must be executed as root OR there was 403 error returned during onboarding. Run your command using `sudo`. |
-| 6 | Invalid package architecture OR there was error 200 error returned during onboarding; omsagent-\*x64.sh packages can only be installed on 64-bit systems, and omsagent-\*x86.sh packages can only be installed on 32-bit systems. Download the correct package for your architecture from the [latest release](https://github.com/Microsoft/OMS-Agent-for-Linux/releases/latest). |
+| 4 | Invalid package type *or* invalid proxy settings. The omsagent-*rpm*.sh packages can only be installed on RPM-based systems. The omsagent-*deb*.sh packages can only be installed on Debian-based systems. We recommend that you use the universal installer from the [latest release](../vm/monitor-virtual-machine.md#agents). Also verify your proxy settings. |
+| 5 | The shell bundle must be executed as root *or* there was a 403 error returned during onboarding. Run your command by using `sudo`. |
+| 6 | Invalid package architecture *or* there was a 200 error returned during onboarding. The omsagent-\*x64.sh packages can only be installed on 64-bit systems. The omsagent-\*x86.sh packages can only be installed on 32-bit systems. Download the correct package for your architecture from the [latest release](https://github.com/Microsoft/OMS-Agent-for-Linux/releases/latest). |
| 17 | Installation of OMS package failed. Look through the command output for the root failure. |
| 18 | Installation of OMSConfig package failed. Look through the command output for the root failure. |
| 19 | Installation of OMI package failed. Look through the command output for the root failure. |
We've seen that a clean re-install of the Agent will fix most issues. In fact th
| 21 | Installation of Provider kits failed. Look through the command output for the root failure. |
| 22 | Installation of bundled package failed. Look through the command output for the root failure. |
| 23 | SCX or OMI package already installed. Use `--upgrade` instead of `--install` to install the shell bundle. |
-| 30 | Internal bundle error. File a [GitHub Issue](https://github.com/Microsoft/OMS-Agent-for-Linux/issues) with details from the output. |
-| 55 | Unsupported openssl version OR Cannot connect to Azure Monitor OR dpkg is locked OR missing curl program. |
+| 30 | Internal bundle error. File a [GitHub issue](https://github.com/Microsoft/OMS-Agent-for-Linux/issues) with details from the output. |
+| 55 | Unsupported openssl version *or* can't connect to Azure Monitor *or* dpkg is locked *or* missing curl program. |
| 61 | Missing Python ctypes library. Install the Python ctypes library or package (python-ctypes). |
-| 62 | Missing tar program, install tar. |
-| 63 | Missing sed program, install sed. |
-| 64 | Missing curl program, install curl. |
-| 65 | Missing gpg program, install gpg. |
+| 62 | Missing tar program. Install tar. |
+| 63 | Missing sed program. Install sed. |
+| 64 | Missing curl program. Install curl. |
+| 65 | Missing gpg program. Install gpg. |
## Onboarding error codes
-| Error Code | Meaning |
+| Error code | Meaning |
| | |
| 2 | Invalid option provided to the omsadmin script. Run `sudo sh /opt/microsoft/omsagent/bin/omsadmin.sh -h` for usage. |
| 3 | Invalid configuration provided to the omsadmin script. Run `sudo sh /opt/microsoft/omsagent/bin/omsadmin.sh -h` for usage. |
We've seen that a clean re-install of the Agent will fix most issues. In fact th
| 6 | Non-200 HTTP error received from Azure Monitor. See the full output of the omsadmin script for details. |
| 7 | Unable to connect to Azure Monitor. See the full output of the omsadmin script for details. |
| 8 | Error onboarding to Log Analytics workspace. See the full output of the omsadmin script for details. |
-| 30 | Internal script error. File a [GitHub Issue](https://github.com/Microsoft/OMS-Agent-for-Linux/issues) with details from the output. |
-| 31 | Error generating agent ID. File a [GitHub Issue](https://github.com/Microsoft/OMS-Agent-for-Linux/issues) with details from the output. |
+| 30 | Internal script error. File a [GitHub issue](https://github.com/Microsoft/OMS-Agent-for-Linux/issues) with details from the output. |
+| 31 | Error generating agent ID. File a [GitHub issue](https://github.com/Microsoft/OMS-Agent-for-Linux/issues) with details from the output. |
| 32 | Error generating certificates. See the full output of the omsadmin script for details. |
-| 33 | Error generating metaconfiguration for omsconfig. File a [GitHub Issue](https://github.com/Microsoft/OMS-Agent-for-Linux/issues) with details from the output. |
+| 33 | Error generating metaconfiguration for omsconfig. File a [GitHub issue](https://github.com/Microsoft/OMS-Agent-for-Linux/issues) with details from the output. |
| 34 | Metaconfiguration generation script not present. Retry onboarding with `sudo sh /opt/microsoft/omsagent/bin/omsadmin.sh -w <Workspace ID> -s <Workspace Key>`. |

## Enable debug logging
-### OMS output plugin debug
+### OMS output plug-in debug
- FluentD allows for plugin-specific logging levels allowing you to specify different log levels for inputs and outputs. To specify a different log level for OMS output, edit the general agent configuration at `/etc/opt/microsoft/omsagent/<workspace id>/conf/omsagent.conf`.
+ FluentD supports plug-in-specific logging levels, which let you specify different log levels for inputs and outputs. To specify a different log level for OMS output, edit the general agent configuration at `/etc/opt/microsoft/omsagent/<workspace id>/conf/omsagent.conf`.
- In the OMS output plugin, before the end of the configuration file, change the `log_level` property from `info` to `debug`:
+ In the OMS output plug-in, before the end of the configuration file, change the `log_level` property from `info` to `debug`:
```
<match oms.** docker.**>
We've seen that a clean re-install of the Agent will fix most issues. In fact th
</match>
```
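After the edit, the output plug-in section will look something like the following sketch; `out_oms` is the output plug-in type the agent uses, and any other parameters in your file stay unchanged:

```
<match oms.** docker.**>
  type out_oms
  log_level debug
  # other parameters in your file remain as they were
</match>
```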
-Debug logging allows you to see batched uploads to Azure Monitor separated by type, number of data items, and time taken to send:
+Debug logging allows you to see batched uploads to Azure Monitor separated by type, number of data items, and time taken to send.
-*Example debug enabled log:*
+Here's an example debug-enabled log:
```
Success sending oms.nagios x 1 in 0.14s
Success sending oms.syslog.authpriv.info x 1 in 0.91s
### Verbose output
-Instead of using the OMS output plugin you can also output data items directly to `stdout`, which is visible in the Log Analytics agent for Linux log file.
+Instead of using the OMS output plug-in, you can output data items directly to `stdout`, which is visible in the Log Analytics agent for Linux log file.
-In the Log Analytics general agent configuration file at `/etc/opt/microsoft/omsagent/<workspace id>/conf/omsagent.conf`, comment out the OMS output plugin by adding a `#` in front of each line:
+In the Log Analytics general agent configuration file at `/etc/opt/microsoft/omsagent/<workspace id>/conf/omsagent.conf`, comment out the OMS output plug-in by adding a `#` in front of each line:
```
#<match oms.** docker.**>
In the Log Analytics general agent configuration file at `/etc/opt/microsoft/oms
#</match>
```
-Below the output plugin, uncomment the following section by removing the `#` in front of each line:
+Below the output plug-in, uncomment the following section by removing the `#` in front of each line:
```
<match **>
Below the output plugin, uncomment the following section by removing the `#` in
</match>
```
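Once uncommented, the section will look like the following sketch; the `stdout` type writes each data item into the agent log instead of sending it through the OMS output plug-in:

```
<match **>
  type stdout
</match>
```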
-## Issue: Unable to connect through proxy to Azure Monitor
+## Issue: Unable to connect through proxy to Azure Monitor
### Probable causes
-* The proxy specified during onboarding was incorrect
-* The Azure Monitor and Azure Automation Service Endpoints are not included in the approved list in your datacenter
+* The proxy specified during onboarding was incorrect.
+* The Azure Monitor and Azure Automation service endpoints aren't included in the approved list in your datacenter.
### Resolution
-1. Reonboard to Azure Monitor with the Log Analytics agent for Linux by using the following command with the option `-v` enabled. It allows verbose output of the agent connecting through the proxy to Azure Monitor.
+1. Reonboard to Azure Monitor with the Log Analytics agent for Linux by using the following command with the option `-v` enabled. It allows verbose output of the agent connecting through the proxy to Azure Monitor:
`/opt/microsoft/omsagent/bin/omsadmin.sh -w <Workspace ID> -s <Workspace Key> -p <Proxy Conf> -v`
-2. Review the section [Update proxy settings](agent-manage.md#update-proxy-settings) to verify you have properly configured the agent to communicate through a proxy server.
+1. Review the section [Update proxy settings](agent-manage.md#update-proxy-settings) to verify you've properly configured the agent to communicate through a proxy server.
-3. Double-check that the endpoints outlined in the Azure Monitor [network firewall requirements](./log-analytics-agent.md#firewall-requirements) list are added to an allow list correctly. If you use Azure Automation, the necessary network configuration steps are linked above as well.
+1. Double-check that the endpoints outlined in the Azure Monitor [network firewall requirements](./log-analytics-agent.md#firewall-requirements) list are added to an allow list correctly. If you use Azure Automation, the necessary network configuration steps are also linked above.
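As a quick sanity check of proxy connectivity, you can issue a request through the proxy to the Log Analytics ingestion endpoint (a sketch; substitute your proxy address and workspace ID):

```
curl --proxy http://proxy.contoso.com:8080 -v https://<workspace id>.ods.opinsights.azure.com
```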
## Issue: You receive a 403 error when trying to onboard

### Probable causes
-* Date and Time is incorrect on Linux Server
-* Workspace ID and Workspace Key used are not correct
+* Date and time are incorrect on the Linux server.
+* The workspace ID and workspace key aren't correct.
### Resolution
-1. Check the time on your Linux server with the command date. If the time is +/- 15 minutes from current time, then onboarding fails. To correct this update the date and/or timezone of your Linux server.
-2. Verify you have installed the latest version of the Log Analytics agent for Linux. The newest version now notifies you if time skew is causing the onboarding failure.
-3. Reonboard using correct Workspace ID and Workspace Key following the installation instructions earlier in this article.
+1. Check the time on your Linux server with the `date` command. If the time is +/- 15 minutes from the current time, onboarding fails. To correct this situation, update the date and/or time zone of your Linux server.
+1. Verify that you've installed the latest version of the Log Analytics agent for Linux. The newest version now notifies you if time skew is causing the onboarding failure.
+1. Reonboard by using the correct workspace ID and workspace key in the installation instructions earlier in this article.
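For example, to inspect the server clock and correct drift (a sketch; `ntpdate` is an assumption and might not be installed on your distribution):

```
# Show the server's current time in UTC and compare it against a reliable reference
date -u
# One common way to correct drift (assumes the ntpdate package is available)
sudo ntpdate pool.ntp.org
```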
## Issue: You see a 500 and 404 error in the log file right after onboarding
-This is a known issue that occurs on first upload of Linux data into a Log Analytics workspace. This does not affect data being sent or service experience.
+This is a known issue that occurs on the first upload of Linux data into a Log Analytics workspace. This issue doesn't affect data being sent or service experience.
## Issue: You see omiagent using 100% CPU

### Probable causes
-A regression in nss-pem package [v1.0.3-5.el7](https://centos.pkgs.org/7/centos-x86_64/nss-pem-1.0.3-7.el7.x86_64.rpm.html) caused a severe performance issue, that we've been seeing come up a lot in Redhat/Centos 7.x distributions. To learn more about this issue, check the following documentation: Bug [1667121 Performance regression in libcurl](https://bugzilla.redhat.com/show_bug.cgi?id=1667121).
+A regression in nss-pem package [v1.0.3-5.el7](https://centos.pkgs.org/7/centos-x86_64/nss-pem-1.0.3-7.el7.x86_64.rpm.html) caused a severe performance issue. We've been seeing this issue come up a lot in Red Hat/CentOS 7.x distributions. To learn more about this issue, see [1667121 Performance regression in libcurl](https://bugzilla.redhat.com/show_bug.cgi?id=1667121).
-Performance related bugs don't happen all the time, and they are very difficult to reproduce. If you experience such issue with omiagent you should use the script omiHighCPUDiagnostics.sh which will collect the stack trace of the omiagent when exceeding a certain threshold.
+Performance-related bugs don't happen all the time, and they're difficult to reproduce. If you experience such an issue with omiagent, use the script `omiHighCPUDiagnostics.sh`, which will collect the stack trace of the omiagent when it exceeds a certain threshold.
-1. Download the script <br/>
+1. Download the script: <br/>
`wget https://raw.githubusercontent.com/microsoft/OMS-Agent-for-Linux/master/tools/LogCollector/source/omiHighCPUDiagnostics.sh`
-2. Run diagnostics for 24 hours with 30% CPU threshold <br/>
+1. Run diagnostics for 24 hours with 30% CPU threshold: <br/>
`bash omiHighCPUDiagnostics.sh --runtime-in-min 1440 --cpu-threshold 30`
-3. Callstack will be dumped in omiagent_trace file, If you notice many Curl and NSS function calls, follow resolution steps below.
+1. The call stack will be dumped in the omiagent_trace file. If you notice many curl and NSS function calls, follow these resolution steps.
-### Resolution (step by step)
+### Resolution
-1. Upgrade the nss-pem package to [v1.0.3-5.el7_6.1](https://centos.pkgs.org/7/centos-x86_64/nss-pem-1.0.3-7.el7.x86_64.rpm.html). <br/>
+1. Upgrade the nss-pem package to [v1.0.3-5.el7_6.1](https://centos.pkgs.org/7/centos-x86_64/nss-pem-1.0.3-7.el7.x86_64.rpm.html): <br/>
`sudo yum upgrade nss-pem`
-2. If nss-pem is not available for upgrade (mostly happens on Centos), then downgrade curl to 7.29.0-46. If by mistake you run "yum update", then curl will be upgraded to 7.29.0-51 and the issue will happen again. <br/>
+1. If nss-pem isn't available for upgrade, which mostly happens on CentOS, downgrade curl to 7.29.0-46. If you run `yum update` by mistake, curl will be upgraded to 7.29.0-51 and the issue will happen again: <br/>
`sudo yum downgrade curl libcurl`
-3. Restart OMI: <br/>
+1. Restart OMI: <br/>
`sudo scxadmin -restart`
-## Issue: You are not seeing forwarded Syslog messages
+## Issue: You're not seeing forwarded Syslog messages
### Probable causes
-* The configuration applied to the Linux server does not allow collection of the sent facilities and/or log levels
-* Syslog is not being forwarded correctly to the Linux server
-* The number of messages being forwarded per second are too great for the base configuration of the Log Analytics agent for Linux to handle
+* The configuration applied to the Linux server doesn't allow collection of the sent facilities or log levels.
+* Syslog isn't being forwarded correctly to the Linux server.
+* The number of messages being forwarded per second is too great for the base configuration of the Log Analytics agent for Linux to handle.
### Resolution
-* Verify the configuration in the Log Analytics workspace for Syslog has all the facilities and the correct log levels. Review [configure Syslog collection in the Azure portal](data-sources-syslog.md#configure-syslog-in-the-azure-portal)
-* Verify the native syslog messaging daemons (`rsyslog`, `syslog-ng`) are able to receive the forwarded messages
-* Check firewall settings on the Syslog server to ensure that messages are not being blocked
-* Simulate a Syslog message to Log Analytics using `logger` command
- * `logger -p local0.err "This is my test message"`
+* Verify the configuration in the Log Analytics workspace for Syslog has all the facilities and the correct log levels. Review [configure Syslog collection in the Azure portal](data-sources-syslog.md#configure-syslog-in-the-azure-portal).
+* Verify the native Syslog messaging daemons (`rsyslog`, `syslog-ng`) can receive the forwarded messages.
+* Check firewall settings on the Syslog server to ensure that messages aren't being blocked.
+* Simulate a Syslog message to Log Analytics by using a `logger` command: <br/>
+ `logger -p local0.err "This is my test message"`
-## Issue: You are receiving Errno address already in use in omsagent log file
+## Issue: You're receiving an "Errno address already in use" error in the omsagent log file
-If you see `[error]: unexpected error error_class=Errno::EADDRINUSE error=#<Errno::EADDRINUSE: Address already in use - bind(2) for "127.0.0.1" port 25224>` in omsagent.log.
+You see `[error]: unexpected error error_class=Errno::EADDRINUSE error=#<Errno::EADDRINUSE: Address already in use - bind(2) for "127.0.0.1" port 25224>` in omsagent.log.
### Probable causes
-This error indicates that the Linux Diagnostic extension (LAD) is installed side by side with the Log Analytics Linux VM extension, and it is using same port for syslog data collection as omsagent.
+This error indicates that the Linux diagnostic extension (LAD) is installed side by side with the Log Analytics Linux VM extension. It's using the same port for Syslog data collection as omsagent.
### Resolution
-1. As root, execute the following commands (note that 25224 is an example and it is possible that in your environment you see a different port number used by LAD):
+1. As root, execute the following commands. Note that 25224 is an example, and it's possible that in your environment you see a different port number used by LAD.
```
/opt/microsoft/omsagent/bin/configure_syslog.sh configure LAD 25229
This error indicates that the Linux Diagnostic extension (LAD) is installed side
You then need to edit the correct `rsyslogd` or `syslog_ng` config file and change the LAD-related configuration to write to port 25229.
-2. If the VM is running `rsyslogd`, the file to be modified is: `/etc/rsyslog.d/95-omsagent.conf` (if it exists, else `/etc/rsyslog`). If the VM is running `syslog_ng`, the file to be modified is: `/etc/syslog-ng/syslog-ng.conf`.
-3. Restart omsagent `sudo /opt/microsoft/omsagent/bin/service_control restart`.
-4. Restart syslog service.
+1. If the VM is running `rsyslogd`, the file to be modified is `/etc/rsyslog.d/95-omsagent.conf` (if it exists, else `/etc/rsyslog`). If the VM is running `syslog_ng`, the file to be modified is `/etc/syslog-ng/syslog-ng.conf`.
+1. Restart omsagent: `sudo /opt/microsoft/omsagent/bin/service_control restart`.
+1. Restart the Syslog service.
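For example, on a systemd-based distribution the last two steps might look like this (an assumption; use the service manager and Syslog daemon that apply to your system):

```
sudo /opt/microsoft/omsagent/bin/service_control restart
sudo systemctl restart rsyslog   # or: sudo systemctl restart syslog-ng
```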
-## Issue: You are unable to uninstall omsagent using purge option
+## Issue: You're unable to uninstall omsagent using the purge option
### Probable causes
-* Linux Diagnostic Extension is installed
-* Linux Diagnostic Extension was installed and uninstalled, but you still see an error about omsagent being used by mdsd and cannot be removed.
+* The Linux diagnostic extension is installed.
+* The Linux diagnostic extension was installed and uninstalled, but you still see an error about omsagent being used by mdsd and it can't be removed.
### Resolution
-1. Uninstall the Linux Diagnostic Extension (LAD).
-2. Remove Linux Diagnostic Extension files from the machine if they are present in the following location: `/var/lib/waagent/Microsoft.Azure.Diagnostics.LinuxDiagnostic-<version>/` and `/var/opt/microsoft/omsagent/LAD/`.
+1. Uninstall the Linux diagnostic extension.
+1. Remove Linux diagnostic extension files from the machine if they're present in the following location: `/var/lib/waagent/Microsoft.Azure.Diagnostics.LinuxDiagnostic-<version>/` and `/var/opt/microsoft/omsagent/LAD/`.
-## Issue: You cannot see data any Nagios data
+## Issue: You can't see any Nagios data
### Probable causes
-* Omsagent user does not have permissions to read from Nagios log file
-* Nagios source and filter have not been uncommented from omsagent.conf file
+* The omsagent user doesn't have permissions to read from the Nagios log file.
+* The Nagios source and filter haven't been uncommented from the omsagent.conf file.
### Resolution
-1. Add omsagent user to read from Nagios file by following these [instructions](https://github.com/Microsoft/OMS-Agent-for-Linux/blob/master/docs/OMS-Agent-for-Linux.md#nagios-alerts).
-2. In the Log Analytics agent for Linux general configuration file at `/etc/opt/microsoft/omsagent/<workspace id>/conf/omsagent.conf`, ensure that **both** the Nagios source and filter are uncommented.
+1. Add the omsagent user to read from the Nagios file by following these [instructions](https://github.com/Microsoft/OMS-Agent-for-Linux/blob/master/docs/OMS-Agent-for-Linux.md#nagios-alerts).
+1. In the Log Analytics agent for Linux general configuration file at `/etc/opt/microsoft/omsagent/<workspace id>/conf/omsagent.conf`, ensure that *both* the Nagios source and filter are uncommented.
```
<source>
This error indicates that the Linux Diagnostic extension (LAD) is installed side
</filter>
```
-## Issue: You are not seeing any Linux data
+## Issue: You aren't seeing any Linux data
### Probable causes
-* Onboarding to Azure Monitor failed
-* Connection to Azure Monitor is blocked
-* Virtual machine was rebooted
-* OMI package was manually upgraded to a newer version compared to what was installed by Log Analytics agent for Linux package
-* OMI is frozen, blocking OMS agent
-* DSC resource logs *class not found* error in `omsconfig.log` log file
-* Log Analytics agent for data is backed up
+* Onboarding to Azure Monitor failed.
+* Connection to Azure Monitor is blocked.
+* Virtual machine was rebooted.
+* OMI package was manually upgraded to a newer version compared to what was installed by the Log Analytics agent for Linux package.
+* OMI is frozen, blocking the OMS agent.
+* DSC resource logs *class not found* error in `omsconfig.log` log file.
+* Data on the Log Analytics agent is backed up.
* DSC logs *Current configuration does not exist. Execute Start-DscConfiguration command with -Path parameter to specify a configuration file and create a current configuration first.* in `omsconfig.log` log file, but no log message exists about `PerformRequiredConfigurationChecks` operations.

### Resolution
-1. Install all dependencies like auditd package.
-2. Check if onboarding to Azure Monitor was successful by checking if the following file exists: `/etc/opt/microsoft/omsagent/<workspace id>/conf/omsadmin.conf`. If it was not, reonboard using the omsadmin.sh command line [instructions](https://github.com/Microsoft/OMS-Agent-for-Linux/blob/master/docs/OMS-Agent-for-Linux.md#onboarding-using-the-command-line).
-4. If using a proxy, check proxy troubleshooting steps above.
-5. In some Azure distribution systems, omid OMI server daemon does not start after the virtual machine is rebooted. This will result in not seeing Audit, ChangeTracking, or UpdateManagement solution-related data. The workaround is to manually start omi server by running `sudo /opt/omi/bin/service_control restart`.
-6. After OMI package is manually upgraded to a newer version, it has to be manually restarted for Log Analytics agent to continue functioning. This step is required for some distros where OMI server does not automatically start after it is upgraded. Run `sudo /opt/omi/bin/service_control restart` to restart OMI.
-* In some situations, OMI can become frozen. The OMS agent may enter a blocked state waiting for OMI, blocking all data collection. The OMS agent process will be running but there will be no activity, evidenced by no new log lines (such as sent heartbeats) present in `omsagent.log`. Restart OMI with `sudo /opt/omi/bin/service_control restart` to recover the agent.
-7. If you see DSC resource *class not found* error in omsconfig.log, run `sudo /opt/omi/bin/service_control restart`.
-8. In some cases, when the Log Analytics agent for Linux cannot talk to Azure Monitor, data on the agent is backed up to the full buffer size: 50 MB. The agent should be restarted by running the following command `/opt/microsoft/omsagent/bin/service_control restart`.
+1. Install all dependencies like the auditd package.
+1. Check if onboarding to Azure Monitor was successful by checking if the following file exists: `/etc/opt/microsoft/omsagent/<workspace id>/conf/omsadmin.conf`. If it wasn't, reonboard by using the omsadmin.sh command-line [instructions](https://github.com/Microsoft/OMS-Agent-for-Linux/blob/master/docs/OMS-Agent-for-Linux.md#onboarding-using-the-command-line).
+1. If you're using a proxy, check the preceding proxy troubleshooting steps.
+1. In some Azure distribution systems, the omid OMI server daemon doesn't start after the virtual machine is rebooted. If this is the case, you won't see Audit, ChangeTracking, or UpdateManagement solution-related data. The workaround is to manually start the OMI server by running `sudo /opt/omi/bin/service_control restart`.
+1. After the OMI package is manually upgraded to a newer version, it must be manually restarted for the Log Analytics agent to continue functioning. This step is required for some distros where the OMI server doesn't automatically start after it's upgraded. Run `sudo /opt/omi/bin/service_control restart` to restart the OMI.
+
+ In some situations, the OMI can become frozen. The OMS agent might enter a blocked state waiting for the OMI, which blocks all data collection. The OMS agent process will be running but there will be no activity, which is evidenced by no new log lines (such as sent heartbeats) present in `omsagent.log`. Restart the OMI with `sudo /opt/omi/bin/service_control restart` to recover the agent.
+1. If you see a DSC resource *class not found* error in omsconfig.log, run `sudo /opt/omi/bin/service_control restart`.
+1. In some cases, when the Log Analytics agent for Linux can't talk to Azure Monitor, data on the agent is backed up to the full buffer size of 50 MB. The agent should be restarted by running the following command: `/opt/microsoft/omsagent/bin/service_control restart`.
> [!NOTE]
- > This issue is fixed in Agent version 1.1.0-28 or later
+ > This issue is fixed in agent version 1.1.0-28 or later.
>
-* If `omsconfig.log` log file does not indicate that `PerformRequiredConfigurationChecks` operations are running periodically on the system, there might be a problem with the cron job/service. Make sure cron job exists under `/etc/cron.d/OMSConsistencyInvoker`. If needed run the following commands to create the cron job:
-
- ```
- mkdir -p /etc/cron.d/
- echo "*/15 * * * * omsagent /opt/omi/bin/OMSConsistencyInvoker > 2>&1" | sudo tee /etc/cron.d/OMSConsistencyInvoker
- ```
-
- Also, make sure the cron service is running. You can use `service cron status` with Debian, Ubuntu, SUSE, or `service crond status` with RHEL, CentOS, Oracle Linux to check the status of this service. If the service does not exist, you can install the binaries and start the service using the following:
-
- **Ubuntu/Debian**
-
- ```
- # To Install the service binaries
- sudo apt-get install -y cron
- # To start the service
- sudo service cron start
- ```
-
- **SUSE**
-
- ```
- # To Install the service binaries
- sudo zypper in cron -y
- # To start the service
- sudo systemctl enable cron
- sudo systemctl start cron
- ```
-
- **RHEL/CeonOS**
-
- ```
- # To Install the service binaries
- sudo yum install -y crond
- # To start the service
- sudo service crond start
- ```
-
- **Oracle Linux**
-
- ```
- # To Install the service binaries
- sudo yum install -y cronie
- # To start the service
- sudo service crond start
- ```
-
-## Issue: When configuring collection from the portal for Syslog or Linux performance counters, the settings are not applied
+ * If the `omsconfig.log` log file doesn't indicate that `PerformRequiredConfigurationChecks` operations are running periodically on the system, there might be a problem with the cron job/service. Make sure the cron job exists under `/etc/cron.d/OMSConsistencyInvoker`. If needed, run the following commands to create the cron job:
+
+ ```
+ mkdir -p /etc/cron.d/
+ echo "*/15 * * * * omsagent /opt/omi/bin/OMSConsistencyInvoker > 2>&1" | sudo tee /etc/cron.d/OMSConsistencyInvoker
+ ```
+
+ * Also, make sure the cron service is running. You can use `service cron status` with Debian, Ubuntu, and SUSE or `service crond status` with RHEL, CentOS, and Oracle Linux to check the status of this service. If the service doesn't exist, you can install the binaries and start the service by using the following instructions:
+
+ **Ubuntu/Debian**
+
+ ```
+ # To Install the service binaries
+ sudo apt-get install -y cron
+ # To start the service
+ sudo service cron start
+ ```
+
+ **SUSE**
+
+ ```
+ # To Install the service binaries
+ sudo zypper in cron -y
+ # To start the service
+ sudo systemctl enable cron
+ sudo systemctl start cron
+ ```
+
+   **RHEL/CentOS**
+
+ ```
+ # To Install the service binaries
+ sudo yum install -y crond
+ # To start the service
+ sudo service crond start
+ ```
+
+ **Oracle Linux**
+
+ ```
+ # To Install the service binaries
+ sudo yum install -y cronie
+ # To start the service
+ sudo service crond start
+ ```
+
+## Issue: When you configure collection from the portal for Syslog or Linux performance counters, the settings aren't applied
### Probable causes
-* The Log Analytics agent for Linux has not picked up the latest configuration
-* The changed settings in the portal were not applied
+* The Log Analytics agent for Linux hasn't picked up the latest configuration.
+* The changed settings in the portal weren't applied.
### Resolution

**Background:** `omsconfig` is the Log Analytics agent for Linux configuration agent that looks for new portal-side configuration every five minutes. This configuration is then applied to the Log Analytics agent for Linux configuration files located at `/etc/opt/microsoft/omsagent/conf/omsagent.conf`.
-* In some cases, the Log Analytics agent for Linux configuration agent might not be able to communicate with the portal configuration service resulting in latest configuration not being applied.
- 1. Check that the `omsconfig` agent is installed by running `dpkg --list omsconfig` or `rpm -qi omsconfig`. If it is not installed, reinstall the latest version of the Log Analytics agent for Linux.
+In some cases, the Log Analytics agent for Linux configuration agent might not be able to communicate with the portal configuration service. This scenario results in the latest configuration not being applied.
+
+1. Check that the `omsconfig` agent is installed by running `dpkg --list omsconfig` or `rpm -qi omsconfig`. If it isn't installed, reinstall the latest version of the Log Analytics agent for Linux.
- 2. Check that the `omsconfig` agent can communicate with Azure Monitor by running the following command `sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/GetDscConfiguration.py'`. This command returns the configuration that agent receives from the service, including Syslog settings, Linux performance counters, and custom logs. If this command fails, run the following command `sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/PerformRequiredConfigurationChecks.py'`. This command forces the omsconfig agent to talk to Azure Monitor and retrieve the latest configuration.
+1. Check that the `omsconfig` agent can communicate with Azure Monitor by running the following command: `sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/GetDscConfiguration.py'`. This command returns the configuration that the agent receives from the service, including Syslog settings, Linux performance counters, and custom logs. If this command fails, run the following command: `sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/PerformRequiredConfigurationChecks.py'`. This command forces the omsconfig agent to talk to Azure Monitor and retrieve the latest configuration.
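Both checks as commands you can paste directly, taken from the step above:

```
# Return the configuration the agent last received from the service
sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/GetDscConfiguration.py'
# Force the omsconfig agent to retrieve the latest configuration from Azure Monitor
sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/PerformRequiredConfigurationChecks.py'
```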
-## Issue: You are not seeing any custom log data
+## Issue: You aren't seeing any custom log data
### Probable causes

* Onboarding to Azure Monitor failed.
-* The setting **Apply the following configuration to my Linux Servers** has not been selected.
-* omsconfig has not picked up the latest custom log configuration from the service.
-* Log Analytics agent for Linux user `omsagent` is unable to access the custom log due to permissions or not being found. You may see the following errors:
-* `[DATETIME] [warn]: file not found. Continuing without tailing it.`
-* `[DATETIME] [error]: file not accessible by omsagent.`
-* Known Issue with Race Condition fixed in Log Analytics agent for Linux version 1.1.0-217
+* The setting **Apply the following configuration to my Linux Servers** hasn't been selected.
+* `omsconfig` hasn't picked up the latest custom log configuration from the service.
+* The Log Analytics agent for Linux user `omsagent` is unable to access the custom log due to permissions or not being found. You might see the following errors:
+ * `[DATETIME] [warn]: file not found. Continuing without tailing it.`
+ * `[DATETIME] [error]: file not accessible by omsagent.`
+* Known issue with race condition fixed in Log Analytics agent for Linux version 1.1.0-217.
### Resolution

1. Verify onboarding to Azure Monitor was successful by checking if the following file exists: `/etc/opt/microsoft/omsagent/<workspace id>/conf/omsadmin.conf`. If not, either:
- 1. Reonboard using the omsadmin.sh command line [instructions](https://github.com/Microsoft/OMS-Agent-for-Linux/blob/master/docs/OMS-Agent-for-Linux.md#onboarding-using-the-command-line).
- 2. Under **Advanced Settings** in the Azure portal, ensure that the setting **Apply the following configuration to my Linux Servers** is enabled.
+ 1. Reonboard by using the omsadmin.sh command line [instructions](https://github.com/Microsoft/OMS-Agent-for-Linux/blob/master/docs/OMS-Agent-for-Linux.md#onboarding-using-the-command-line).
+ 1. Under **Advanced Settings** in the Azure portal, ensure that the setting **Apply the following configuration to my Linux Servers** is enabled.
-2. Check that the `omsconfig` agent can communicate with Azure Monitor by running the following command `sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/GetDscConfiguration.py'`. This command returns the configuration that agent receives from the service, including Syslog settings, Linux performance counters, and custom logs. If this command fails, run the following command `sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/PerformRequiredConfigurationChecks.py'`. This command forces the omsconfig agent to talk to Azure Monitor and retrieve the latest configuration.
+1. Check that the `omsconfig` agent can communicate with Azure Monitor by running the following command: `sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/GetDscConfiguration.py'`. This command returns the configuration that the agent receives from the service, including Syslog settings, Linux performance counters, and custom logs. If this command fails, run the following command: `sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/PerformRequiredConfigurationChecks.py'`. This command forces the `omsconfig` agent to talk to Azure Monitor and retrieve the latest configuration.
-**Background:** Instead of the Log Analytics agent for Linux running as a privileged user - `root`, the agent runs as the `omsagent` user. In most cases, explicit permission must be granted to this user in order for certain files to be read. To grant permission to `omsagent` user, run the following commands:
+**Background:** Instead of running as the privileged `root` user, the Log Analytics agent for Linux runs as the `omsagent` user. In most cases, explicit permission must be granted to this user for certain files to be read. To grant permission to the `omsagent` user, run the following commands:
-1. Add the `omsagent` user to specific group `sudo usermod -a -G <GROUPNAME> <USERNAME>`
-2. Grant universal read access to the required file `sudo chmod -R ugo+rx <FILE DIRECTORY>`
+1. Add the `omsagent` user to the specific group: `sudo usermod -a -G <GROUPNAME> <USERNAME>`.
+1. Grant universal read access to the required file: `sudo chmod -R ugo+rx <FILE DIRECTORY>`.
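For example, to let `omsagent` read a custom log under a hypothetical `/var/log/myapp` directory owned by a group named `myapp` (both names are placeholders):

```
# Add the omsagent user to the group that owns the log (hypothetical group name)
sudo usermod -a -G myapp omsagent
# Grant read and traverse access on the log directory (hypothetical path)
sudo chmod -R ugo+rx /var/log/myapp
```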
-There is a known issue with a race condition with the Log Analytics agent for Linux version earlier than 1.1.0-217. After updating to the latest agent, run the following command to get the latest version of the output plugin `sudo cp /etc/opt/microsoft/omsagent/sysconf/omsagent.conf /etc/opt/microsoft/omsagent/<workspace id>/conf/omsagent.conf`.
+There's a known issue with a race condition with the Log Analytics agent for Linux version earlier than 1.1.0-217. After you update to the latest agent, run the following command to get the latest version of the output plug-in: `sudo cp /etc/opt/microsoft/omsagent/sysconf/omsagent.conf /etc/opt/microsoft/omsagent/<workspace id>/conf/omsagent.conf`.
-## Issue: You are trying to reonboard to a new workspace
+## Issue: You're trying to reonboard to a new workspace
-When you try to reonboard an agent to a new workspace, the Log Analytics agent configuration needs to be cleaned up before reonboarding. To clean up old configuration from the agent, run the shell bundle with `--purge`
+When you try to reonboard an agent to a new workspace, the Log Analytics agent configuration needs to be cleaned up before reonboarding. To clean up old configuration from the agent, run the shell bundle with `--purge`:
```
sudo sh ./omsagent-*.universal.x64.sh --purge
Or
sudo sh ./onboard_agent.sh --purge
```
-You can continue reonboard after using the `--purge` option
+You can continue to reonboard after you use the `--purge` option.
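A reonboarding sketch follows; passing `-w` and `-s` to the onboarding script mirrors the omsadmin syntax shown earlier in this article:

```
sudo sh ./onboard_agent.sh -w <Workspace ID> -s <Workspace Key>
```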
-## Log Analytics agent extension in the Azure portal is marked with a failed state: Provisioning failed
+## Issue: Log Analytics agent extension in the Azure portal is marked with a failed state: Provisioning failed
### Probable causes
-* Log Analytics agent has been removed from the operating system
-* Log Analytics agent service is down, disabled, or not configured
+* The Log Analytics agent has been removed from the operating system.
+* The Log Analytics agent service is down, disabled, or not configured.
### Resolution
-Perform the following steps to correct the issue.
-1. Remove extension from Azure portal.
-2. Install the agent following the [instructions](../vm/monitor-virtual-machine.md).
-3. Restart the agent by running the following command: `sudo /opt/microsoft/omsagent/bin/service_control restart`.
-* Wait several minutes and the provisioning state changes to **Provisioning succeeded**.
+1. Remove the extension from the Azure portal.
+1. Install the agent by following the [instructions](../vm/monitor-virtual-machine.md).
+1. Restart the agent by running the following command: <br/> `sudo /opt/microsoft/omsagent/bin/service_control restart`.
+1. Wait several minutes until the provisioning state changes to **Provisioning succeeded**.
## Issue: You need to upgrade the Log Analytics agent on demand
The Log Analytics agent packages on the host are outdated.
### Resolution
-Perform the following steps to correct the issue.
-
-1. Check for the latest release on [page](https://github.com/Microsoft/OMS-Agent-for-Linux/releases/).
-2. Download install script (1.4.2-124 as example version):
+1. Check for the latest release on [this GitHub page](https://github.com/Microsoft/OMS-Agent-for-Linux/releases/).
+1. Download the installation script (1.4.2-124 is an example version):
```
wget https://github.com/Microsoft/OMS-Agent-for-Linux/releases/download/OMSAgent_GA_v1.4.2-124/omsagent-1.4.2-124.universal.x64.sh
```
-3. Upgrade packages by executing `sudo sh ./omsagent-*.universal.x64.sh --upgrade`.
+1. Upgrade packages by executing `sudo sh ./omsagent-*.universal.x64.sh --upgrade`.
-## Issue: Installation is failing saying Python2 cannot support ctypes, even though Python3 is being used
+## Issue: Installation is failing and says Python2 can't support ctypes, even though Python3 is being used
### Probable causes
-There is a known issue where, if the VM's language isn't English, a check will fail when verifying which version of Python is being used. This leads to the agent always assuming Python2 is being used, and failing if there is no Python2.
+For this known issue, if the VM's language isn't English, a check will fail when verifying which version of Python is being used. This issue leads to the agent always assuming Python2 is being used and failing if there's no Python2.
### Resolution
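The published resolution isn't reproduced in this excerpt. Because the failing check is locale-dependent, one hedged workaround consistent with the cause described above is to run the installer under an English locale. This is an assumption based on the probable cause, not an official fix:

```
# Workaround sketch: force an English locale for the installer session only
sudo LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 sh ./omsagent-*.universal.x64.sh --install
```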
azure-monitor Agent Windows Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-windows-troubleshoot.md
If the query returns results, then you need to determine if a particular data ty
||-||
|8000 |HealthService |This event will specify if a workflow related to performance, event, or other data type collected is unable to forward to the service for ingestion to the workspace. | Event ID 2136 from source HealthService is written together with this event and can indicate the agent is unable to communicate with the service, possibly due to misconfiguration of the proxy and authentication settings, network outage, or the network firewall/proxy does not allow TCP traffic from the computer to the service.|
|10102 and 10103 |Health Service Modules |Workflow could not resolve data source. |This can occur if the specified performance counter or instance does not exist on the computer or is incorrectly defined in the workspace data settings. If this is a user-specified [performance counter](data-sources-performance-counters.md#configuring-performance-counters), verify the information specified is following the correct format and exists on the target computers. |
- |26002 |Health Service Modules |Workflow could not resolve data source. |This can occur if the specified Windows event log does not exist on the computer. This error can be safely ignored if the computer is not expected to have this event log registered, otherwise if this is a user-specified [event log](data-sources-windows-events.md#configuring-windows-event-logs), verify the information specified is correct. |
+ |26002 |Health Service Modules |Workflow could not resolve data source. |This can occur if the specified Windows event log does not exist on the computer. This error can be safely ignored if the computer is not expected to have this event log registered, otherwise if this is a user-specified [event log](data-sources-windows-events.md#configure-windows-event-logs), verify the information specified is correct. |
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md
Title: Manage the Azure Monitor agent
-description: Options for managing the Azure Monitor agent (AMA) on Azure virtual machines and Azure Arc-enabled servers.
+description: Options for managing the Azure Monitor agent on Azure virtual machines and Azure Arc-enabled servers.
# Manage the Azure Monitor agent
-This article provides the different options currently available to install, uninstall and update the [Azure Monitor agent](azure-monitor-agent-overview.md). This agent extension can be installed on Azure virtual machines, scale sets and Azure Arc-enabled servers. It also lists the options to create [associations with data collection rules](data-collection-rule-azure-monitor-agent.md) that define which data the agent should collect. Installing, upgrading, or uninstalling the Azure Monitor Agent will not require you to restart your server.
+
+This article provides the different options currently available to install, uninstall, and update the [Azure Monitor agent](azure-monitor-agent-overview.md). This agent extension can be installed on Azure virtual machines, scale sets, and Azure Arc-enabled servers. It also lists the options to create [associations with data collection rules](data-collection-rule-azure-monitor-agent.md) that define which data the agent should collect. Installing, upgrading, or uninstalling the Azure Monitor agent won't require you to restart your server.
## Virtual machine extension details
-The Azure Monitor agent is implemented as an [Azure VM extension](../../virtual-machines/extensions/overview.md) with the details in the following table. It can be installed using any of the methods to install virtual machine extensions including those described in this article.
+
+The Azure Monitor agent is implemented as an [Azure VM extension](../../virtual-machines/extensions/overview.md) with the details in the following table. You can install it by using any of the methods to install virtual machine extensions, including the methods described in this article.
| Property | Windows | Linux |
|:|:|:|
| Publisher | Microsoft.Azure.Monitor | Microsoft.Azure.Monitor |
| Type | AzureMonitorWindowsAgent | AzureMonitorLinuxAgent |
-| TypeHandlerVersion | See [Azure Monitor Agent extension versions](./azure-monitor-agent-extension-versions.md) | [Azure Monitor Agent extension versions](./azure-monitor-agent-extension-versions.md) |
+| TypeHandlerVersion | See [Azure Monitor agent extension versions](./azure-monitor-agent-extension-versions.md) | [Azure Monitor agent extension versions](./azure-monitor-agent-extension-versions.md) |
## Extension versions
-[View Azure Monitor Agent extension versions](./azure-monitor-agent-extension-versions.md).
+
+View [Azure Monitor agent extension versions](./azure-monitor-agent-extension-versions.md).
## Prerequisites

The following prerequisites must be met prior to installing the Azure Monitor agent.

-- **Permissions**: For methods other than Azure portal, you must have the following role assignments to install the agent:
+- **Permissions**: For methods other than using the Azure portal, you must have the following role assignments to install the agent:
- | Built-in Role | Scope(s) | Reason |
+ | Built-in role | Scopes | Reason |
|:|:|:|
- | <ul><li>[Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor)</li><li>[Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator)</li></ul> | <ul><li>Virtual machines, scale sets</li><li>Arc-enabled servers</li></ul> | To deploy the agent |
- | Any role that includes the action *Microsoft.Resources/deployments/** | <ul><li>Subscription and/or</li><li>Resource group and/or </li></ul> | To deploy ARM templates |
-- **Non-Azure**: For installing the agent on physical servers and virtual machines hosted *outside* of Azure (i.e. on-premises) or in other clouds, you must [install the Azure Arc Connected Machine agent](../../azure-arc/servers/agent-overview.md) first (at no added cost)-- **Authentication**: [Managed identity](../../active-directory/managed-identities-azure-resources/overview.md) must be enabled on Azure virtual machines. Both system-assigned and user-assigned managed identities are supported.
- - **User-assigned**: This is recommended for large-scale deployments, configurable via [built-in Azure policies](#using-azure-policy). You can create a user-assigned managed identity once and share it across multiple VMs, and is thus more scalable than a system-assigned managed identity. If you use a user-assigned managed identity, you must pass the managed identity details to Azure Monitor Agent via extension settings:
+ | <ul><li>[Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor)</li><li>[Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator)</li></ul> | <ul><li>Virtual machines, scale sets</li><li>Azure Arc-enabled servers</li></ul> | To deploy the agent |
+ | Any role that includes the action *Microsoft.Resources/deployments/** | <ul><li>Subscription and/or</li><li>Resource group and/or </li></ul> | To deploy Azure Resource Manager templates |
+- **Non-Azure**: To install the agent on physical servers and virtual machines hosted *outside* of Azure (that is, on-premises) or in other clouds, you must [install the Azure Arc Connected Machine agent](../../azure-arc/servers/agent-overview.md) first, at no added cost.
+- **Authentication**: [Managed identity](../../active-directory/managed-identities-azure-resources/overview.md) must be enabled on Azure virtual machines. Both user-assigned and system-assigned managed identities are supported.
+ - **User-assigned**: This managed identity is recommended for large-scale deployments, configurable via [built-in Azure policies](#use-azure-policy). You can create a user-assigned managed identity once and share it across multiple VMs, which means it's more scalable than a system-assigned managed identity. If you use a user-assigned managed identity, you must pass the managed identity details to the Azure Monitor agent via extension settings:
+
```json { "authentication": {
The following prerequisites must be met prior to installing the Azure Monitor ag
} } ```
- We recommend using `mi_res_id` as the `identifier-name`. The sample commands below only show usage with `mi_res_id` for the sake of brevity. For more details on `mi_res_id`, `object_id`, and `client_id`, see the [managed identity documentation](../../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md#get-a-token-using-http).
- - **System-assigned**: This is suited for initial testing or small deployments. When used at scale (for example, for all VMs in a subscription) it results in substantial number of identities created (and deleted) in Azure AD (Azure Active Directory). To avoid this churn of identities, it is recommended to use user-assigned managed identities instead. **For Arc-enabled servers, system-assigned managed identity is enabled automatically** (as soon as you install the Arc agent) as it's the only supported type for Arc-enabled servers.
- - This is not required for Azure Arc-enabled servers. The system identity will be enabled automatically if the agent is installed via [creating and assigning a data collection rule using the Azure portal](data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association).
-- **Networking**: If using network firewalls, the [AzureResourceManager service tag](../../virtual-network/service-tags-overview.md) must be enabled on the virtual network for the virtual machine. Additionally, the virtual machine must have access to the following HTTPS endpoints:
+ We recommend that you use `mi_res_id` as the `identifier-name`. The following sample commands only show usage with `mi_res_id` for the sake of brevity. For more information on `mi_res_id`, `object_id`, and `client_id`, see the [Managed identity documentation](../../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md#get-a-token-using-http).
+ - **System-assigned**: This managed identity is suited for initial testing or small deployments. When used at scale, for example, for all VMs in a subscription, it results in a substantial number of identities created (and deleted) in Azure Active Directory. To avoid this churn of identities, use user-assigned managed identities instead. *For Azure Arc-enabled servers, system-assigned managed identity is enabled automatically* as soon as you install the Azure Arc agent. It's the only supported type for Azure Arc-enabled servers.
+ - **Not required for Azure Arc-enabled servers**: The system identity is enabled automatically if the agent is installed via [creating and assigning a data collection rule by using the Azure portal](data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association).
+- **Networking**: If you use network firewalls, the [AzureResourceManager service tag](../../virtual-network/service-tags-overview.md) must be enabled on the virtual network for the virtual machine. The virtual machine must also have access to the following HTTPS endpoints:
- global.handler.control.monitor.azure.com
- `<virtual-machine-region-name>`.handler.control.monitor.azure.com (example: westus.handler.control.monitor.azure.com)
- `<log-analytics-workspace-id>`.ods.opinsights.azure.com (example: 12345a01-b1cd-1234-e1f2-1234567g8h99.ods.opinsights.azure.com)
- (If using private links on the agent, you must also add the [dce endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint))
-
+ (If you use private links on the agent, you must also add the [dce endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint)).
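To spot-check that a machine can reach these endpoints through the firewall, you can attempt an HTTPS connection to each one. Only a successful TLS connection matters here; any HTTP status the service returns is irrelevant. The region and workspace ID below are placeholders:

```
curl -sv --max-time 10 https://global.handler.control.monitor.azure.com
curl -sv --max-time 10 https://<virtual-machine-region-name>.handler.control.monitor.azure.com
curl -sv --max-time 10 https://<log-analytics-workspace-id>.ods.opinsights.azure.com
```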
> [!NOTE]
-> This article only pertains to agent installation or management. After you install the agent, you must review the next article to [configure data collection rules and associate them with the machines](./data-collection-rule-azure-monitor-agent.md) with agents installed.
-> **The Azure Monitor agents cannot function without being associated with data collection rules.**
+> This article only pertains to agent installation or management. After you install the agent, you must review the next article to [configure data collection rules and associate them with the machines](./data-collection-rule-azure-monitor-agent.md) with agents installed. *The Azure Monitor agents can't function without being associated with data collection rules.*
+## Use the Azure portal
-## Using the Azure portal
+Follow these instructions to use the Azure portal.
### Install
-To install the Azure Monitor agent using the Azure portal, follow the process to [create a data collection rule](data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association) in the Azure portal. This not only creates the rule, but it also associates it to the selected resources and installs the Azure Monitor agent on them if not already installed.
+
+To install the Azure Monitor agent by using the Azure portal, follow the process to [create a data collection rule](data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association) in the Azure portal. This process creates the rule, associates it to the selected resources, and installs the Azure Monitor agent on them if it's not already installed.
### Uninstall
-To uninstall the Azure Monitor agent using the Azure portal, navigate to your virtual machine, scale set or Arc-enabled server, select the **Extensions** tab and click on **AzureMonitorWindowsAgent** or **AzureMonitorLinuxAgent**. In the dialog that pops up, click **Uninstall**.
+
+To uninstall the Azure Monitor agent by using the Azure portal, go to your virtual machine, scale set, or Azure Arc-enabled server. Select the **Extensions** tab and select **AzureMonitorWindowsAgent** or **AzureMonitorLinuxAgent**. In the dialog that opens, select **Uninstall**.
### Update
-To perform a **one time update** of the agent, you must first uninstall the existing agent version and then install the new version as described above.
-The **recommendation** is to enable automatic update of the agent by enabling the [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) feature. Navigate to your virtual machine or scale set, select the **Extensions** tab and click on **AzureMonitorWindowsAgent** or **AzureMonitorLinuxAgent**. In the dialog that pops up, click **Enable automatic upgrade**.
-## Using Resource Manager templates
+To perform a one-time update of the agent, you must first uninstall the existing agent version. Then install the new version as described.
+
+We recommend that you enable automatic update of the agent by enabling the [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) feature. Go to your virtual machine or scale set, select the **Extensions** tab and select **AzureMonitorWindowsAgent** or **AzureMonitorLinuxAgent**. In the dialog that opens, select **Enable automatic upgrade**.
+
+## Use Resource Manager templates
+
+Follow these instructions to use Azure Resource Manager templates.
### Install

You can use Resource Manager templates to install the Azure Monitor agent on Azure virtual machines and on Azure Arc-enabled servers and to create an association with data collection rules. You must create any data collection rule prior to creating the association.
-Get sample templates for installing the agent and creating the association from the following:
+Get sample templates for installing the agent and creating the association from the following resources:
-- [Template to install Azure Monitor agent (Azure and Azure Arc)](../agents/resource-manager-agent.md#azure-monitor-agent)
+- [Template to install Azure Monitor agent (Azure and Azure Arc)](../agents/resource-manager-agent.md#azure-monitor-agent)
- [Template to create association with data collection rule](./resource-manager-data-collection-rules.md)
-Install the templates using [any deployment method for Resource Manager templates](../../azure-resource-manager/templates/deploy-powershell.md) such as the following commands.
+Install the templates by using [any deployment method for Resource Manager templates](../../azure-resource-manager/templates/deploy-powershell.md), such as the following commands.
# [PowerShell](#tab/ARMAgentPowerShell)

```powershell
New-AzResourceGroupDeployment -ResourceGroupName "<resource-group-name>" -TemplateFile "<template-filename.json>" -TemplateParameterFile "<parameter-filename.json>"
```

# [CLI](#tab/ARMAgentCLI)

```azurecli
az deployment group create --resource-group "<resource-group-name>" --template-file "<path-to-template>" --parameters "@<parameter-filename.json>"
```
-## Using PowerShell
-You can install the Azure Monitor agent on Azure virtual machines and on Azure Arc-enabled servers using the PowerShell command for adding a virtual machine extension.
+## Use PowerShell
+
+You can install the Azure Monitor agent on Azure virtual machines and on Azure Arc-enabled servers by using the PowerShell command for adding a virtual machine extension.
### Install on Azure virtual machines

Use the following PowerShell commands to install the Azure Monitor agent on Azure virtual machines. Choose the appropriate command based on your chosen authentication method.

#### User-assigned managed identity

# [Windows](#tab/PowerShellWindows)

```powershell
Set-AzVMExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion <version-number> -SettingString '{"authentication":{"managedIdentity":{"identifier-name":"mi_res_id","identifier-value":"/subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<my-user-assigned-identity>"}}}'
```

# [Linux](#tab/PowerShellLinux)

```powershell
Set-AzVMExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion <version-number> -SettingString '{"authentication":{"managedIdentity":{"identifier-name":"mi_res_id","identifier-value":"/subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<my-user-assigned-identity>"}}}'
```

#### System-assigned managed identity

# [Windows](#tab/PowerShellWindows)

```powershell
Set-AzVMExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion <version-number>
```

# [Linux](#tab/PowerShellLinux)

```powershell
Set-AzVMExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion <version-number>
```

### Uninstall on Azure virtual machines
-Use the following PowerShell commands to install the Azure Monitor agent on Azure virtual machines.
+
+Use the following PowerShell commands to uninstall the Azure Monitor agent on Azure virtual machines.
# [Windows](#tab/PowerShellWindows)

```powershell
Remove-AzVMExtension -Name AzureMonitorWindowsAgent -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name>
```

# [Linux](#tab/PowerShellLinux)

```powershell
Remove-AzVMExtension -Name AzureMonitorLinuxAgent -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name>
```

### Update on Azure virtual machines
-To perform a **one time update** of the agent, you must first uninstall the existing agent version and then install the new version as described above.
-The **recommendation** is to enable automatic update of the agent by enabling the [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) feature, using the following PowerShell commands.
+
+To perform a one-time update of the agent, you must first uninstall the existing agent version. Then install the new version as described.
+
+We recommend that you enable automatic update of the agent by enabling the [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) feature by using the following PowerShell commands.
# [Windows](#tab/PowerShellWindows)

```powershell
Set-AzVMExtension -ExtensionName AzureMonitorWindowsAgent -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Publisher Microsoft.Azure.Monitor -ExtensionType AzureMonitorWindowsAgent -TypeHandlerVersion <version-number> -Location <location> -EnableAutomaticUpgrade $true
```

# [Linux](#tab/PowerShellLinux)

```powershell
Set-AzVMExtension -ExtensionName AzureMonitorLinuxAgent -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Publisher Microsoft.Azure.Monitor -ExtensionType AzureMonitorLinuxAgent -TypeHandlerVersion <version-number> -Location <location> -EnableAutomaticUpgrade $true
```
Set-AzVMExtension -ExtensionName AzureMonitorLinuxAgent -ResourceGroupName <reso
### Install on Azure Arc-enabled servers

Use the following PowerShell commands to install the Azure Monitor agent on Azure Arc-enabled servers.

# [Windows](#tab/PowerShellWindowsArc)

```powershell
New-AzConnectedMachineExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location>
```

# [Linux](#tab/PowerShellLinuxArc)

```powershell
New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location>
```

### Uninstall on Azure Arc-enabled servers
-Use the following PowerShell commands to install the Azure Monitor agent on Azure Arc-enabled servers.
+
+Use the following PowerShell commands to uninstall the Azure Monitor agent on Azure Arc-enabled servers.
# [Windows](#tab/PowerShellWindowsArc)

```powershell
Remove-AzConnectedMachineExtension -MachineName <arc-server-name> -ResourceGroupName <resource-group-name> -Name AzureMonitorWindowsAgent
```

# [Linux](#tab/PowerShellLinuxArc)

```powershell
Remove-AzConnectedMachineExtension -MachineName <arc-server-name> -ResourceGroupName <resource-group-name> -Name AzureMonitorLinuxAgent
```

### Upgrade on Azure Arc-enabled servers
-To perform a **one time** upgrade of the agent, use the following PowerShell commands:
+
+To perform a one-time upgrade of the agent, use the following PowerShell commands.
# [Windows](#tab/PowerShellWindowsArc)

```powershell
$target = @{"Microsoft.Azure.Monitor.AzureMonitorWindowsAgent" = @{"targetVersion"=<target-version-number>}}
Update-AzConnectedExtension -ResourceGroupName $env.ResourceGroupName -MachineName <arc-server-name> -ExtensionTarget $target
```

# [Linux](#tab/PowerShellLinuxArc)

```powershell
$target = @{"Microsoft.Azure.Monitor.AzureMonitorLinuxAgent" = @{"targetVersion"=<target-version-number>}}
Update-AzConnectedExtension -ResourceGroupName $env.ResourceGroupName -MachineName <arc-server-name> -ExtensionTarget $target
```
-The **recommendation** is to enable automatic update of the agent by enabling the [Automatic Extension Upgrade (preview)](../../azure-arc/servers/manage-automatic-vm-extension-upgrade.md#enable-automatic-extension-upgrade) feature, using the following PowerShell commands.
+We recommend that you enable automatic update of the agent by enabling the [Automatic Extension Upgrade (preview)](../../azure-arc/servers/manage-automatic-vm-extension-upgrade.md#enable-automatic-extension-upgrade) feature by using the following PowerShell commands.
# [Windows](#tab/PowerShellWindowsArc)

```powershell
Update-AzConnectedMachineExtension -ResourceGroup <resource-group-name> -MachineName <arc-server-name> -Name AzureMonitorWindowsAgent -EnableAutomaticUpgrade
```

# [Linux](#tab/PowerShellLinuxArc)

```powershell
Update-AzConnectedMachineExtension -ResourceGroup <resource-group-name> -MachineName <arc-server-name> -Name AzureMonitorLinuxAgent -EnableAutomaticUpgrade
```
+## Use the Azure CLI
-## Using Azure CLI
-You can install the Azure Monitor agent on Azure virtual machines and on Azure Arc-enabled servers using the Azure CLI command for adding a virtual machine extension.
+You can install the Azure Monitor agent on Azure virtual machines and on Azure Arc-enabled servers by using the Azure CLI command for adding a virtual machine extension.
### Install on Azure virtual machines

Use the following CLI commands to install the Azure Monitor agent on Azure virtual machines. Choose the appropriate command based on your chosen authentication method.

#### User-assigned managed identity

# [Windows](#tab/CLIWindows)

```azurecli
az vm extension set --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id> --settings '{"authentication":{"managedIdentity":{"identifier-name":"mi_res_id","identifier-value":"/subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<my-user-assigned-identity>"}}}'
```

# [Linux](#tab/CLILinux)

```azurecli
az vm extension set --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id> --settings '{"authentication":{"managedIdentity":{"identifier-name":"mi_res_id","identifier-value":"/subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<my-user-assigned-identity>"}}}'
```

#### System-assigned managed identity

# [Windows](#tab/CLIWindows)

```azurecli
az vm extension set --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id>
```

# [Linux](#tab/CLILinux)

```azurecli
az vm extension set --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id>
```

### Uninstall on Azure virtual machines
-Use the following CLI commands to install the Azure Monitor agent on Azure virtual machines.
+
+Use the following CLI commands to uninstall the Azure Monitor agent on Azure virtual machines.
# [Windows](#tab/CLIWindows)

```azurecli
az vm extension delete --resource-group <resource-group-name> --vm-name <virtual-machine-name> --name AzureMonitorWindowsAgent
```

# [Linux](#tab/CLILinux)

```azurecli
az vm extension delete --resource-group <resource-group-name> --vm-name <virtual-machine-name> --name AzureMonitorLinuxAgent
```

### Update on Azure virtual machines
-To perform a **one time update** of the agent, you must first uninstall the existing agent version and then install the new version as described above.
-The **recommendation** is to enable automatic update of the agent by enabling the [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) feature, using the following CLI commands.
+
+To perform a one-time update of the agent, you must first uninstall the existing agent version. Then install the new version as described.
+
+We recommend that you enable automatic update of the agent by enabling the [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) feature by using the following CLI commands.
# [Windows](#tab/CLIWindows)

```azurecli
az vm extension set --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --vm-name <virtual-machine-name> --resource-group <resource-group-name> --enable-auto-upgrade true
```

# [Linux](#tab/CLILinux)

```azurecli
az vm extension set --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --vm-name <virtual-machine-name> --resource-group <resource-group-name> --enable-auto-upgrade true
```

### Install on Azure Arc-enabled servers
-Use the following CLI commands to install the Azure Monitor agent onAzure Azure Arc-enabled servers.
+
+Use the following CLI commands to install the Azure Monitor agent on Azure Arc-enabled servers.
# [Windows](#tab/CLIWindowsArc)

```azurecli
az connectedmachine extension create --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --type AzureMonitorWindowsAgent --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location>
```

# [Linux](#tab/CLILinuxArc)

```azurecli
az connectedmachine extension create --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --type AzureMonitorLinuxAgent --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location>
```

### Uninstall on Azure Arc-enabled servers
-Use the following CLI commands to install the Azure Monitor agent onAzure Azure Arc-enabled servers.
+
+Use the following CLI commands to uninstall the Azure Monitor agent on Azure Arc-enabled servers.
# [Windows](#tab/CLIWindowsArc)

```azurecli
az connectedmachine extension delete --name AzureMonitorWindowsAgent --machine-name <arc-server-name> --resource-group <resource-group-name>
```

# [Linux](#tab/CLILinuxArc)

```azurecli
az connectedmachine extension delete --name AzureMonitorLinuxAgent --machine-name <arc-server-name> --resource-group <resource-group-name>
```

### Upgrade on Azure Arc-enabled servers
-To perform a **one time upgrade** of the agent, use the following CLI commands:
+
+To perform a one-time upgrade of the agent, use the following CLI commands.
# [Windows](#tab/CLIWindowsArc)

```azurecli
az connectedmachine upgrade-extension --extension-targets "{\"Microsoft.Azure.Monitor.AzureMonitorWindowsAgent\":{\"targetVersion\":\"<target-version-number>\"}}" --machine-name <arc-server-name> --resource-group <resource-group-name>
```

# [Linux](#tab/CLILinuxArc)

```azurecli
az connectedmachine upgrade-extension --extension-targets "{\"Microsoft.Azure.Monitor.AzureMonitorLinuxAgent\":{\"targetVersion\":\"<target-version-number>\"}}" --machine-name <arc-server-name> --resource-group <resource-group-name>
```
-The **recommendation** is to enable automatic update of the agent by enabling the [Automatic Extension Upgrade (preview)](../../azure-arc/servers/manage-automatic-vm-extension-upgrade.md#enable-automatic-extension-upgrade) feature, using the following PowerShell commands.
+We recommend that you enable automatic update of the agent by enabling the [Automatic Extension Upgrade (preview)](../../azure-arc/servers/manage-automatic-vm-extension-upgrade.md#enable-automatic-extension-upgrade) feature by using the following Azure CLI commands.
# [Windows](#tab/CLIWindowsArc)

```azurecli
az connectedmachine extension update --name AzureMonitorWindowsAgent --machine-name <arc-server-name> --resource-group <resource-group-name> --enable-auto-upgrade true
```

# [Linux](#tab/CLILinuxArc)

```azurecli
az connectedmachine extension update --name AzureMonitorLinuxAgent --machine-name <arc-server-name> --resource-group <resource-group-name> --enable-auto-upgrade true
```
+## Use Azure Policy
-## Using Azure Policy
-Use the following policies and policy initiatives to **automatically install the agent and associate it with a data collection rule**, every time you create a virtual machine, scale set, or Arc-enabled server.
+Use the following policies and policy initiatives to automatically install the agent and associate it with a data collection rule every time you create a virtual machine, scale set, or Azure Arc-enabled server.
> [!NOTE]
-> As per Microsoft Identity best practices, policies for installing Azure Monitor agent on **virtual machines and scale-sets** rely on **user-assigned managed identity**. This is the more scalable and resilient managed identity options for these resources.
-> For **Arc-enabled servers**, policies rely on only **system-assigned managed identity** as the only supported option today.
+> As per Microsoft identity best practices, policies for installing the Azure Monitor agent on virtual machines and scale sets rely on user-assigned managed identity, which is the more scalable and resilient managed identity option for these resources.
+> For Azure Arc-enabled servers, policies rely on system-assigned managed identity as the only supported option today.
### Built-in policy initiatives
-Before proceeding, review [prerequisites for agent installation](azure-monitor-agent-manage.md#prerequisites).
-Policy initiatives for Windows and Linux **virtual machines, scale-sets** consist of individual policies that:
+Before you proceed, review [prerequisites for agent installation](azure-monitor-agent-manage.md#prerequisites).
+
+Policy initiatives for Windows and Linux virtual machines and scale sets consist of individual policies that:
-- (Optional) Create and assign built-in user-assigned managed identity, per subscription, per region. [Learn more](../../active-directory/managed-identities-azure-resources/how-to-assign-managed-identity-via-azure-policy.md#policy-definition-and-details).
- - `Bring Your Own User-Assigned Identity`: If set of `true`, it creates the built-in user-assigned managed identity in the predefined resource group, and assigns it to all machines that the policy is applied to. If set to `false`, you can instead use existing user-assigned identity that **you must assign** to the machines beforehand.
-- Install the Azure Monitor agent extension on the machine, and configure it to use user-assigned identity as specified by the parameters below
- - `Bring Your Own User-Assigned Managed Identity`: If set to `false`, it configures the agent to use the built-in user-assigned managed identity created by the policy above. If set to `true`, it configures the agent to use an existing user-assigned identity that **you must assign** to the machine(s) in scope beforehand.
- - `User-Assigned Managed Identity Name`: If using your own identity (selected `true`), specify the name of the identity that's assigned to the machine(s)
- - `User-Assigned Managed Identity Resource Group`: If using your own identity (selected `true`), specify the resource group where the identity exists
- - `Additional Virtual Machine Images`: Pass additional VM image names that you want to apply the policy to, if not already included
+- (Optional) Create and assign built-in user-assigned managed identity, per subscription, per region. [Learn more](../../active-directory/managed-identities-azure-resources/how-to-assign-managed-identity-via-azure-policy.md#policy-definition-and-details).
+ - `Bring Your Own User-Assigned Identity`: If set to `true`, it creates the built-in user-assigned managed identity in the predefined resource group and assigns it to all machines that the policy is applied to. If set to `false`, you can instead use an existing user-assigned identity that *you must assign* to the machines beforehand.
+- Install the Azure Monitor agent extension on the machine, and configure it to use user-assigned identity as specified by the following parameters.
+ - `Bring Your Own User-Assigned Managed Identity`: If set to `false`, it configures the agent to use the built-in user-assigned managed identity created by the preceding policy. If set to `true`, it configures the agent to use an existing user-assigned identity that *you must assign* to the machines in scope beforehand.
+ - `User-Assigned Managed Identity Name`: If you use your own identity (that is, you selected `true`), specify the name of the identity that's assigned to the machines.
+ - `User-Assigned Managed Identity Resource Group`: If you use your own identity (that is, you selected `true`), specify the resource group where the identity exists.
+ - `Additional Virtual Machine Images`: Pass additional VM image names that you want to apply the policy to, if not already included.
- Create and deploy the association to link the machine to specified data collection rule.
- - `Data Collection Rule Resource Id`: The ARM resourceId of the rule you want to associate via this policy, to all machines the policy is applied to.
+ - `Data Collection Rule Resource Id`: The Azure Resource Manager resourceId of the rule you want to associate via this policy to all machines the policy is applied to.
+
+ ![Partial screenshot from the Azure Policy Definitions page that shows two built-in policy initiatives for configuring the Azure Monitor agent.](media/azure-monitor-agent-install/built-in-ama-dcr-initiatives.png)
-![Partial screenshot from the Azure Policy Definitions page showing two built-in policy initiatives for configuring the Azure Monitor agent.](media/azure-monitor-agent-install/built-in-ama-dcr-initiatives.png)
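As a sketch of assigning one of these initiatives from the command line, the following Azure CLI call uses placeholder names throughout, and the identity flags (`--mi-system-assigned`, `--location`) vary by CLI version, so treat it as an outline rather than a definitive command:

```azurecli
az policy assignment create \
  --name "deploy-azure-monitor-agent" \
  --scope "/subscriptions/<subscription-id>" \
  --policy-set-definition "<initiative-name-or-id>" \
  --mi-system-assigned \
  --location <location>
```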
+#### Known issues
-#### Known issues:
-- Managed Identity default behavior: [Learn more](../../active-directory/managed-identities-azure-resources/managed-identities-faq.md#what-identity-will-imds-default-to-if-dont-specify-the-identity-in-the-request)-- Possible race condition with using built-in user-assigned identity creation policy above. [Learn more](../../active-directory/managed-identities-azure-resources/how-to-assign-managed-identity-via-azure-policy.md#known-issues)-- Assigning policy to resource groups: If the assignment scope of the policy is a resource group and not a subscription, the identity used by policy assignment (different from the user-assigned identity used by agent) must be manually granted [these roles](../../active-directory/managed-identities-azure-resources/how-to-assign-managed-identity-via-azure-policy.md#required-authorization) prior to assignment/remediation. Failing to do this will result in **deployment failures**.-- Other [Managed Identity limitations](../../active-directory/managed-identities-azure-resources/managed-identities-faq.md#limitations)
+- Managed Identity default behavior. [Learn more](../../active-directory/managed-identities-azure-resources/managed-identities-faq.md#what-identity-will-imds-default-to-if-dont-specify-the-identity-in-the-request).
+- Possible race condition with using built-in user-assigned identity creation policy. [Learn more](../../active-directory/managed-identities-azure-resources/how-to-assign-managed-identity-via-azure-policy.md#known-issues).
+- Assigning policy to resource groups. If the assignment scope of the policy is a resource group and not a subscription, the identity used by policy assignment (different from the user-assigned identity used by agent) must be manually granted [these roles](../../active-directory/managed-identities-azure-resources/how-to-assign-managed-identity-via-azure-policy.md#required-authorization) prior to assignment/remediation. Failing to do this step will result in *deployment failures*.
+- Other [Managed Identity limitations](../../active-directory/managed-identities-azure-resources/managed-identities-faq.md#limitations).
-### Built-in policies
-You can choose to use the individual policies from the policy initiative above to perform a single action at scale. For example, if you *only* want to automatically install the agent, use the second agent installation policy from the initiative as shown below.
+### Built-in policies
-![Partial screenshot from the Azure Policy Definitions page showing policies contained within the initiative for configuring the Azure Monitor agent.](media/azure-monitor-agent-install/built-in-ama-dcr-policy.png)
+You can choose to use the individual policies from the preceding policy initiative to perform a single action at scale. For example, if you *only* want to automatically install the agent, use the second agent installation policy from the initiative, as shown.
+
+![Partial screenshot from the Azure Policy Definitions page that shows policies contained within the initiative for configuring the Azure Monitor agent.](media/azure-monitor-agent-install/built-in-ama-dcr-policy.png)
### Remediation
-The initiatives or policies will apply to each virtual machine as it's created. A [remediation task](../../governance/policy/how-to/remediate-resources.md) deploys the policy definitions in the initiative to *existing resources*, so you can configure the Azure Monitor agent for any resources that were already created.
-When you create the assignment by using the Azure portal, you have the option of creating a remediation task at the same time. See [Remediate non-compliant resources with Azure Policy](../../governance/policy/how-to/remediate-resources.md) for details on the remediation.
+The initiatives or policies will apply to each virtual machine as it's created. A [remediation task](../../governance/policy/how-to/remediate-resources.md) deploys the policy definitions in the initiative to existing resources, so you can configure the Azure Monitor agent for any resources that were already created.
+
+When you create the assignment by using the Azure portal, you have the option of creating a remediation task at the same time. For information on the remediation, see [Remediate non-compliant resources with Azure Policy](../../governance/policy/how-to/remediate-resources.md).
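If you'd rather create the remediation task from the command line than in the portal, a minimal sketch looks like the following. The assignment name is a placeholder (matching the earlier assignment sketch), and for an initiative you typically also pass `--definition-reference-id` to target a single policy within it:

```azurecli
az policy remediation create \
  --name "remediate-azure-monitor-agent" \
  --policy-assignment "deploy-azure-monitor-agent" \
  --definition-reference-id "<policy-reference-id-within-initiative>"
```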
![Screenshot that shows initiative remediation for the Azure Monitor agent.](media/azure-monitor-agent-install/built-in-ama-dcr-remediation.png)

## Next steps

-- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
+[Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
azure-monitor Data Sources Windows Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-windows-events.md
Title: Collect Windows event log data sources with Log Analytics agent in Azure Monitor
-description: Describes how to configure the collection of Windows Event logs by Azure Monitor and details of the records they create.
+description: The article describes how to configure the collection of Windows event logs by Azure Monitor and details of the records they create.
Last updated 04/06/2022
# Collect Windows event log data sources with Log Analytics agent
-Windows Event logs are one of the most common [data sources](../agents/agent-data-sources.md) for Log Analytics agents on Windows virtual machines since many applications write to the Windows event log. You can collect events from standard logs, such as System and Application, and any custom logs created by applications you need to monitor.
-![Diagram that shows the Log Analytics agent sending Windows events to the Event table in Azure Monitor.](media/data-sources-windows-events/overview.png)
+Windows event logs are one of the most common [data sources](../agents/agent-data-sources.md) for Log Analytics agents on Windows virtual machines because many applications write to the Windows event log. You can collect events from standard logs, such as System and Application, and any custom logs created by applications you need to monitor.
+
+![Diagram that shows the Log Analytics agent sending Windows events to the Event table in Azure Monitor.](media/data-sources-windows-events/overview.png)
[!INCLUDE [Log Analytics agent deprecation](../../../includes/log-analytics-agent-deprecation.md)]
-## Configuring Windows Event logs
-Configure Windows Event logs from the [Agents configuration menu](../agents/agent-data-sources.md#configuring-data-sources) for the Log Analytics workspace.
+## Configure Windows event logs
-Azure Monitor only collects events from the Windows event logs that are specified in the settings. You can add an event log by typing in the name of the log and clicking **+**. For each log, only the events with the selected severities are collected. Check the severities for the particular log that you want to collect. You can't provide any additional criteria to filter events.
+Configure Windows event logs from the [Agents configuration menu](../agents/agent-data-sources.md#configuring-data-sources) for the Log Analytics workspace.
-As you type the name of an event log, Azure Monitor provides suggestions of common event log names. If the log you want to add doesn't appear in the list, you can still add it by typing in the full name of the log. You can find the full name of the log by using event viewer. In event viewer, open the *Properties* page for the log and copy the string from the *Full Name* field.
+Azure Monitor only collects events from Windows event logs that are specified in the settings. You can add an event log by entering the name of the log and selecting **+**. For each log, only the events with the selected severities are collected. Check the severities for the particular log that you want to collect. You can't provide any other criteria to filter events.
-[![Screenshot showing the Windows event logs tab on the Agents configuration screen.](media/data-sources-windows-events/configure.png)](media/data-sources-windows-events/configure.png#lightbox)
+As you enter the name of an event log, Azure Monitor provides suggestions of common event log names. If the log you want to add doesn't appear in the list, you can still add it by entering the full name of the log. You can find the full name of the log by using event viewer. In event viewer, open the **Properties** page for the log and copy the string from the **Full Name** field.
-> [!IMPORTANT]
-> You can't configure collection of security events from the workspace using Log Analytics agent. You must use [Microsoft Defender for Cloud](../../security-center/security-center-enable-data-collection.md) or [Microsoft Sentinel](../../sentinel/connect-windows-security-events.md) to collect security events. [Azure Monitor agent](azure-monitor-agent-overview.md) can also be used to collect security events.
+[![Screenshot that shows the Windows event logs tab on the Agents configuration screen.](media/data-sources-windows-events/configure.png)](media/data-sources-windows-events/configure.png#lightbox)
+> [!IMPORTANT]
+> You can't configure collection of security events from the workspace by using the Log Analytics agent. You must use [Microsoft Defender for Cloud](../../security-center/security-center-enable-data-collection.md) or [Microsoft Sentinel](../../sentinel/connect-windows-security-events.md) to collect security events. The [Azure Monitor agent](azure-monitor-agent-overview.md) can also be used to collect security events.
-> [!NOTE]
-> Critical events from the Windows event log will have a severity of "Error" in Azure Monitor Logs.
+Critical events from the Windows event log will have a severity of "Error" in Azure Monitor Logs.
## Data collection
-Azure Monitor collects each event that matches a selected severity from a monitored event log as the event is created. The agent records its place in each event log that it collects from. If the agent goes offline for a while, it collects events from where it last left off, even if those events were created while the agent was offline. There's a potential for these events to not be collected if the event log wraps with uncollected events being overwritten while the agent is offline.
+
+Azure Monitor collects each event that matches a selected severity from a monitored event log as the event is created. The agent records its place in each event log that it collects from. If the agent goes offline for a while, it collects events from where it last left off, even if those events were created while the agent was offline. There's a potential for these events to not be collected if the event log wraps with uncollected events being overwritten while the agent is offline.
>[!NOTE]
->Azure Monitor does not collect audit events created by SQL Server from source *MSSQLSERVER* with event ID 18453 that contains keywords - *Classic* or *Audit Success* and keyword *0xa0000000000000*.
+>Azure Monitor doesn't collect audit events created by SQL Server from source *MSSQLSERVER* with event ID 18453 that contains keywords *Classic* or *Audit Success* and keyword *0xa0000000000000*.
> ## Windows event records properties
-Windows event records have a type of **Event** and have the properties in the following table:
+
+Windows event records have a type of **Event** and have the properties in the following table:
| Property | Description | |: |: |
Windows event records have a type of **Event** and have the properties in the fo
| EventLevelName |Severity of the event in text form. | | EventLog |Name of the event log that the event was collected from. | | ParameterXml |Event parameter values in XML format. |
-| ManagementGroupName |Name of the management group for System Center Operations Manager agents. For other agents, this value is `AOI-<workspace ID>` |
-| RenderedDescription |Event description with parameter values |
+| ManagementGroupName |Name of the management group for System Center Operations Manager agents. For other agents, this value is `AOI-<workspace ID>`. |
+| RenderedDescription |Event description with parameter values. |
| Source |Source of the event. |
-| SourceSystem |Type of agent the event was collected from. <br> OpsManager ΓÇô Windows agent, either direct connect or Operations Manager managed <br> Linux ΓÇô All Linux agents <br> AzureStorage ΓÇô Azure Diagnostics |
+| SourceSystem |Type of agent the event was collected from. <br> OpsManager ΓÇô Windows agent, either direct connect or Operations Manager managed. <br> Linux ΓÇô All Linux agents. <br> AzureStorage ΓÇô Azure Diagnostics. |
| TimeGenerated |Date and time the event was created in Windows. | | UserName |User name of the account that logged the event. |
-## Log queries with Windows Events
-The following table provides different examples of log queries that retrieve Windows Event records.
+## Log queries with Windows events
+
+The following table provides different examples of log queries that retrieve Windows event records.
| Query | Description | |:|:|
The following table provides different examples of log queries that retrieve Win
| Event &#124; summarize count() by Source |Count of Windows events by source. |
| Event &#124; where EventLevelName == "error" &#124; summarize count() by Source |Count of Windows error events by source. |

## Next steps

* Configure Log Analytics to collect other [data sources](../agents/agent-data-sources.md) for analysis.
-* Learn about [log queries](../logs/log-query-overview.md) to analyze the data collected from data sources and solutions.
+* Learn about [log queries](../logs/log-query-overview.md) to analyze the data collected from data sources and solutions.
* Configure [collection of performance counters](data-sources-performance-counters.md) from your Windows agents.
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
Last updated 10/12/2021

# Application Insights for ASP.NET Core applications

This article describes how to enable Application Insights for an [ASP.NET Core](/aspnet/core) application.
azure-monitor Asp Net Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-exceptions.md
# Diagnose exceptions in web apps with Application Insights
-Exceptions in web applications can be reported with [Application Insights](./app-insights-overview.md). You can correlate failed requests with exceptions and other events on both the client and server, so that you can quickly diagnose the causes. In this article, you'll learn how to set up exception reporting, report exceptions explicitly, diagnose failures, and more.
+Exceptions in web applications can be reported with [Application Insights](./app-insights-overview.md). You can correlate failed requests with exceptions and other events on both the client and server so that you can quickly diagnose the causes. In this article, you'll learn how to set up exception reporting, report exceptions explicitly, diagnose failures, and more.
## Set up exception reporting
-You can set up Application Insights to report exceptions that occur in either the server, or the client. Depending on the platform you're application is dependent on, you'll need the appropriate extension or SDK.
+You can set up Application Insights to report exceptions that occur in either the server or the client. Depending on the platform your application is dependent on, you'll need the appropriate extension or SDK.
### Server side
-To have exceptions reported from your server side application, consider the following scenarios:
+To have exceptions reported from your server-side application, consider the following scenarios:
- * **Azure web apps**: Add the [Application Insights Extension](./azure-web-apps.md)
- * **Azure VM and Azure virtual machine scale set IIS-hosted apps**: Add the [Application Monitoring Extension](./azure-vm-vmss-apps.md)
- * Install [Application Insights SDK](./asp-net.md) in your app code, or
- * **IIS web servers**: Run [Application Insights Agent](./status-monitor-v2-overview.md), or
- * **Java web apps**: Enable the [Java agent](./java-in-process-agent.md)
+ * Add the [Application Insights Extension](./azure-web-apps.md) for Azure web apps.
+ * Add the [Application Monitoring Extension](./azure-vm-vmss-apps.md) for Azure Virtual Machines and Azure Virtual Machine Scale Sets IIS-hosted apps.
+ * Install [Application Insights SDK](./asp-net.md) in your app code, run [Application Insights Agent](./status-monitor-v2-overview.md) for IIS web servers, or enable the [Java agent](./java-in-process-agent.md) for Java web apps.
### Client side
-The JavaScript SDK provides the ability for client side reporting of exceptions that occur in web browsers. To set up exception reporting on the client, see [Application Insights for web pages](./javascript.md).
+The JavaScript SDK provides the ability for client-side reporting of exceptions that occur in web browsers. To set up exception reporting on the client, see [Application Insights for webpages](./javascript.md).
### Application frameworks
-With some application frameworks there is a bit more configuration required, consider the following technologies:
+With some application frameworks, more configuration is required. Consider the following technologies:
* [Web forms](#web-forms) * [MVC](#mvc)
With some application frameworks there is a bit more configuration required, con
* [WCF](#wcf) > [!IMPORTANT]
-> This article is specifically focused on .NET Framework apps from a code example perspective. Some of the methods that work for .NET Framework are obsolete in the .NET Core SDK. For more information, see [.NET Core SDK documentation](./asp-net-core.md) when building apps with .NET Core.
+> This article is specifically focused on .NET Framework apps from a code example perspective. Some of the methods that work for .NET Framework are obsolete in the .NET Core SDK. For more information, see [.NET Core SDK documentation](./asp-net-core.md) when you build apps with .NET Core.
## Diagnose exceptions using Visual Studio
-Open the app solution in Visual Studio. Run the app, either on your server or on your development machine by using <kbd>F5</kbd>. Recreate the exception.
+Open the app solution in Visual Studio. Run the app, either on your server or on your development machine by using <kbd>F5</kbd>. Re-create the exception.
-Open the **Application Insights Search** telemetry window in Visual Studio. While debugging, select the **Application Insights** dropdown.
+Open the **Application Insights Search** telemetry window in Visual Studio. While debugging, select the **Application Insights** dropdown box.
-![Right-click the project and choose Application Insights, Open.](./media/asp-net-exceptions/34.png)
+![Screenshot that shows right-clicking the project and choosing Application Insights.](./media/asp-net-exceptions/34.png)
Select an exception report to show its stack trace. To open the relevant code file, select a line reference in the stack trace. If CodeLens is enabled, you'll see data about the exceptions:
-![CodeLens notification of exceptions.](./media/asp-net-exceptions/35.png)
+![Screenshot that shows CodeLens notification of exceptions.](./media/asp-net-exceptions/35.png)
## Diagnose failures using the Azure portal
-Application Insights comes with a curated Application Performance Management (APM) experience to help you diagnose failures in your monitored applications. To start, select on the **Failures** option in the Application Insights resource menu located in the **Investigate** section.
-You will see the failure rate trends for your requests, how many of them are failing, and how many users are impacted. As an **Overall** view, you'll see some of the most useful distributions specific to the selected failing operation, including top three response codes, top three exception types, and top three failing dependency types.
+Application Insights comes with a curated Application Performance Management experience to help you diagnose failures in your monitored applications. To start, in the Application Insights resource menu on the left, under **Investigate**, select the **Failures** option.
-![Failures triage view (operations tab)](./media/asp-net-exceptions/failures0719.png)
+You'll see the failure rate trends for your requests, how many of them are failing, and how many users are affected. The **Overall** view shows some of the most useful distributions specific to the selected failing operation. You'll see the top three response codes, the top three exception types, and the top three failing dependency types.
-To review representative samples for each of these subsets of operations, select the corresponding link. As an example, to diagnose exceptions, you can select the count of a particular exception to be presented with the **End-to-end transaction** details tab:
+![Screenshot that shows a failures triage view on the Operations tab.](./media/asp-net-exceptions/failures0719.png)
-![End-to-end transaction details tab](./media/asp-net-exceptions/end-to-end.png)
+To review representative samples for each of these subsets of operations, select the corresponding link. As an example, to diagnose exceptions, you can select the count of a particular exception to be presented with the **End-to-end transaction details** tab.
-Alternatively, instead of looking at exceptions of a specific failing operation, you can start from the **Overall** view of exceptions, by switching to the **Exceptions** tab at the top. Here you can see all the exceptions collected for your monitored app.
+![Screenshot that shows the End-to-end transaction details tab.](./media/asp-net-exceptions/end-to-end.png)
+
+Alternatively, instead of looking at exceptions of a specific failing operation, you can start from the **Overall** view of exceptions by switching to the **Exceptions** tab at the top. Here you can see all the exceptions collected for your monitored app.
## Custom tracing and log data
To get diagnostic data specific to your app, you can insert code to send your ow
Using the <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient?displayProperty=fullName>, you have several APIs available:
-* <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient.TrackEvent%2A?displayProperty=nameWithType> is typically used for monitoring usage patterns, but the data it sends also appears under **Custom Events** in diagnostic search. Events are named, and can carry string properties and numeric metrics on which you can [filter your diagnostic searches](./diagnostic-search.md).
+* <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient.TrackEvent%2A?displayProperty=nameWithType> is typically used for monitoring usage patterns, but the data it sends also appears under **Custom Events** in diagnostic search. Events are named and can carry string properties and numeric metrics on which you can [filter your diagnostic searches](./diagnostic-search.md).
* <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient.TrackTrace%2A?displayProperty=nameWithType> lets you send longer data such as POST information.
* <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient.TrackException%2A?displayProperty=nameWithType> sends exception details, such as stack traces, to Application Insights (see the sketch after this list).
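For illustration, here's a minimal C# sketch of these three calls; the event name, property, and metric values are hypothetical:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.ApplicationInsights;

var telemetry = new TelemetryClient();

// Named event with a string property and a numeric metric to filter on later.
telemetry.TrackEvent("GameWon",
    properties: new Dictionary<string, string> { { "game", "checkers" } },
    metrics: new Dictionary<string, double> { { "score", 42 } });

// Longer free-form data, such as POST payloads.
telemetry.TrackTrace("Slow response - database01");

// Exception details, including the stack trace.
try { throw new InvalidOperationException(); }
catch (Exception ex) { telemetry.TrackException(ex); }
```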
-To see these events, open [Search](./diagnostic-search.md) from the left menu, select the drop-down menu **Event types**, and then choose **Custom Event**, **Trace**, or **Exception**.
+To see these events, on the left menu, open [Search](./diagnostic-search.md). Select the dropdown menu **Event types**, and then choose **Custom Event**, **Trace**, or **Exception**.
-![Drill through](./media/asp-net-exceptions/customevents.png)
+![Screenshot that shows the Search screen.](./media/asp-net-exceptions/customevents.png)
> [!NOTE]
-> If your app generates a lot of telemetry, the adaptive sampling module will automatically reduce the volume that is sent to the portal by sending only a representative fraction of events. Events that are part of the same operation will be selected or deselected as a group, so that you can navigate between related events. For more information, see [Sampling in Application Insights](./sampling.md).
+> If your app generates a lot of telemetry, the adaptive sampling module will automatically reduce the volume that's sent to the portal by sending only a representative fraction of events. Events that are part of the same operation will be selected or deselected as a group so that you can navigate between related events. For more information, see [Sampling in Application Insights](./sampling.md).
-### How to see request POST data
+### See request POST data
Request details don't include the data sent to your app in a POST call. To have this data reported:

* [Install the SDK](./asp-net.md) in your application project.
-* Insert code in your application to call [Microsoft.ApplicationInsights.TrackTrace()](./api-custom-events-metrics.md#tracktrace). Send the POST data in the message parameter. There is a limit to the permitted size, so you should try to send just the essential data.
+* Insert code in your application to call [Microsoft.ApplicationInsights.TrackTrace()](./api-custom-events-metrics.md#tracktrace). Send the POST data in the message parameter. There's a limit to the permitted size, so you should try to send only the essential data. (See the sketch after this list.)
* When you investigate a failed request, find the associated traces.
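A rough sketch of the second step, assuming an ASP.NET app where the request body is readable; the method name, route label, and 1,024-character cap are illustrative:

```csharp
using System;
using System.IO;
using Microsoft.ApplicationInsights;

public static void ReportPostData(System.Web.HttpRequestBase request)
{
    // Read the POST body. (Illustrative; rewind the stream if something already read it.)
    request.InputStream.Position = 0;
    string postData = new StreamReader(request.InputStream).ReadToEnd();

    // Send only the essential data to stay under the size limit.
    var telemetry = new TelemetryClient();
    telemetry.TrackTrace("POST /api/orders: " +
        postData.Substring(0, Math.Min(postData.Length, 1024)));
}
```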
-## <a name="exceptions"></a> Capturing exceptions and related diagnostic data
-At first, you won't see in the portal all the exceptions that cause failures in your app. You'll see any browser exceptions (if you're using the [JavaScript SDK](./javascript.md) in your web pages). But most server exceptions are caught by IIS and you have to write a bit of code to see them.
+## <a name="exceptions"></a> Capture exceptions and related diagnostic data
+
+At first, you won't see in the portal all the exceptions that cause failures in your app. You'll see any browser exceptions, if you're using the [JavaScript SDK](./javascript.md) in your webpages. But most server exceptions are caught by IIS and you have to write a bit of code to see them.
You can:

* **Log exceptions explicitly** by inserting code in exception handlers to report the exceptions.
* **Capture exceptions automatically** by configuring your ASP.NET framework. The necessary additions are different for different types of framework.
-## Reporting exceptions explicitly
+## Report exceptions explicitly
-The simplest way is to insert a call to `trackException()` in an exception handler.
+The simplest way to report is to insert a call to `trackException()` in an exception handler.
```javascript
try
Catch ex as Exception
End Try
```
-The properties and measurements parameters are optional, but are useful for [filtering and adding](./diagnostic-search.md) extra information. For example, if you have an app that can run several games, you could find all the exception reports related to a particular game. You can add as many items as you like to each dictionary.
+The properties and measurements parameters are optional, but they're useful for [filtering and adding](./diagnostic-search.md) extra information. For example, if you have an app that can run several games, you could find all the exception reports related to a particular game. You can add as many items as you want to each dictionary.
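In C#, for example, such a call might look like this sketch; `currentGame` is a hypothetical object holding the game's name and player count:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.ApplicationInsights;

try
{
    // ...run the game...
}
catch (Exception ex)
{
    var telemetry = new TelemetryClient();
    var properties = new Dictionary<string, string>
        { { "Game", currentGame.Name } };            // currentGame is hypothetical
    var measurements = new Dictionary<string, double>
        { { "Users", currentGame.Users.Count } };
    telemetry.TrackException(ex, properties, measurements);
}
```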
## Browser exceptions

Most browser exceptions are reported.
-If your web page includes script files from content delivery networks or other domains, ensure your script tag has the attribute `crossorigin="anonymous"`, and that the server sends [CORS headers](https://enable-cors.org/). This will allow you to get a stack trace and detail for unhandled JavaScript exceptions from these resources.
+If your webpage includes script files from content delivery networks or other domains, ensure your script tag has the attribute `crossorigin="anonymous"` and that the server sends [CORS headers](https://enable-cors.org/). This behavior will allow you to get a stack trace and detail for unhandled JavaScript exceptions from these resources.
## Reuse your telemetry client

> [!NOTE]
-> The `TelemetryClient` is recommended to be instantiated once, and re-used throughout the life of an application.
+> We recommend that you instantiate the `TelemetryClient` once and reuse it throughout the life of an application.
With [Dependency Injection (DI) in .NET](/dotnet/core/extensions/dependency-injection), the appropriate .NET SDK, and correctly configuring Application Insights for DI, you can require the <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient> as a constructor parameter.
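For instance, a minimal sketch of that pattern (the service class name is illustrative):

```csharp
using Microsoft.ApplicationInsights;

public class OrderService
{
    private readonly TelemetryClient _telemetryClient;

    // The DI container supplies the TelemetryClient instance.
    public OrderService(TelemetryClient telemetryClient)
    {
        _telemetryClient = telemetryClient;
    }

    public void Process()
    {
        _telemetryClient.TrackEvent("OrderProcessed");
    }
}
```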
In the preceding example, the `_telemetryClient` is a class-scoped variable of t
## MVC
-Starting with Application Insights Web SDK version 2.6 (beta3 and later), Application Insights collects unhandled exceptions thrown in the MVC 5+ controllers methods automatically. If you have previously added a custom handler to track such exceptions, you may remove it to prevent double tracking of exceptions.
+Starting with Application Insights Web SDK version 2.6 (beta 3 and later), Application Insights collects unhandled exceptions thrown in the MVC 5+ controllers methods automatically. If you've previously added a custom handler to track such exceptions, you can remove it to prevent double tracking of exceptions.
-There are a number of scenarios when an exception filter cannot correctly handle errors, when exceptions are thrown:
+There are several scenarios when an exception filter can't correctly handle errors when exceptions are thrown:
-* From controller constructors.
-* From message handlers.
-* During routing.
-* During response content serialization.
-* During application start-up.
-* In background tasks.
+* From controller constructors
+* From message handlers
+* During routing
+* During response content serialization
+* During application start-up
+* In background tasks
-All exceptions *handled* by application still need to be tracked manually.
-Unhandled exceptions originating from controllers typically result in 500 "Internal Server Error" response. If such response is manually constructed as a result of handled exception (or no exception at all) it is tracked in corresponding request telemetry with `ResultCode` 500, however Application Insights SDK is unable to track corresponding exception.
+All exceptions *handled* by the application still need to be tracked manually. Unhandled exceptions originating from controllers typically result in a 500 "Internal Server Error" response. If such a response is manually constructed as a result of a handled exception, or no exception at all, it's tracked in the corresponding request telemetry with `ResultCode` 500. However, the Application Insights SDK is unable to track a corresponding exception.
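To track such a handled exception yourself, a sketch like the following might be used; the controller and action names are illustrative:

```csharp
using System;
using System.Web.Mvc;
using Microsoft.ApplicationInsights;

public class StockController : Controller
{
    public ActionResult Index()
    {
        try
        {
            // ...work that might fail...
            return View();
        }
        catch (Exception ex)
        {
            // Handled exceptions aren't collected automatically, so track them
            // before returning the manually constructed 500 response.
            new TelemetryClient().TrackException(ex);
            return new HttpStatusCodeResult(500);
        }
    }
}
```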
### Prior versions support

If you use MVC 4 (and earlier) of Application Insights Web SDK 2.5 (and earlier), refer to the following examples to track exceptions.
-If the [CustomErrors](/previous-versions/dotnet/netframework-4.0/h0hfz6fc(v=vs.100)) configuration is `Off`, then exceptions will be available for the [HTTP Module](/previous-versions/dotnet/netframework-3.0/ms178468(v=vs.85)) to collect. However, if it is `RemoteOnly` (default), or `On`, then the exception will be cleared and not available for Application Insights to automatically collect. You can fix that by overriding the [System.Web.Mvc.HandleErrorAttribute class](/dotnet/api/system.web.mvc.handleerrorattribute), and applying the overridden class as shown for the different MVC versions below ([GitHub source](https://github.com/AppInsightsSamples/Mvc2UnhandledExceptions/blob/master/MVC2App/Controllers/AiHandleErrorAttribute.cs)):
+If the [CustomErrors](/previous-versions/dotnet/netframework-4.0/h0hfz6fc(v=vs.100)) configuration is `Off`, exceptions will be available for the [HTTP Module](/previous-versions/dotnet/netframework-3.0/ms178468(v=vs.85)) to collect. However, if it's `RemoteOnly` (default), or `On`, the exception will be cleared and not available for Application Insights to automatically collect. You can fix that behavior by overriding the [System.Web.Mvc.HandleErrorAttribute class](/dotnet/api/system.web.mvc.handleerrorattribute) and applying the overridden class as shown for the different MVC versions here (see the [GitHub source](https://github.com/AppInsightsSamples/Mvc2UnhandledExceptions/blob/master/MVC2App/Controllers/AiHandleErrorAttribute.cs)):
```csharp
using System;
namespace MVC2App.Controllers
//The attribute should track exceptions only when CustomErrors setting is On
//if CustomErrors is Off, exceptions will be caught by AI HTTP Module
if (filterContext.HttpContext.IsCustomErrorEnabled)
- { //or reuse instance (recommended!). see note above
+ { //Or reuse instance (recommended!). See note above.
var ai = new TelemetryClient();
ai.TrackException(filterContext.Exception);
}
namespace MVC2App.Controllers
#### MVC 2
-Replace the HandleError attribute with your new attribute in your controllers.
+Replace the HandleError attribute with your new attribute in your controllers:
```csharp
namespace MVC2App.Controllers
public class MyMvcApplication : System.Web.HttpApplication
[Sample](https://github.com/AppInsightsSamples/Mvc3UnhandledExceptionTelemetry)
-#### MVC 4, MVC5
+#### MVC 4, MVC 5
Register `AiHandleErrorAttribute` as a global filter in *FilterConfig.cs*:
public class FilterConfig
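The registration body, elided above, typically looks something like this sketch, assuming the `AiHandleErrorAttribute` class shown earlier:

```csharp
using System.Web.Mvc;

public class FilterConfig
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        // Track unhandled exceptions through Application Insights.
        filters.Add(new AiHandleErrorAttribute());
    }
}
```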
## Web API
-Starting with Application Insights Web SDK version 2.6 (beta3 and later), Application Insights collects unhandled exceptions thrown in the controller methods automatically for WebAPI 2+. If you have previously added a custom handler to track such exceptions (as described in following examples), you may remove it to prevent double tracking of exceptions.
+Starting with Application Insights Web SDK version 2.6 (beta 3 and later), Application Insights collects unhandled exceptions thrown in the controller methods automatically for Web API 2+. If you've previously added a custom handler to track such exceptions, as described in the following examples, you can remove it to prevent double tracking of exceptions.
-There are a number of cases that the exception filters cannot handle. For example:
+There are several cases that the exception filters can't handle. For example:
* Exceptions thrown from controller constructors.
* Exceptions thrown from message handlers.
* Exceptions thrown during routing.
* Exceptions thrown during response content serialization.
-* Exception thrown during application start-up.
+* Exceptions thrown during application startup.
* Exceptions thrown in background tasks.
-All exceptions *handled* by application still need to be tracked manually.
-Unhandled exceptions originating from controllers typically result in 500 "Internal Server Error" response. If such response is manually constructed as a result of handled exception (or no exception at all) it is tracked in a corresponding request telemetry with `ResultCode` 500, however Application Insights SDK is unable to track corresponding exception.
+All exceptions *handled* by the application still need to be tracked manually. Unhandled exceptions originating from controllers typically result in a 500 "Internal Server Error" response. If such a response is manually constructed as a result of a handled exception, or no exception at all, it's tracked in a corresponding request telemetry with `ResultCode` 500. However, the Application Insights SDK can't track a corresponding exception.
### Prior versions support
-If you use WebAPI 1 (and prior) of Application Insights Web SDK 2.5 (and prior), refer to the following examples to track exceptions.
+If you use Web API 1 (and earlier) of Application Insights Web SDK 2.5 (and earlier), refer to the following examples to track exceptions.
#### Web API 1.x
namespace WebAPI.App_Start
public override void OnException(HttpActionExecutedContext actionExecutedContext)
{
if (actionExecutedContext != null && actionExecutedContext.Exception != null)
- { //or reuse instance (recommended!). see note above
+ { //Or reuse instance (recommended!). See note above.
var ai = new TelemetryClient();
ai.TrackException(actionExecutedContext.Exception);
}
namespace ProductsAppPureWebAPI.App_Start
}
```
-Add this to the services in WebApiConfig:
+Add this snippet to the services in `WebApiConfig`:
```csharp
using System.Web.Http;
namespace WebApi2WithMVC
As alternatives, you could:
-1. Replace the only ExceptionHandler with a custom implementation of IExceptionHandler. This is only called when the framework is still able to choose which response message to send (not when the connection is aborted for instance)
-2. Exception Filters (as described in the section on Web API 1.x controllers above) - not called in all cases.
+- Replace the only `ExceptionHandler` instance with a custom implementation of `IExceptionHandler` (sketched after this list). This exception handler is only called when the framework is still able to choose which response message to send, not when the connection is aborted, for instance.
+- Use exception filters, as described in the preceding section on Web API 1.x controllers, which aren't called in all cases.
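As a rough sketch of the first alternative (the `AiExceptionHandler` class name is illustrative):

```csharp
using System.Web.Http.ExceptionHandling;
using Microsoft.ApplicationInsights;

public class AiExceptionHandler : ExceptionHandler
{
    public override void Handle(ExceptionHandlerContext context)
    {
        if (context?.Exception != null)
        {
            // Report the exception before the framework builds the error response.
            new TelemetryClient().TrackException(context.Exception);
        }
        base.Handle(context);
    }
}

// Registration, for example in WebApiConfig.Register:
// config.Services.Replace(typeof(IExceptionHandler), new AiExceptionHandler());
```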
## WCF
-Add a class that extends Attribute and implements IErrorHandler and IServiceBehavior.
+Add a class that extends `Attribute` and implements `IErrorHandler` and `IServiceBehavior`.
```csharp
using System;
namespace WcfService4
## Exception performance counters
-If you have [installed the Azure Monitor Application Insights Agent](./status-monitor-v2-overview.md) on your server, you can get a chart of the exceptions rate, measured by .NET. This includes both handled and unhandled .NET exceptions.
+If you've [installed the Azure Monitor Application Insights Agent](./status-monitor-v2-overview.md) on your server, you can get a chart of the exceptions rate, measured by .NET. Both handled and unhandled .NET exceptions are included.
-Open a Metric Explorer tab, add a new chart, and select **Exception rate**, listed under Performance Counters.
+Open a metrics explorer tab and add a new chart. Under **Performance Counters**, select **Exception rate**.
-The .NET framework calculates the rate by counting the number of exceptions in an interval and dividing by the length of the interval.
+The .NET Framework calculates the rate by counting the number of exceptions in an interval and dividing by the length of the interval.
-This is different from the 'Exceptions' count calculated by the Application Insights portal counting TrackException reports. The sampling intervals are different, and the SDK doesn't send TrackException reports for all handled and unhandled exceptions.
+This count is different from the Exceptions count calculated by the Application Insights portal counting `TrackException` reports. The sampling intervals are different, and the SDK doesn't send `TrackException` reports for all handled and unhandled exceptions.
## Next steps
azure-monitor Asp Net Trace Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-trace-logs.md
# Explore .NET/.NET Core and Python trace logs in Application Insights
-Send diagnostic tracing logs for your ASP.NET/ASP.NET Core application from ILogger, NLog, log4Net, or System.Diagnostics.Trace to [Azure Application Insights][start]. For Python applications, send diagnostic tracing logs using AzureLogHandler in OpenCensus Python for Azure Monitor. You can then explore and search them. Those logs are merged with the other log files from your application, so you can identify traces that are associated with each user request and correlate them with other events and exception reports.
+Send diagnostic tracing logs for your ASP.NET/ASP.NET Core application from ILogger, NLog, log4Net, or System.Diagnostics.Trace to [Azure Application Insights][start]. For Python applications, send diagnostic tracing logs by using AzureLogHandler in OpenCensus Python for Azure Monitor. You can then explore and search for them. Those logs are merged with the other log files from your application. You can use them to identify traces that are associated with each user request and correlate them with other events and exception reports.
> [!NOTE]
-> Do you need the log-capture module? It's a useful adapter for third-party loggers. But if you aren't already using NLog, log4Net, or System.Diagnostics.Trace, consider just calling [**Application Insights TrackTrace()**](./api-custom-events-metrics.md#tracktrace) directly.
+> Do you need the log-capture module? It's a useful adapter for third-party loggers. But if you aren't already using NLog, log4Net, or System.Diagnostics.Trace, consider calling [**Application Insights TrackTrace()**](./api-custom-events-metrics.md#tracktrace) directly.
>
> [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]

## Install logging on your app
-Install your chosen logging framework in your project, which should result in an entry in app.config or web.config.
+
+Install your chosen logging framework in your project, which should result in an entry in *app.config* or *web.config*.
```xml
<configuration>
Install your chosen logging framework in your project, which should result in an
```

## Configure Application Insights to collect logs

[Add Application Insights to your project](./asp-net.md) if you haven't done that yet. You'll see an option to include the log collector. Or right-click your project in Solution Explorer to **Configure Application Insights**. Select the **Configure trace collection** option.
Or right-click your project in Solution Explorer to **Configure Application Insi
> No Application Insights menu or log collector option? Try [Troubleshooting](#troubleshooting).

## Manual installation
-Use this method if your project type isn't supported by the Application Insights installer (for example a Windows desktop project).
-1. If you plan to use log4Net or NLog, install it in your project.
-2. In Solution Explorer, right-click your project, and select **Manage NuGet Packages**.
-3. Search for "Application Insights."
-4. Select one of the following packages:
+Use this method if your project type isn't supported by the Application Insights installer. An example is a Windows desktop project.
+
+1. If you plan to use log4net or NLog, install it in your project.
+1. In Solution Explorer, right-click your project, and select **Manage NuGet Packages**.
+1. Search for **Application Insights**.
+1. Select one of the following packages:
- - For ILogger: [Microsoft.Extensions.Logging.ApplicationInsights](https://www.nuget.org/packages/Microsoft.Extensions.Logging.ApplicationInsights/)
+ - **ILogger**: [Microsoft.Extensions.Logging.ApplicationInsights](https://www.nuget.org/packages/Microsoft.Extensions.Logging.ApplicationInsights/)
[![NuGet iLogger banner](https://img.shields.io/nuget/vpre/Microsoft.Extensions.Logging.ApplicationInsights.svg)](https://www.nuget.org/packages/Microsoft.Extensions.Logging.ApplicationInsights/)
- - For NLog: [Microsoft.ApplicationInsights.NLogTarget](https://www.nuget.org/packages/Microsoft.ApplicationInsights.NLogTarget/)
+ - **NLog**: [Microsoft.ApplicationInsights.NLogTarget](https://www.nuget.org/packages/Microsoft.ApplicationInsights.NLogTarget/)
[![NuGet NLog banner](https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.NLogTarget.svg)](https://www.nuget.org/packages/Microsoft.ApplicationInsights.NLogTarget/)
- - For Log4Net: [Microsoft.ApplicationInsights.Log4NetAppender](https://www.nuget.org/packages/Microsoft.ApplicationInsights.Log4NetAppender/)
+ - **log4net**: [Microsoft.ApplicationInsights.Log4NetAppender](https://www.nuget.org/packages/Microsoft.ApplicationInsights.Log4NetAppender/)
[![NuGet Log4Net banner](https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.Log4NetAppender.svg)](https://www.nuget.org/packages/Microsoft.ApplicationInsights.Log4NetAppender/)
- - For System.Diagnostics: [Microsoft.ApplicationInsights.TraceListener](https://www.nuget.org/packages/Microsoft.ApplicationInsights.TraceListener/)
+ - **System.Diagnostics**: [Microsoft.ApplicationInsights.TraceListener](https://www.nuget.org/packages/Microsoft.ApplicationInsights.TraceListener/)
[![NuGet System.Diagnostics banner](https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.TraceListener.svg)](https://www.nuget.org/packages/Microsoft.ApplicationInsights.TraceListener/)
- [Microsoft.ApplicationInsights.DiagnosticSourceListener](https://www.nuget.org/packages/Microsoft.ApplicationInsights.DiagnosticSourceListener/)
[![NuGet Diagnostic Source Listener banner](https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.DiagnosticSourceListener.svg)](https://www.nuget.org/packages/Microsoft.ApplicationInsights.DiagnosticSourceListener/)
The NuGet package installs the necessary assemblies and modifies web.config or a
For examples of using the Application Insights ILogger implementation with console applications and ASP.NET Core, see [ApplicationInsightsLoggerProvider for .NET Core ILogger logs](ilogger.md).

## Insert diagnostic log calls

If you use System.Diagnostics.Trace, a typical call would be:

```csharp
If you prefer log4net or NLog, use:
```

## Use EventSource events

You can configure [System.Diagnostics.Tracing.EventSource](/dotnet/api/system.diagnostics.tracing.eventsource) events to be sent to Application Insights as traces. First, install the `Microsoft.ApplicationInsights.EventSourceListener` NuGet package. Then edit the `TelemetryModules` section of the [ApplicationInsights.config](./configuration-with-applicationinsights-config.md) file.

```xml
You can configure [System.Diagnostics.Tracing.EventSource](/dotnet/api/system.di
```

For each source, you can set the following parameters:

* **Name** specifies the name of the EventSource to collect.
* **Level** specifies the logging level to collect: *Critical*, *Error*, *Informational*, *LogAlways*, *Verbose*, or *Warning*.
* **Keywords** (optional) specify the integer value of keyword combinations to use.

## Use DiagnosticSource events

You can configure [System.Diagnostics.DiagnosticSource](https://github.com/dotnet/corefx/blob/master/src/System.Diagnostics.DiagnosticSource/src/DiagnosticSourceUsersGuide.md) events to be sent to Application Insights as traces. First, install the [`Microsoft.ApplicationInsights.DiagnosticSourceListener`](https://www.nuget.org/packages/Microsoft.ApplicationInsights.DiagnosticSourceListener) NuGet package. Then edit the "TelemetryModules" section of the [ApplicationInsights.config](./configuration-with-applicationinsights-config.md) file.

```xml
You can configure [System.Diagnostics.DiagnosticSource](https://github.com/dotne
</Add>
```
-For each DiagnosticSource you want to trace, add an entry with the **Name** attribute set to the name of your DiagnosticSource.
+For each diagnostic source you want to trace, add an entry with the `Name` attribute set to the name of your diagnostic source.
## Use ETW events

You can configure Event Tracing for Windows (ETW) events to be sent to Application Insights as traces. First, install the `Microsoft.ApplicationInsights.EtwCollector` NuGet package. Then edit the "TelemetryModules" section of the [ApplicationInsights.config](./configuration-with-applicationinsights-config.md) file.

> [!NOTE]
You can configure Event Tracing for Windows (ETW) events to be sent to Applicati
```

For each source, you can set the following parameters:

* **ProviderName** is the name of the ETW provider to collect.
* **ProviderGuid** specifies the GUID of the ETW provider to collect. It can be used instead of `ProviderName`.
* **Level** sets the logging level to collect. It can be *Critical*, *Error*, *Informational*, *LogAlways*, *Verbose*, or *Warning*.
* **Keywords** (optional) set the integer value of keyword combinations to use.

## Use the Trace API directly

You can call the Application Insights trace API directly. The logging adapters use this API. For example:
var telemetryClient = new TelemetryClient(configuration);
telemetryClient.TrackTrace("Slow response - database01");
```
-An advantage of TrackTrace is that you can put relatively long data in the message. For example, you can encode POST data there.
+An advantage of `TrackTrace` is that you can put relatively long data in the message. For example, you can encode POST data there.
You can also add a severity level to your message. And, like other telemetry, you can add property values to help filter or search for different sets of traces. For example:
You can also add a severity level to your message. And, like other telemetry, yo
new Dictionary<string, string> { { "database", "db.ID" } });
```
-This would enable you to easily filter out in [Search][diagnostic] all the messages of a particular severity level that relate to a particular database.
+Now you can easily filter out in [Search][diagnostic] all the messages of a particular severity level that relate to a particular database.
## AzureLogHandler for OpenCensus Python

The Azure Monitor Log Handler allows you to export Python logs to Azure Monitor. Instrument your application with the [OpenCensus Python SDK](./opencensus-python.md) for Azure Monitor.
logger.warning('Hello, World!')
```

## Explore your logs

Run your app in debug mode or deploy it live.
-In your app's overview pane in [the Application Insights portal][portal], select [Search][diagnostic].
+In your app's overview pane in the [Application Insights portal][portal], select [Search][diagnostic].
You can, for example:

* Filter on log traces or on items with specific properties.
* Inspect a specific item in detail.
-* Find other system log data that relates to the same user request (has the same OperationId).
+* Find other system log data that relates to the same user request (has the same operation ID).
* Save the configuration of a page as a favorite.

> [!NOTE]
-> If your application sends a lot of data and you're using the Application Insights SDK for ASP.NET version 2.0.0-beta3 or later, the *adaptive sampling* feature may operate and send only a portion of your telemetry. [Learn more about sampling.](./sampling.md)
+> If your application sends a lot of data and you're using the Application Insights SDK for ASP.NET version 2.0.0-beta3 or later, the *adaptive sampling* feature might operate and send only a portion of your telemetry. Learn more about [sampling](./sampling.md).
>

## Troubleshooting
-### Delayed telemetry, overloading network, or inefficient transmission
-System.Diagnostics.Tracing has an [Autoflush feature](/dotnet/api/system.diagnostics.trace.autoflush). This causes SDK to flush with every telemetry item, which is undesirable, and can cause logging adapter issues like delayed telemetry, overloading network, inefficient transmission, etc.
+Find answers to common questions.
+### What causes delayed telemetry, an overloaded network, and inefficient transmission?
+System.Diagnostics.Tracing has an [Autoflush feature](/dotnet/api/system.diagnostics.trace.autoflush). This feature causes the SDK to flush with every telemetry item, which is undesirable and can cause logging adapter issues like delayed telemetry, an overloaded network, and inefficient transmission.
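One possible mitigation, sketched here as an assumption rather than a prescribed fix, is to turn off autoflush in code (it can also be set through the `<trace>` element in configuration):

```csharp
using System.Diagnostics;

// Let the trace listener batch telemetry instead of flushing on every call.
Trace.AutoFlush = false;
```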
### How do I do this for Java?
-The Application Insights Java agent collects logs from Log4j, Logback and java.util.logging out of the box.
+In Java codeless instrumentation, which is recommended, the logs are collected out of the box. Use [Java 3.0 agent](./java-in-process-agent.md).
+
+The Application Insights Java agent collects logs from Log4j, Logback, and java.util.logging out of the box.
+
+### Why is there no Application Insights option on the project context menu?
+
+* Make sure that Developer Analytics Tools is installed on the development machine. In Visual Studio, go to **Tools** > **Extensions and Updates**, and look for **Developer Analytics Tools**. If it isn't on the **Installed** tab, open the **Online** tab and install it.
+* This project type might be one that Developer Analytics Tools doesn't support. Use [manual installation](#manual-installation).
-### There's no Application Insights option on the project context menu
-* Make sure that Developer Analytics Tools is installed on the development machine. At Visual Studio **Tools** > **Extensions and Updates**, look for **Developer Analytics Tools**. If it isn't on the **Installed** tab, open the **Online** tab and install it.
-* This might be a project type that Developer Analytics Tools doesn't support. Use [manual installation](#manual-installation).
+### Why is there no log adapter option in the configuration tool?
-### There's no log adapter option in the configuration tool
* Install the logging framework first.
-* If you're using System.Diagnostics.Trace, make sure that you've it [configured in *web.config*](/dotnet/api/system.diagnostics.eventlogtracelistener).
-* Make sure that you have the latest version of Application Insights. In Visual Studio, go to **Tools** > **Extensions and Updates**, and open the **Updates** tab. If **Developer Analytics Tools** is there, select it to update it.
+* If you're using System.Diagnostics.Trace, make sure that you've [configured it in *web.config*](/dotnet/api/system.diagnostics.eventlogtracelistener).
+* Make sure that you have the latest version of Application Insights. In Visual Studio, go to **Tools** > **Extensions and Updates** and open the **Updates** tab. If **Developer Analytics Tools** is there, select it to update it.
+
+### <a name="emptykey"></a>Why do I get the "Instrumentation key cannot be empty" error message?
-### <a name="emptykey"></a>I get the "Instrumentation key cannot be empty" error message
You probably installed the logging adapter NuGet package without installing Application Insights. In Solution Explorer, right-click *ApplicationInsights.config*, and select **Update Application Insights**. You'll be prompted to sign in to Azure and create an Application Insights resource or reuse an existing one. That should fix the problem.
-### I can see traces but not other events in diagnostic search
+### Why can I see traces but not other events in diagnostic search?
+
It can take a while for all the events and requests to get through the pipeline.

### <a name="limits"></a>How much data is retained?
-Several factors affect the amount of data that's retained. For more information, see the [limits](./api-custom-events-metrics.md#limits) section of the customer event metrics page.
-### I don't see some log entries that I expected
-If your application sends voluminous amounts of data and you're using the Application Insights SDK for ASP.NET version 2.0.0-beta3 or later, the adaptive sampling feature may operate and send only a portion of your telemetry. [Learn more about sampling.](./sampling.md)
+Several factors affect the amount of data that's retained. For more information, see the [Limits](./api-custom-events-metrics.md#limits) section of the customer event metrics page.
+
+### Why don't I see some log entries that I expected?
+
+Perhaps your application sends voluminous amounts of data and you're using the Application Insights SDK for ASP.NET version 2.0.0-beta3 or later. In this case, the adaptive sampling feature might operate and send only a portion of your telemetry. Learn more about [sampling](./sampling.md).
## <a name="add"></a>Next steps
If your application sends voluminous amounts of data and you're using the Applic
[exceptions]: asp-net-exceptions.md
[portal]: https://portal.azure.com/
[qna]: ../faq.yml
-[start]: ./app-insights-overview.md
+[start]: ./app-insights-overview.md
azure-monitor Azure Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps.md
Title: Monitor Azure app services performance | Microsoft Docs
-description: Application performance monitoring for Azure app services. Chart load and response time, dependency information, and set alerts on performance.
+ Title: Monitor Azure App Service performance | Microsoft Docs
+description: Application performance monitoring for Azure App Service. Chart load and response time, dependency information, and set alerts on performance.
Last updated 08/05/2021
-# Application Monitoring for Azure App Service Overview
+# Application monitoring for Azure App Service overview
-Enabling monitoring on your ASP.NET, ASP.NET Core, Java, and Node.js based web applications running on [Azure App Services](../../app-service/index.yml) is now easier than ever. Whereas previously you needed to manually instrument your app, the latest extension/agent is now built into the App Service image by default.
+It's now easier than ever to enable monitoring on your web applications based on ASP.NET, ASP.NET Core, Java, and Node.js running on [Azure App Service](../../app-service/index.yml). Previously, you needed to manually instrument your app, but the latest extension/agent is now built into the App Service image by default.
## Enable Application Insights
-There are two ways to enable application monitoring for Azure App Services hosted applications:
+There are two ways to enable monitoring for applications hosted on App Service:
-- **Auto-instrumentation application monitoring** (ApplicationInsightsAgent).
-
- - This method is the easiest to enable, and no code change or advanced configurations are required. It is often referred to as "runtime" monitoring. For Azure App Services we recommend at a minimum enabling this level of monitoring, and then based on your specific scenario you can evaluate whether more advanced monitoring through manual instrumentation is needed.
+- **Auto-instrumentation application monitoring** (ApplicationInsightsAgent).
+
+ This method is the easiest to enable, and no code change or advanced configurations are required. It's often referred to as "runtime" monitoring. For App Service, we recommend that at a minimum you enable this level of monitoring. Based on your specific scenario, you can evaluate whether more advanced monitoring through manual instrumentation is needed.
+
+ The following platforms are supported for auto-instrumentation monitoring:
+
+ - [.NET Core](./azure-web-apps-net-core.md)
+ - [.NET](./azure-web-apps-net.md)
+ - [Java](./azure-web-apps-java.md)
+ - [Node.js](./azure-web-apps-nodejs.md)
- - The following are supported for auto-instrumentation monitoring:
- - [.NET Core](./azure-web-apps-net-core.md)
- - [.NET](./azure-web-apps-net.md)
- - [Java](./azure-web-apps-java.md)
- - [Nodejs](./azure-web-apps-nodejs.md)
-
* **Manually instrumenting the application through code** by installing the Application Insights SDK.
- * This approach is much more customizable, but it requires the following approaches: SDK for [.NET Core](./asp-net-core.md), [.NET](./asp-net.md), [Node.js](./nodejs.md), [Python](./opencensus-python.md), and a standalone agent for [Java](./java-in-process-agent.md). This method, also means you have to manage the updates to the latest version of the packages yourself.
+ This approach is much more customizable, but it requires the following approaches: SDK for [.NET Core](./asp-net-core.md), [.NET](./asp-net.md), [Node.js](./nodejs.md), [Python](./opencensus-python.md), and a standalone agent for [Java](./java-in-process-agent.md). This method also means you must manage the updates to the latest version of the packages yourself.
+
+ If you need to make custom API calls to track events/dependencies not captured by default with auto-instrumentation monitoring, you'll need to use this method. To learn more, see [Application Insights API for custom events and metrics](./api-custom-events-metrics.md).
- * If you need to make custom API calls to track events/dependencies not captured by default with auto-instrumentation monitoring, you would need to use this method. Check out the [API for custom events and metrics article](./api-custom-events-metrics.md) to learn more.
+If both auto-instrumentation monitoring and manual SDK-based instrumentation are detected, only the manual instrumentation settings will be honored in .NET, and only the auto-instrumentation will emit the telemetry in Java. This practice prevents duplicate data from being sent.
> [!NOTE]
-> If both auto-instrumentation monitoring and manual SDK-based instrumentation are detected, in .NET only the manual instrumentation settings will be honored, while in Java only the auto-instrumentation will be emitting the telemetry. This is to prevent duplicate data from being sent.
+> Snapshot Debugger and Profiler are only available in .NET and .NET Core.
-> [!NOTE]
-> Snapshot debugger and profiler are only available in .NET and .NET Core
+## Next steps
-## Next Steps
-- Learn how to enable auto-instrumentation application monitoring for your [.NET Core](./azure-web-apps-net-core.md), [.NET](./azure-web-apps-net.md), [Java](./azure-web-apps-java.md) or [Nodejs](./azure-web-apps-nodejs.md) application running on App Service.
+Learn how to enable auto-instrumentation application monitoring for your [.NET Core](./azure-web-apps-net-core.md), [.NET](./azure-web-apps-net.md), [Java](./azure-web-apps-java.md), or [Node.js](./azure-web-apps-nodejs.md) application running on App Service.
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Once the migration is complete, you can use [diagnostic settings](../essentials/
- Check your current retention settings under **General** > **Usage and estimated costs** > **Data Retention** for your Log Analytics workspace. This setting will affect how long any new ingested data is stored once you migrate your Application Insights resource.

> [!NOTE]
- > - If you currently store Application Insights data for longer than the default 90 days and want to retain this larger retention period after migration, you will need to adjust your [workspace retention settings](https://docs.microsoft.com/azure/azure-monitor/logs/data-retention-archive?tabs=portal-1%2Cportal-2#set-retention-and-archive-policy-by-table) from the default 90 days to the desired longer retention period.
+ > - If you currently store Application Insights data for longer than the default 90 days and want to retain this larger retention period after migration, you will need to adjust your [workspace retention settings](/azure/azure-monitor/logs/data-retention-archive?tabs=portal-1%2Cportal-2#set-retention-and-archive-policy-by-table) from the default 90 days to the desired longer retention period.
> - If you've selected data retention greater than 90 days on data ingested into the classic Application Insights resource prior to migration, data retention will continue to be billed through that Application Insights resource until that data exceeds the retention period.
> - If the retention setting for your Application Insights instance under **Configure** > **Usage and estimated costs** > **Data Retention** is enabled, then use that setting to control the retention days for the telemetry data still saved in your classic resource's storage.
azure-monitor Correlation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/correlation.md
Title: Azure Application Insights telemetry correlation | Microsoft Docs
-description: Application Insights telemetry correlation
+description: This article explains Application Insights telemetry correlation.
Last updated 06/07/2019 ms.devlang: csharp, java, javascript, python
This article explains the data model used by Application Insights to correlate t
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] - ## Data model for telemetry correlation Application Insights defines a [data model](../../azure-monitor/app/data-model.md) for distributed telemetry correlation. To associate telemetry with a logical operation, every telemetry item has a context field called `operation_Id`. This identifier is shared by every telemetry item in the distributed trace. So even if you lose telemetry from a single layer, you can still associate telemetry reported by other components.
Every outgoing operation, such as an HTTP call to another component, is represen
You can build a view of the distributed logical operation by using `operation_Id`, `operation_parentId`, and `request.id` with `dependency.id`. These fields also define the causality order of telemetry calls.
-In a microservices environment, traces from components can go to different storage items. Every component can have its own connection string in Application Insights. To get telemetry for the logical operation, Application Insights queries data from every storage item. When the number of storage items is large, you'll need a hint about where to look next. The Application Insights data model defines two fields to solve this problem: `request.source` and `dependency.target`. The first field identifies the component that initiated the dependency request. The second field identifies which component returned the response of the dependency call.
+In a microservices environment, traces from components can go to different storage items. Every component can have its own connection string in Application Insights. To get telemetry for the logical operation, Application Insights queries data from every storage item.
+
+When the number of storage items is large, you'll need a hint about where to look next. The Application Insights data model defines two fields to solve this problem: `request.source` and `dependency.target`. The first field identifies the component that initiated the dependency request. The second field identifies which component returned the response of the dependency call.
-For information on querying from multiple disparate instances using the `app` query expression, see [app() expression in Azure Monitor query](../logs/app-expression.md#app-expression-in-azure-monitor-query).
+For information on querying from multiple disparate instances by using the `app` query expression, see [app() expression in Azure Monitor query](../logs/app-expression.md#app-expression-in-azure-monitor-query).
## Example
In the results, all telemetry items share the root `operation_Id`. When an Ajax
| request | GET Home/Stock | KqKwlrSt9PA= | qJSXU | STYz |
| dependency | GET /api/stock/value | bBrf2L7mm2g= | KqKwlrSt9PA= | STYz |
-When the call `GET /api/stock/value` is made to an external service, you need to know the identity of that server so you can set the `dependency.target` field appropriately. When the external service doesn't support monitoring, `target` is set to the host name of the service (for example, `stock-prices-api.com`). But if the service identifies itself by returning a predefined HTTP header, `target` contains the service identity that allows Application Insights to build a distributed trace by querying telemetry from that service.
+When the call `GET /api/stock/value` is made to an external service, you need to know the identity of that server so you can set the `dependency.target` field appropriately. When the external service doesn't support monitoring, `target` is set to the host name of the service. An example is `stock-prices-api.com`. But if the service identifies itself by returning a predefined HTTP header, `target` contains the service identity that allows Application Insights to build a distributed trace by querying telemetry from that service.
## Correlation headers using W3C TraceContext
For more information, see [Application Insights telemetry data model](../../azur
### Enable W3C distributed tracing support for .NET apps
-W3C TraceContext based distributed tracing is enabled by default in all recent
+W3C TraceContext-based distributed tracing is enabled by default in all recent
.NET Framework/.NET Core SDKs, along with backward compatibility with legacy Request-Id protocol.

### Enable W3C distributed tracing support for Java apps

#### Java 3.0 agent
- Java 3.0 agent supports W3C out of the box and no more configuration is needed.
+ Java 3.0 agent supports W3C out of the box, and no more configuration is needed.
#### Java SDK

- **Incoming configuration**
- - For Java EE apps, add the following to the `<TelemetryModules>` tag in ApplicationInsights.xml:
+ For Java EE apps, add the following code to the `<TelemetryModules>` tag in *ApplicationInsights.xml*:
- ```xml
- <Add type="com.microsoft.applicationinsights.web.extensibility.modules.WebRequestTrackingTelemetryModule>
- <Param name = "W3CEnabled" value ="true"/>
- <Param name ="enableW3CBackCompat" value = "true" />
- </Add>
- ```
+ ```xml
+ <Add type="com.microsoft.applicationinsights.web.extensibility.modules.WebRequestTrackingTelemetryModule>
+ <Param name = "W3CEnabled" value ="true"/>
+ <Param name ="enableW3CBackCompat" value = "true" />
+ </Add>
+ ```
- - For Spring Boot apps, add these properties:
+ For Spring Boot apps, add these properties:
- - `azure.application-insights.web.enable-W3C=true`
- - `azure.application-insights.web.enable-W3C-backcompat-mode=true`
+ - `azure.application-insights.web.enable-W3C=true`
+ - `azure.application-insights.web.enable-W3C-backcompat-mode=true`
- **Outgoing configuration**
- Add the following to AI-Agent.xml:
+ Add the following code to *AI-Agent.xml*:
```xml
<Instrumentation>
W3C TraceContext based distributed tracing is enabled by default in all recent
> [!NOTE]
> Backward compatibility mode is enabled by default, and the `enableW3CBackCompat` parameter is optional. Use it only when you want to turn backward compatibility off.
>
- > Ideally, you would turn this off when all your services have been updated to newer versions of SDKs that support the W3C protocol. We highly recommend that you move to these newer SDKs as soon as possible.
+ > Ideally, you'll turn off this mode when all your services are updated to newer versions of SDKs that support the W3C protocol. We highly recommend that you move to these newer SDKs as soon as possible.
-> [!IMPORTANT]
-> Make sure the incoming and outgoing configurations are exactly the same.
+It's important to make sure the incoming and outgoing configurations are exactly the same.
-### Enable W3C distributed tracing support for Web apps
+### Enable W3C distributed tracing support for web apps
This feature is in `Microsoft.ApplicationInsights.JavaScript`. It's disabled by default. To enable it, use `distributedTracingMode` config. AI_AND_W3C is provided for backward compatibility with any legacy services instrumented by Application Insights.

-- **[npm based setup](./javascript.md#npm-based-setup)**
+- **[npm-based setup](./javascript.md#npm-based-setup)**
-Add the following configuration:
+ Add the following configuration:
```JavaScript
distributedTracingMode: DistributedTracingModes.W3C
```

-- **[Snippet based setup](./javascript.md#snippet-based-setup)**
+- **[Snippet-based setup](./javascript.md#snippet-based-setup)**
-Add the following configuration:
+ Add the following configuration:
```
distributedTracingMode: 2 // DistributedTracingModes.W3C
```
Add the following configuration:
OpenCensus Python supports [W3C Trace-Context](https://w3c.github.io/trace-context/) without requiring extra configuration.
-As a reference, the OpenCensus data model can be found [here](https://github.com/census-instrumentation/opencensus-specs/tree/master/trace).
+For a reference, you can find the OpenCensus data model on [this GitHub page](https://github.com/census-instrumentation/opencensus-specs/tree/master/trace).
### Incoming request correlation
if __name__ == '__main__':
```

This code runs a sample Flask application on your local machine, listening to port `8080`. To correlate trace context, you send a request to the endpoint. In this example, you can use a `curl` command:

```
curl --header "traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01" localhost:8080
```

By looking at the [Trace-Context header format](https://www.w3.org/TR/trace-context/#trace-context-http-headers-format), you can derive the following information:

`version`: `00`
By looking at the [Trace-Context header format](https://www.w3.org/TR/trace-cont
`trace-flags`: `01`
-If you look at the request entry that was sent to Azure Monitor, you can see fields populated with the trace header information. You can find the data under Logs (Analytics) in the Azure Monitor Application Insights resource.
+If you look at the request entry that was sent to Azure Monitor, you can see fields populated with the trace header information. You can find the data under **Logs (Analytics)** in the Azure Monitor Application Insights resource.
-![Request telemetry in Logs (Analytics)](./media/opencensus-python/0011-correlation.png)
+![Screenshot that shows Request telemetry in Logs (Analytics).](./media/opencensus-python/0011-correlation.png)
-The `id` field is in the format `<trace-id>.<span-id>`, where the `trace-id` is taken from the trace header that was passed in the request and the `span-id` is a generated 8-byte array for this span.
+The `id` field is in the format `<trace-id>.<span-id>`, where `trace-id` is taken from the trace header that was passed in the request and `span-id` is a generated 8-byte array for this span.
-The `operation_ParentId` field is in the format `<trace-id>.<parent-id>`, where both the `trace-id` and the `parent-id` are taken from the trace header that was passed in the request.
+The `operation_ParentId` field is in the format `<trace-id>.<parent-id>`, where both `trace-id` and `parent-id` are taken from the trace header that was passed in the request.
### Log correlation
-OpenCensus Python enables you to correlate logs by adding a trace ID, a span ID, and a sampling flag to log records. You add these attributes by installing OpenCensus [logging integration](https://pypi.org/project/opencensus-ext-logging/). The following attributes will be added to Python `LogRecord` objects: `traceId`, `spanId`, and `traceSampled`. (applicable only for loggers that are created after the integration)
+OpenCensus Python enables you to correlate logs by adding a trace ID, a span ID, and a sampling flag to log records. You add these attributes by installing OpenCensus [logging integration](https://pypi.org/project/opencensus-ext-logging/). The following attributes will be added to Python `LogRecord` objects: `traceId`, `spanId`, and `traceSampled` (applicable only for loggers that are created after the integration).
Install the OpenCensus logging integration:
When this code runs, the following prints in the console:
2019-10-17 11:25:59,384 traceId=c54cb1d4bbbec5864bf0917c64aeacdc spanId=70da28f5a4831014 In the span
2019-10-17 11:25:59,385 traceId=c54cb1d4bbbec5864bf0917c64aeacdc spanId=0000000000000000 After the span
```

Notice that there's a `spanId` present for the log message that's within the span. The `spanId` is the same as that which belongs to the span named `hello`.
-You can export the log data by using `AzureLogHandler`. For more information, see [this article](./opencensus-python.md#logs).
+You can export the log data by using `AzureLogHandler`. For more information, see [Set up Azure Monitor for your Python application](./opencensus-python.md#logs).
-We can also pass trace information from one component to another for proper correlation. For example, consider a scenario where there are two components `module1` and `module2`. Module1 calls functions in Module2 and to get logs from both `module1` and `module2` in a single trace we can use following approach:
+We can also pass trace information from one component to another for proper correlation. For example, consider a scenario where there are two components, `module1` and `module2`. Module1 calls functions in Module2. To get logs from both `module1` and `module2` in a single trace, we can use the following approach:
```python # module1.py
The Application Insights .NET SDK uses `DiagnosticSource` and `Activity` to coll
<a name="java-correlation"></a>

## Telemetry correlation in Java
-[Application Insights Java](./java-in-process-agent.md) supports automatic correlation of telemetry.
-It automatically populates `operation_id` for all telemetry (like traces, exceptions, and custom events) issued within the scope of a request. It also propagates the correlation headers (described earlier) for service-to-service calls via HTTP, RPC, and messaging. See the list of Application Insights Java's
-[autocollected dependencies which support distributed trace propagation](java-in-process-agent.md#autocollected-dependencies).
+[Java agent](./java-in-process-agent.md) supports automatic correlation of telemetry. It automatically populates `operation_id` for all telemetry (like traces, exceptions, and custom events) issued within the scope of a request. It also propagates the correlation headers that were described earlier for service-to-service calls via HTTP, if the [Java SDK agent](java-2x-agent.md) is configured.
> [!NOTE]
-> See [custom telemetry](./java-in-process-agent.md#custom-telemetry) if the auto-instrumentation does not cover all
-> of your needs.
+> Application Insights Java agent autocollects requests and dependencies for JMS, Kafka, Netty/Webflux, and more. For Java SDK, only calls made via Apache HttpClient are supported for the correlation feature. Automatic context propagation across messaging technologies like Kafka, RabbitMQ, and Azure Service Bus isn't supported in the SDK.
+
+To collect custom telemetry, you need to instrument the application with Java 2.6 SDK.
### Role names
-You might want to customize the way component names are displayed in the [Application Map](../../azure-monitor/app/app-map.md). To do so, you can manually set the `cloud_RoleName` by taking one of the following actions:
+You might want to customize the way component names are displayed in [Application Map](../../azure-monitor/app/app-map.md). To do so, you can manually set `cloud_RoleName` by taking one of the following actions:
- For Application Insights Java, set the cloud role name as follows:
You might want to customize the way component names are displayed in the [Applic
} ```
- You can also set the cloud role name using via environment variable or system property,
- see [configuring cloud role name](./java-standalone-config.md#cloud-role-name) for details.
+ You can also set the cloud role name by using the environment variable `APPLICATIONINSIGHTS_ROLE_NAME`.
+
+- With Application Insights Java SDK 2.5.0 and later, you can specify `cloud_RoleName`
+ by adding `<RoleName>` to your *ApplicationInsights.xml* file:
+
+ :::image type="content" source="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png" alt-text="Screenshot that shows Application Insights overview and connection string." lightbox="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png":::
+
+ ```xml
+ <?xml version="1.0" encoding="utf-8"?>
+ <ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings" schemaVersion="2014-05-30">
+ <ConnectionString>InstrumentationKey=00000000-0000-0000-0000-000000000000</ConnectionString>
+ <RoleName>** Your role name **</RoleName>
+ ...
+ </ApplicationInsights>
+ ```
+
+- If you use Spring Boot with the Application Insights Spring Boot Starter, set your custom name for the application in the *application.properties* file:
+
+ `spring.application.name=<name-of-app>`
+
+You can also set the cloud role name via environment variable or system property. See [Configuring cloud role name](./java-standalone-config.md#cloud-role-name) for details.
## Next steps
azure-monitor Diagnostic Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/diagnostic-search.md
Title: Using Search in Azure Application Insights | Microsoft Docs
+ Title: Use Search in Azure Application Insights | Microsoft Docs
description: Search and filter raw telemetry sent by your web app. Last updated 07/30/2019
-# Using Search in Application Insights
+# Use Search in Application Insights
-Transaction search is a feature of [Application Insights](./app-insights-overview.md) that you use to find and explore individual telemetry items, such as page views, exceptions, or web requests. And you can view log traces and events that you have coded.
+Transaction search is a feature of [Application Insights](./app-insights-overview.md) that you use to find and explore individual telemetry items, such as page views, exceptions, or web requests. You can also view log traces and events that you've coded.
-(For more complex queries over your data, use [Analytics](../logs/log-analytics-tutorial.md).)
+For more complex queries over your data, use [Log Analytics](../logs/log-analytics-tutorial.md).
## Where do you see Search?
+You can find **Search** in the Azure portal or Visual Studio.
+ ### In the Azure portal
-You can open transaction search from the Application Insights Overview tab of your application (located at in the top bar) or under investigate on the left.
+You can open transaction search from the Application Insights **Overview** tab of your application. You can also select **Search** under **Investigate** on the left menu.
-![Search tab](./media/diagnostic-search/view-custom-events.png)
+![Screenshot that shows the Search tab.](./media/diagnostic-search/view-custom-events.png)
-Go to the Event types' drop-down menu to see a list of telemetry items- server requests, page views, custom events that you have coded, and so on. At the top of the results' list, is a summary chart showing counts of events over time.
+Go to the **Event types** dropdown menu to see a list of telemetry items such as server requests, page views, and custom events that you've coded. At the top of the **Results** list is a summary chart showing counts of events over time.
-Click out of the drop-down menu or Refresh to get new events.
+Back out of the dropdown menu or select **Refresh** to get new events.
### In Visual Studio
-In Visual Studio, there's also an Application Insights Search window. It's most useful for displaying telemetry events generated by the application that you're debugging. But it can also show the events collected from your published app at the Azure portal.
+In Visual Studio, there's also an **Application Insights Search** window. It's most useful for displaying telemetry events generated by the application that you're debugging. But it can also show the events collected from your published app at the Azure portal.
-Open the Search window in Visual Studio:
+Open the **Application Insights Search** window in Visual Studio:
-![Visual Studio open Application Insights search](./media/diagnostic-search/32.png)
+![Screenshot that shows Visual Studio open to Application Insights Search.](./media/diagnostic-search/32.png)
-The Search window has features similar to the web portal:
+The **Application Insights Search** window has features similar to the web portal:
-![Visual Studio Application Insights search window](./media/diagnostic-search/34.png)
+![Screenshot that shows Visual Studio Application Insights Search window.](./media/diagnostic-search/34.png)
-The Track Operation tab is available when you open a request or a page view. An 'operation' is a sequence of events that is associated with to a single request or page view. For example, dependency calls, exceptions, trace logs, and custom events might be part of a single operation. The Track Operation tab shows graphically the timing and duration of these events in relation to the request or page view.
+The **Track Operation** tab is available when you open a request or a page view. An "operation" is a sequence of events that's associated with a single request or page view. For example, dependency calls, exceptions, trace logs, and custom events might be part of a single operation. The **Track Operation** tab shows graphically the timing and duration of these events in relation to the request or page view.
## Inspect individual items

Select any telemetry item to see key fields and related items.
-![Screenshot of an individual dependency request](./media/diagnostic-search/telemetry-item.png)
+![Screenshot that shows an individual dependency request.](./media/diagnostic-search/telemetry-item.png)
-This will launch the end-to-end transaction details view.
+The end-to-end transaction details view opens.
## Filter event types
-Open the Event types' drop-down menu and choose the event types you want to see. (If, later, you want to restore the filters, click Reset.)
+Open the **Event types** dropdown menu and choose the event types you want to see. If you want to restore the filters later, select **Reset**.
The event types are:
-* **Trace** - [Diagnostic logs](./asp-net-trace-logs.md) including TrackTrace, log4Net, NLog, and System.Diagnostic.Trace calls.
-* **Request** - HTTP requests received by your server application, including pages, scripts, images, style files, and data. These events are used to create the request and response overview charts.
-* **Page View** - [Telemetry sent by the web client](./javascript.md), used to create page view reports.
-* **Custom Event** - If you inserted calls to TrackEvent() in order to [monitor usage](./api-custom-events-metrics.md), you can search them here.
-* **Exception** - Uncaught [exceptions in the server](./asp-net-exceptions.md), and those that you log by using TrackException().
-* **Dependency** - [Calls from your server application](./asp-net-dependencies.md) to other services such as REST APIs or databases, and AJAX calls from your [client code](./javascript.md).
-* **Availability** - Results of [availability tests](./monitor-web-app-availability.md).
+* **Trace**: [Diagnostic logs](./asp-net-trace-logs.md) including TrackTrace, log4Net, NLog, and System.Diagnostic.Trace calls.
+* **Request**: HTTP requests received by your server application including pages, scripts, images, style files, and data. These events are used to create the request and response overview charts.
+* **Page View**: [Telemetry sent by the web client](./javascript.md) used to create page view reports.
+* **Custom Event**: If you inserted calls to `TrackEvent()` to [monitor usage](./api-custom-events-metrics.md), you can search them here.
+* **Exception**: Uncaught [exceptions in the server](./asp-net-exceptions.md), and the exceptions that you log by using `TrackException()`.
+* **Dependency**: [Calls from your server application](./asp-net-dependencies.md) to other services such as REST APIs or databases, and AJAX calls from your [client code](./javascript.md).
+* **Availability**: Results of [availability tests](./monitor-web-app-availability.md).
## Filter on property values
-You can filter events on the values of their properties. The available properties depend on the event types you selected. Click on the filter icon ![Filter icon](./media/diagnostic-search/filter-icon.png) to start.
+You can filter events on the values of their properties. The available properties depend on the event types you selected. Select **Filter** ![Filter icon](./media/diagnostic-search/filter-icon.png) to start.
Choosing no values of a particular property has the same effect as choosing all values. It switches off filtering on that property.
Notice that the counts to the right of the filter values show how many occurrenc
## Find events with the same property
-To find all the items with the same property value, either type it into the search bar or click the checkbox when looking through properties in the filter tab.
+To find all the items with the same property value, either enter it in the **Search** box or select the checkbox when you look through properties on the **Filter** tab.
-![Click the checkbox of a property in the filter tab](./media/diagnostic-search/filter-property.png)
+![Screenshot that shows selecting the checkbox of a property on the Filter tab.](./media/diagnostic-search/filter-property.png)
## Search the data

> [!NOTE]
-> To write more complex queries, open [**Logs (Analytics)**](../logs/log-analytics-tutorial.md) from the top of the Search blade.
+> To write more complex queries, open [Logs (Analytics)](../logs/log-analytics-tutorial.md) at the top of the **Search** pane.
>
-You can search for terms in any of the property values. This is useful if you have written [custom events](./api-custom-events-metrics.md) with property values.
+You can search for terms in any of the property values. This capability is useful if you've written [custom events](./api-custom-events-metrics.md) with property values.
-You might want to set a time range, as searches over a shorter range are faster.
+You might want to set a time range because searches over a shorter range are faster.
-![Open diagnostic search](./media/diagnostic-search/search-property.png)
+![Screenshot that shows opening a diagnostic search.](./media/diagnostic-search/search-property.png)
Search for complete words, not substrings. Use quotation marks to enclose special characters.
Search for complete words, not substrings. Use quotation marks to enclose specia
| HomeController.About |`home`<br/>`controller`<br/>`out` | `homecontroller`<br/>`about`<br/>`"homecontroller.about"`|
|United States|`Uni`<br/>`ted`|`united`<br/>`states`<br/>`united AND states`<br/>`"united states"`|
-Here are the search expressions you can use:
+You can use the following search expressions:
| Sample query | Effect |
| | |
-| `apple` |Find all events in the time range whose fields include the word "apple" |
+| `apple` |Find all events in the time range whose fields include the word "apple". |
| `apple AND banana` <br/>`apple banana` |Find events that contain both words. Use capital "AND", not "and". <br/>Short form. |
| `apple OR banana` |Find events that contain either word. Use "OR", not "or". |
| `apple NOT banana` |Find events that contain one word but not the other. |

## Sampling
-If your app generates a large amount of telemetry (and you are using the ASP.NET SDK version 2.0.0-beta3 or later), the adaptive sampling module automatically reduces the volume that is sent to the portal by sending only a representative fraction of events. However, events that are related to the same request are selected or deselected as a group, so that you can navigate between related events.
+If your app generates a large amount of telemetry, and you're using the ASP.NET SDK version 2.0.0-beta3 or later, the adaptive sampling module automatically reduces the volume that's sent to the portal by sending only a representative fraction of events. Events that are related to the same request are selected or deselected as a group so that you can navigate between related events.
-[Learn about sampling](./sampling.md).
+Learn about [sampling](./sampling.md).
## Create work item

You can create a bug in GitHub or Azure DevOps with the details from any telemetry item.
-Go to the end-to-end transaction detail view by clicking on any telemetry item then select **Create work item**.
-
-![Click New Work Item, edit the fields, and then click OK.](./media/diagnostic-search/work-item.png)
+Go to the end-to-end transaction detail view by selecting any telemetry item. Then select **Create work item**.
-The first time you do this, you are asked to configure a link to your Azure DevOps organization and project.
+![Screenshot that shows Create work item.](./media/diagnostic-search/work-item.png)
-(You can also configure the link on the Work Items tab.)
+The first time you do this step, you're asked to configure a link to your Azure DevOps organization and project. You can also configure the link on the **Work Items** tab.
## Send more telemetry to Application Insights

In addition to the out-of-the-box telemetry sent by Application Insights SDK, you can:

* Capture log traces from your favorite logging framework in [.NET](./asp-net-trace-logs.md) or [Java](./java-in-process-agent.md#autocollected-logs). This means you can search through your log traces and correlate them with page views, exceptions, and other events.
+
* [Write code](./api-custom-events-metrics.md) to send custom events, page views, and exceptions.
-[Learn how to send logs and custom telemetry to Application Insights](./asp-net-trace-logs.md).
+Learn how to [send logs and custom telemetry to Application Insights](./asp-net-trace-logs.md).
## <a name="questions"></a>Q & A
+Find answers to common questions.
### <a name="limits"></a>How much data is retained?

See the [Limits summary](../service-limits.md#application-insights).
azure-monitor Container Insights Enable Existing Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-existing-clusters.md
To enable monitoring of your AKS cluster in the Azure portal from Azure Monitor,
3. On the **Monitor - containers** page, select **Unmonitored clusters**.
-4. From the list of unmonitored clusters, find the container in the list and click **Enable**.
+4. From the list of unmonitored clusters, find the cluster in the list and click **Enable**.
5. On the **Onboarding to Container insights** page, if you have an existing Log Analytics workspace in the same subscription as the cluster, select it from the drop-down list. The list preselects the default workspace and location that the AKS container is deployed to in the subscription.
azure-monitor Data Collection Rule Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-overview.md
The following resources describe different scenarios for creating data collectio
| Scenario | Resources | Description |
|:|:|:|
| Azure Monitor agent | [Configure data collection for the Azure Monitor agent](../agents/data-collection-rule-azure-monitor-agent.md) | Use the Azure portal to create a data collection rule that specifies events and performance counters to collect from a machine with the Azure Monitor agent and then apply that rule to one or more virtual machines. The Azure Monitor agent will be installed on any machines that don't currently have it. |
-| | [Use Azure Policy to install Azure Monitor agent and associate with DCR](../agents/azure-monitor-agent-manage.md#using-azure-policy) | Use Azure Policy to install the Azure Monitor agent and associate one or more data collection rules with any virtual machines or virtual machine scale sets as they're created in your subscription.
+| | [Use Azure Policy to install Azure Monitor agent and associate with DCR](../agents/azure-monitor-agent-manage.md#use-azure-policy) | Use Azure Policy to install the Azure Monitor agent and associate one or more data collection rules with any virtual machines or virtual machine scale sets as they're created in your subscription.
| Custom logs | [Configure custom logs using the Azure portal](../logs/tutorial-logs-ingestion-portal.md)<br>[Configure custom logs using Resource Manager templates and REST API](../logs/tutorial-logs-ingestion-api.md) | Send custom data using a REST API. The API call connects to a DCE and specifies a DCR to use. The DCR specifies the target table and potentially includes a transformation that filters and modifies the data before it's stored in a Log Analytics workspace. |
| Workspace transformation | [Configure ingestion-time transformations using the Azure portal](../logs/tutorial-workspace-transformations-portal.md)<br>[Configure ingestion-time transformations using Resource Manager templates and REST API](../logs/tutorial-workspace-transformations-api.md) | Create a transformation for any supported table in a Log Analytics workspace. The transformation is defined in a DCR that's then associated with the workspace and applied to any data sent to that table from a legacy workload that doesn't use a DCR. |
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
This latest update adds a new column and reorders the metrics to be alphabetical
||||||||
|AddRegion|Yes|Region Added|Count|Count|Region Added|Region|
|AutoscaleMaxThroughput|No|Autoscale Max Throughput|Count|Maximum|Autoscale Max Throughput|DatabaseName, CollectionName|
-|AvailableStorage|No|(deprecated) Available Storage|Bytes|Total|"Available Storage"will be removed from Azure Monitor at the end of September 2023. Cosmos DB collection storage size is now unlimited. The only restriction is that the storage size for each logical partition key is 20GB. You can enable PartitionKeyStatistics in Diagnostic Log to know the storage consumption for top partition keys. For more info about Cosmos DB storage quota, please check this doc https://docs.microsoft.com/azure/cosmos-db/concepts-limits. After deprecation, the remaining alert rules still defined on the deprecated metric will be automatically disabled post the deprecation date.|CollectionName, DatabaseName, Region|
+|AvailableStorage|No|(deprecated) Available Storage|Bytes|Total|"Available Storage" will be removed from Azure Monitor at the end of September 2023. Cosmos DB collection storage size is now unlimited. The only restriction is that the storage size for each logical partition key is 20 GB. You can enable PartitionKeyStatistics in Diagnostic Log to know the storage consumption for top partition keys. For more info about Cosmos DB storage quota, see [Azure Cosmos DB service quotas](/azure/cosmos-db/concepts-limits). After deprecation, the remaining alert rules still defined on the deprecated metric will be automatically disabled after the deprecation date.|CollectionName, DatabaseName, Region|
|CassandraConnectionClosures|No|Cassandra Connection Closures|Count|Total|Number of Cassandra connections that were closed, reported at a 1 minute granularity|APIType, Region, ClosureReason|
|CassandraConnectorAvgReplicationLatency|No|Cassandra Connector Average ReplicationLatency|MilliSeconds|Average|Cassandra Connector Average ReplicationLatency|No Dimensions|
|CassandraConnectorReplicationHealthStatus|No|Cassandra Connector Replication Health Status|Count|Count|Cassandra Connector Replication Health Status|NotStarted, ReplicationInProgress, Error|
azure-monitor Workspace Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/workspace-design.md
While a single [Log Analytics workspace](log-analytics-workspace-overview.md) ma
## Design strategy

Your design should always start with a single workspace, since this reduces the complexity of managing multiple workspaces and of querying data from them. There are no performance limitations from the amount of data in your workspace, and multiple services and data sources can send data to the same workspace. As you identify criteria to create additional workspaces, your design should use the fewest number that will match your particular requirements.
-Designing a workspace configuration includes evaluation of multiple criteria, some of which may in conflict. For example, you may be able to reduce egress charges by creating a separate workspace in each Azure region, but consolidating into a single workspace might allow you to reduce charges even more with a commitment tier. Evaluate each of the criteria below independently and consider your particular requirements and priorities in determining which design will be most effective for your particular environment.
+Designing a workspace configuration includes evaluation of multiple criteria, some of which may be in conflict. For example, you may be able to reduce egress charges by creating a separate workspace in each Azure region, but consolidating into a single workspace might allow you to reduce charges even more with a commitment tier. Evaluate each of the criteria below independently and consider your particular requirements and priorities in determining which design will be most effective for your particular environment.
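To make the trade-off concrete, here's a toy comparison with entirely hypothetical prices and volumes; substitute your own numbers from the pricing page:

```python
# Toy comparison, hypothetical numbers only: per-region workspaces avoid
# egress charges, but consolidation may unlock a cheaper commitment tier.
daily_gb = {"eastus": 80, "westeurope": 40}   # hypothetical daily ingestion
pay_as_you_go = 2.76                          # hypothetical $/GB
commitment_tier = 2.30                        # hypothetical $/GB at 100 GB/day
egress_per_gb = 0.05                          # hypothetical cross-region $/GB

split = sum(gb * pay_as_you_go for gb in daily_gb.values())
consolidated = sum(daily_gb.values()) * commitment_tier \
    + daily_gb["westeurope"] * egress_per_gb  # data shipped to one region
print(f"split: ${split:.2f}/day, consolidated: ${consolidated:.2f}/day")
# split: $331.20/day, consolidated: $278.00/day -- with these made-up
# numbers consolidation wins; different volumes could favor the split design.
```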
## Design criteria
azure-signalr Signalr Howto Reverse Proxy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-reverse-proxy-overview.md
+
+ Title: How to integrate Azure SignalR with reverse proxies
+description: This article provides information about the general practice of integrating Azure SignalR with reverse proxies
++ Last updated : 08/16/2022++++
+# How to integrate Azure SignalR with reverse proxies
+
+A reverse proxy server can be used in front of Azure SignalR Service. Reverse proxy servers sit between the clients and Azure SignalR Service and can help in various scenarios. For example, reverse proxy servers can load balance different client requests to different backend services, apply different routing rules to different client requests, and provide a seamless user experience for users accessing different backend services. They can also protect your backend servers from common exploits and vulnerabilities with centralized protection control. Services such as [Azure Application Gateway](/azure/application-gateway/overview), [Azure API Management](/azure/api-management/api-management-key-concepts), or [Akamai](https://www.akamai.com) can act as reverse proxy servers.
+
+A common architecture using a reverse proxy server with Azure SignalR is shown below:
++
+## General practices
+There are several general practices to follow when using a reverse proxy in front of SignalR Service.
+
+* Make sure to rewrite the incoming HTTP [HOST header](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Host) with the Azure SignalR service URL, for example, `https://demo.service.signalr.net`. Azure SignalR is a multi-tenant service, and it relies on the `HOST` header to resolve to the correct endpoint. For example, when [configuring Application Gateway](./signalr-howto-work-with-app-gateway.md#create-an-application-gateway-instance) for Azure SignalR, select **Yes** for the option *Override with new host name*. The first sketch after this list illustrates the rewrite.
+
+* When your client goes through your reverse proxy to Azure SignalR, set `ClientEndpoint` as your reverse proxy URL. When your client *negotiates* with your hub server, the hub server returns the URL defined in `ClientEndpoint` for your client to connect to. For more information, see [Client and server endpoints](./concept-connection-string.md#client-and-server-endpoints).
+
+ There are two ways to configure `ClientEndpoint`:
+ * Add a `ClientEndpoint` section to your ConnectionString: `Endpoint=...;AccessKey=...;ClientEndpoint=<reverse-proxy-URL>`
+ * Configure `ClientEndpoint` when calling `AddAzureSignalR`:
+
+ ```cs
+ services.AddSignalR().AddAzureSignalR(o =>
+ {
+ o.Endpoints = new Microsoft.Azure.SignalR.ServiceEndpoint[1]
+ {
+ new Microsoft.Azure.SignalR.ServiceEndpoint("<azure-signalr-connection-string>")
+ {
+ ClientEndpoint = new Uri("<reverse-proxy-URL>")
+ }
+ };
+ })
+ ```
+
+* When a client goes through your reverse proxy to Azure SignalR, there are two types of requests:
+ * An HTTP POST request to `<reverse-proxy-URL>/client/negotiate`, which we call the **negotiate request**
+ * A WebSocket/SSE/LongPolling connection request, depending on your transport type, to `<reverse-proxy-URL>/client`, which we call the **connect request**
+
+ Make sure that your reverse proxy supports both transport types for the `/client` subpath. For example, when your transport type is WebSocket, make sure your reverse proxy supports both HTTP and WebSocket for the `/client` subpath.
+
+ If you have configured multiple SignalR services behind your reverse proxy, make sure the `negotiate` request and the `connect` request with the same `asrs_request_id` query parameter (meaning they're for the same connection) are routed to the same SignalR service instance, as the second sketch after this list illustrates.
+
+* When a reverse proxy is used, you can further secure your SignalR service by [disabling public network access](./howto-network-access-control.md) and using [private endpoints](howto-private-endpoints.md) to allow only private access from your reverse proxy to your SignalR service through a VNet.
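To make the first practice concrete, here's a minimal sketch of the HOST-header rewrite for the negotiate request. It's illustrative only and makes several assumptions: it uses the `aiohttp` package, a placeholder service host `demo.service.signalr.net`, and proxies only the HTTP negotiate call (a real reverse proxy must also handle the WebSocket/SSE/LongPolling connect request):

```python
# Minimal sketch, not production code: forward /client/negotiate to Azure
# SignalR, rewriting the HOST header. Assumes the aiohttp package and a
# placeholder service host; WebSocket proxying is intentionally omitted.
from aiohttp import web, ClientSession

SIGNALR_HOST = "demo.service.signalr.net"  # placeholder service host

async def negotiate(request: web.Request) -> web.Response:
    async with ClientSession() as session:
        # Rebuilding the request against the service URL makes the client
        # library derive the Host header from it -- this is the rewrite.
        async with session.post(
            f"https://{SIGNALR_HOST}{request.path_qs}",
            data=await request.read(),
            headers={"Content-Type": request.headers.get("Content-Type", "")},
        ) as resp:
            return web.Response(status=resp.status, body=await resp.read())

app = web.Application()
app.router.add_post("/client/negotiate", negotiate)

if __name__ == "__main__":
    web.run_app(app, port=8080)
```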
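And for the multiple-instance case, here's a small sketch (hypothetical helper and instance names) of routing both requests of one connection to the same instance by hashing `asrs_request_id`:

```python
# Minimal sketch: pick a backend deterministically from asrs_request_id so
# the negotiate and connect requests of one connection land together.
import hashlib
from urllib.parse import parse_qs, urlparse

BACKENDS = [  # hypothetical SignalR instances behind the proxy
    "https://asrs1.service.signalr.net",
    "https://asrs2.service.signalr.net",
]

def pick_backend(request_url: str) -> str:
    query = parse_qs(urlparse(request_url).query)
    request_id = query.get("asrs_request_id", [""])[0]
    digest = hashlib.sha256(request_id.encode()).digest()
    return BACKENDS[digest[0] % len(BACKENDS)]

# Both requests of the same connection route to the same instance:
assert pick_backend("https://proxy.contoso.com/client/negotiate?asrs_request_id=abc") == \
       pick_backend("https://proxy.contoso.com/client?asrs_request_id=abc")
```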
+
+## Next steps
+
+- Learn [how to work with Application Gateway](./signalr-howto-work-with-app-gateway.md).
+
+- Learn more about [the internals of Azure SignalR](./signalr-concept-internals.md).
azure-signalr Signalr Howto Work With App Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-work-with-app-gateway.md
+
+ Title: How to use SignalR Service with Azure Application Gateway
+description: This article provides information about using Azure SignalR Service with Azure Application Gateway.
++ Last updated : 08/16/2022++++
+# How to use Azure SignalR Service with Azure Application Gateway
+
+Application Gateway is a web traffic load balancer that enables you to manage traffic to your web applications. Using Application Gateway with SignalR Service enables you to do the following:
+
+* Protect your applications from common web vulnerabilities.
+* Get application-level load-balancing for your scalable and highly available applications.
+* Set up end-to-end security.
+* Customize the domain name.
+
+This article contains two parts:
+* [The first part](#set-up-and-configure-application-gateway) shows how to configure Application Gateway so that the clients can access SignalR through Application Gateway.
+* [The second part](#secure-signalr-service) shows how to secure SignalR Service by adding access control to SignalR Service and only allow traffic from Application Gateway.
++
+## Set up and configure Application Gateway
+
+### Create a SignalR Service instance
+* Follow [the article](./signalr-quickstart-azure-signalr-service-arm-template.md) and create a SignalR Service instance **_ASRS1_**
+
+### Create an Application Gateway instance
+From the portal, create an Application Gateway instance **_AG1_**:
+* On the [Azure portal](https://portal.azure.com/), search for **Application Gateway** and **Create**.
+* On the **Basics** tab, use these values for the following application gateway settings:
+ - **Subscription** and **Resource group** and **Region**: the same as what you choose for SignalR Service
+ - **Application gateway name**: **_AG1_**
+ - **Virtual network**, select **Create new**, and in the **Create virtual network** window that opens, enter the following values to create the virtual network and two subnets, one for the application gateway, and another for the backend servers.
+ - **Name**: Enter **_VN1_** for the name of the virtual network.
+ - **Subnets**: Update the **Subnets** grid with the following two subnets:
+
+ | Subnet name | Address range| Note|
+ |--|--|--|
+ | *myAGSubnet* | (address range) | Subnet for the application gateway. The application gateway subnet can contain only application gateways. No other resources are allowed.
+ | *myBackendSubnet* | (another address range) | Subnet for the Azure SignalR instance.
+
+ - Accept the default values for the other settings and then select **Next: Frontends**
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/basics.png" alt-text="Screenshot of creating Application Gateway instance with Basics tab.":::
+
+* On the **Frontends** tab:
+ - **Frontend IP address type**: **Public**.
+ - Select **Add new** for the **Public IP address** and enter *myAGPublicIPAddress* for the public IP address name, and then select **OK**.
+ - Select **Next: Backends**
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-frontends.png" alt-text="Screenshot of creating Application Gateway instance with Frontends tab.":::
+
+* On the **Backends** tab, select **Add a backend pool**:
+ - **Name**: Enter **_signalr_** for the SignalR Service resource backend pool.
+ - Backend targets **Target**: the **host name** of your SignalR Service instance **_ASRS1_**, for example `asrs1.service.signalr.net`
+ - Select **Next: Configuration**
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-backends.png" alt-text="Screenshot of setting up the application gateway backend pool for the SignalR Service.":::
+
+* On the **Configuration** tab, select **Add a routing rule** in the **Routing rules** column:
+ - **Rule name**: **_myRoutingRule_**
+ - **Priority**: 1
+ - On the **Listener** tab within the **Add a routing rule** window, enter the following values for the listener:
+ - **Listener name**: Enter *myListener* for the name of the listener.
+ - **Frontend IP**: Select **Public** to choose the public IP you created for the frontend.
+ - **Protocol**: HTTP
+ * We use the HTTP frontend protocol on Application Gateway in this article to simplify the demo and help you get started more easily. But in production scenarios, you'll likely need to enable HTTPS and a custom domain on it.
+ - Accept the default values for the other settings on the **Listener** tab
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-rule-listener.png" alt-text="Screenshot of setting up the application gateway routing rule listener tab for the SignalR Service.":::
+ - On the **Backend targets** tab, use the following values:
+ * **Target type**: Backend pool
+ * **Backend target**: select **signalr** we previously created
+ * **Backend settings**: select **Add new** to add a new setting.
+ * **Backend settings name**: *mySetting*
+ * **Backend protocol**: **HTTPS**
+ * **Use well known CA certificate**: **Yes**
+ * **Override with new host name**: **Yes**
+ * **Host name override**: **Pick host name from backend target**
+ * Keep the default values for the other settings
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-setup-backend.png" alt-text="Screenshot of setting up the application gateway backend setting for the SignalR Service.":::
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-rule-backends.png" alt-text="Screenshot of creating backend targets for application gateway.":::
+
+* Review and create the **_AG1_**
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-review.png" alt-text="Screenshot of reviewing and creating the application gateway instance.":::
+
+### Configure Application Gateway health probe
+
+When **_AG1_** is created, go to the **Health probes** tab under the **Settings** section in the portal, and change the health probe path to `/api/health`.
++
+### Quick test
+
+* Try an invalid client request such as https://asrs1.service.signalr.net/client. It returns *400* with the error message *'hub' query parameter is required*, which means the request arrived at SignalR Service and passed request validation.
+ ```bash
+ curl -v https://asrs1.service.signalr.net/client
+ ```
+ returns
+ ```
+ < HTTP/1.1 400 Bad Request
+ < ...
+ <
+ 'hub' query parameter is required.
+ ```
+* Go to the **Overview** tab of **_AG1_** and find the frontend public IP address.
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/quick-test.png" alt-text="Screenshot of quick testing SignalR Service health endpoint through Application Gateway.":::
+
+* Visit the same endpoint through **_AG1_** at `http://<frontend-public-IP-address>/client`. It also returns *400* with the error message *'hub' query parameter is required*, which means the request successfully went through Application Gateway to SignalR Service and passed request validation.
+
+ ```bash
+ curl -I http://<frontend-public-IP-address>/client
+ ```
+ returns
+ ```
+ < HTTP/1.1 400 Bad Request
+ < ...
+ <
+ 'hub' query parameter is required.
+ ```
+
+### Run chat through Application Gateway
+
+Now, the traffic can reach SignalR Service through the Application Gateway. The customer could use the Application Gateway public IP address or custom domain name to access the resource. Let's use [this chat application](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/ChatRoom) as an example. Let's start with running it locally.
+
+* First let's get the connection string of **_ASRS1_**
+ * On the **Connection strings** tab of **_ASRS1_**
+ * **Client endpoint**: Enter the URL using the frontend public IP address of **_AG1_**, for example `http://20.88.8.8`. This field acts as a connection string generator when you use reverse proxies, and the value isn't preserved the next time you come back to this tab. When a value is entered, the connection string appends a `ClientEndpoint` section.
+ * Copy the Connection string
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/connection-string.png" alt-text="Screenshot of getting the connection string for SignalR Service with client endpoint.":::
+
+* Clone the GitHub repo https://github.com/aspnet/AzureSignalR-samples
+* Go to the samples/Chatroom folder:
+* Set the copied connection string and run the application locally. You can see that there's a `ClientEndpoint` section in the connection string.
+
+ ```bash
+ cd samples/Chatroom
+ dotnet restore
+ dotnet user-secrets set Azure:SignalR:ConnectionString "<copied-connection-string-with-client-endpoint>"
+ dotnet run
+ ```
+* Open http://localhost:5000 in the browser and use F12 to view the network traces. You can see that the WebSocket connection is established through **_AG1_**.
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/chat-local-run.png" alt-text="Screenshot of running chat application locally with App Gateway and SignalR Service.":::
+
+## Secure SignalR Service
+
+In the previous section, we successfully configured SignalR Service as the backend service of Application Gateway. We can call SignalR Service directly from the public network or through Application Gateway.
+
+In this section, let's configure SignalR Service to deny all the traffic from the public network and only accept traffic from Application Gateway.
+
+### Configure SignalR Service
+
+Let's configure SignalR Service to only allow private access. You can find more details in [use private endpoint for SignalR Service](howto-private-endpoints.md).
+
+* Go to the SignalR Service instance **_ASRS1_** in the portal.
+* Go to the **Networking** tab:
+ * On the **Public access** tab, change **Public network access** to **Disabled** and select **Save**. Now you're no longer able to access SignalR Service from the public network.
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/disable-public-access.png" alt-text="Screenshot of disabling public access for SignalR Service.":::
+
+ * On **Private access** tab, select **+ Private endpoint**:
+ * On **Basics** tab:
+ * **Name**: **_PE1_**
+ * **Network Interface Name**: **_PE1-nic_**
+ * **Region**: make sure to choose the same region as your Application Gateway
+ * Select **Next: Resources**
+ * On **Resources** tab
+ * Keep default values
+ * Select **Next: Virtual Network**
+ * On **Virtual Network** tab
+ * **Virtual network**: Select previously created **_VN1_**
+ * **Subnet**: Select previously created **_VN1/myBackendSubnet_**
+ * Keep the default values for the other settings
+ * Select **Next: DNS**
+ * On **DNS** tab
+ * **Integration with private DNS zone**: **Yes**
+ * Review and create the private endpoint
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-setup-private-endpoint.png" alt-text="Screenshot of setting up the private endpoint resource for the SignalR Service.":::
+
+### Refresh Application Gateway backend pool
+Since Application Gateway was set up before there was a private endpoint for it to use, we need to **refresh** the backend pool for it to look at the Private DNS Zone and figure out that it should route the traffic to the private endpoint instead of the public address. We do the **refresh** by setting the backend FQDN to some other value and then changing it back.
+
+Go to the **Backend pools** tab for **_AG1_**, and select **signalr**:
+* Step 1: Change the target `asrs1.service.signalr.net` to some other value, for example, `x.service.signalr.net`, and select **Save**.
+* Step 2: Change the target back to `asrs1.service.signalr.net`.
+
+### Quick test
+
+* Now let's visit https://asrs1.service.signalr.net/client again. With public access disabled, it returns *403* instead.
+ ```bash
+ curl -v https://asrs1.service.signalr.net/client
+ ```
+ returns
+ ```
+ < HTTP/1.1 403 Forbidden
+* Visit the endpoint through **_AG1_** at `http://<frontend-public-IP-address>/client`. It returns *400* with the error message *'hub' query parameter is required*, which means the request successfully went through Application Gateway to SignalR Service.
+
+ ```bash
+ curl -I http://<frontend-public-IP-address>/client
+ ```
+ returns
+ ```
+ < HTTP/1.1 400 Bad Request
+ < ...
+ <
+ 'hub' query parameter is required.
+ ```
+
+Now if you run the Chat application locally again, you'll see the error message `Failed to connect to .... The server returned status code '403' when status code '101' was expected.` It's because public access is disabled, so localhost server connections are no longer able to connect to the SignalR service.
+
+Let's deploy the Chat application into the same VNet as **_ASRS1_** so that the chat application can talk to **_ASRS1_**.
+
+### Deploy the chat application to Azure
+* On the [Azure portal](https://portal.azure.com/), search for **App services** and **Create**.
+
+* On the **Basics** tab, use these values for the following App Service settings:
+ - **Subscription** and **Resource group** and **Region**: the same as what you choose for SignalR Service
+ - **Name**: **_WA1_**
+ * **Publish**: **Code**
+ * **Runtime stack**: **.NET 6 (LTS)**
+ * **Operating System**: **Linux**
+ * **Region**: Make sure it's the same as what you choose for SignalR Service
+ * Select **Next: Docker**
+* On the **Networking** tab
+ * **Enable network injection**: select **On**
+ * **Virtual Network**: select **_VN1_** we previously created
+ * **Enable VNet integration**: **On**
+ * **Outbound subnet**: create a new subnet
+ * Select **Review + create**
+
+Now let's deploy our chat application to Azure. Below, we use the Azure CLI to deploy the web app; you can also choose other deployment environments by following the [publish your web app section](/azure/app-service/quickstart-dotnetcore#publish-your-web-app).
+
+Under the samples/Chatroom folder, run the following commands:
+
+```bash
+# Build and publish the assemblies to publish folder
+dotnet publish -os linux -o publish
+# zip the publish folder as app.zip
+cd publish
+zip -r app.zip .
+# use az CLI to deploy app.zip to our webapp
+az login
+az account set -s <your-subscription-name-used-to-create-WA1>
+az webapp deployment source config-zip -n WA1 -g <resource-group-of-WA1> --src app.zip
+```
+
+Now that the web app is deployed, let's go to the portal for **_WA1_** and make the following updates:
+* On the **Configuration** tab:
+ * New application settings:
+
+ | Name | Value |
+ | --| |
+ |**WEBSITE_DNS_SERVER**| **168.63.129.16** |
+ |**WEBSITE_VNET_ROUTE_ALL**| **1**|
+
+ * New connection string:
+
+ | Name | Value | Type|
+ | --| ||
+ |**Azure__SignalR__ConnectionString**| The copied connection string with ClientEndpoint value| select **Custom**|
+++
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/web-app-settings.png" alt-text="Screenshot of configuring web app connection string.":::
+
+* On the **TLS/SSL settings** tab:
+ * **HTTPS Only**: **Off**. To simplify the demo, we used the HTTP frontend protocol on Application Gateway. Therefore, we need to turn off this option to avoid changing the HTTP URL to HTTPS automatically.
+
+* Go to the **Overview** tab and get the URL of **_WA1_**.
+* Get the URL and replace the scheme https with http, for example, http://wa1.azurewebsites.net. Open the URL in the browser, and now you can start chatting! Use F12 to open network traces, and you can see that the SignalR connection is established through **_AG1_**.
+ > [!NOTE]
+ >
+ > Sometimes you need to disable the browser's automatic HTTPS redirection and clear the browser cache to prevent the URL from redirecting to HTTPS automatically.
++
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/web-app-run.png" alt-text="Screenshot of running chat application in Azure with App Gateway and SignalR Service.":::
+
+## Next steps
+
+Now, you have successfully built a real-time chat application with SignalR Service and used Application Gateway to protect your applications and set up end-to-end security. [Learn more about SignalR Service](./signalr-overview.md).
azure-video-indexer Switch Tenants Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/switch-tenants-portal.md
+
+ Title: Switch between tenants on the Azure Video Indexer website
+description: This article shows how to switch between tenants in the Azure Video Indexer website.
+ Last updated : 08/26/2022++
+# Switch between multiple tenants
+
+This article shows how to switch between multiple tenants on the Azure Video Indexer website. When you create an Azure Resource Manager (ARM)-based account, the new account may not show up on the Azure Video Indexer website. So you need to make sure to sign in with the correct domain.
+
+The article shows how to sign in to the Azure Video Indexer website with the correct domain name:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) with the same subscription where your Video Indexer ARM account was created.
+1. Get the domain name of the current Azure subscription tenant.
+1. Sign in with the correct domain name on the [Azure Video Indexer](https://www.videoindexer.ai/) website.
+
+## Get the domain name from the Azure portal
+
+1. In the [Azure portal](https://portal.azure.com/), sign in with the same subscription tenant in which your Azure Video Indexer Azure Resource Manager (ARM) account was created.
+1. Hover over your account name (in the right-top corner).
+
+ > [!div class="mx-imgBorder"]
+ > ![Hover over your account name.](./media/switch-directory/account-attributes.png)
+1. Get the domain name of the current Azure subscription. You'll need it for the last step of the following section.
+
+If you want to see domains for all of your directories and switch between them, see [Switch and manage directories with the Azure portal](../azure-portal/set-preferences.md#switch-and-manage-directories).
+
+## Sign in with the correct domain name on the AVI website
+
+1. Go to the [Azure Video Indexer](https://www.videoindexer.ai/) website.
+1. Select the button in the top-right corner, and then press **Sign out**.
+1. On the AVI website, press **Sign in** and choose the Azure AD account.
+
+ > [!div class="mx-imgBorder"]
+ > ![Sign in with the AAD account.](./media/switch-directory/choose-account.png)
+1. Press **Use another account**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Choose another account.](./media/switch-directory/use-another-account.png)
+1. Choose **Sign-in with other options**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Sign in with other options.](./media/switch-directory/sign-in-options.png)
+1. Press **Sign in to an organization**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Sign in to an organization.](./media/switch-directory/sign-in-organization.png)
+1. Enter the domain name you copied in the [Get the domain name from the Azure portal](#get-the-domain-name-from-the-azure-portal) section.
+
+ > [!div class="mx-imgBorder"]
+ > ![Find the organization.](./media/switch-directory/find-your-organization.png)
+
+## Next steps
+
+[FAQ](faq.yml)
azure-vmware Move Azure Vmware Solution Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/move-azure-vmware-solution-across-regions.md
Azure VMware Solution supports all backup solutions. You'll need CloudAdmin priv
- VM workload backup using [Veritas NetBackup solution](https://vrt.as/nb4avs). >[!TIP]
->You can use [Azure Resource Mover](../resource-mover/move-region-within-resource-group.md?toc=%2fazure%2fazure-resource-manager%2fmanagement%2ftoc.json) to verify and migrate the list of supported resources to move across regions, which are dependent on Azure VMware Solution.
+>You can use [Azure Resource Mover](../resource-mover/move-region-within-resource-group.md?toc=/azure/azure-resource-manager/management/toc.json) to verify and migrate the list of supported resources to move across regions, which are dependent on Azure VMware Solution.
### Locate the source ExpressRoute circuit ID
azure-vmware Tutorial Configure Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-configure-networking.md
Before selecting an existing vNet, there are specific requirements that must be
1. In the same region as Azure VMware Solution private cloud. 1. In the same resource group as Azure VMware Solution private cloud. 1. vNet must contain an address space that doesn't overlap with Azure VMware Solution.
-1. Validate solution design is within Azure VMware Solution limits (https://docs.microsoft.com/azure/azure-resource-manager/management/azure-subscription-service-limits).
+1. Validate that the solution design is within [Azure VMware Solution limits](/azure/azure-resource-manager/management/azure-subscription-service-limits).
### Select an existing vNet
cloud-services Cloud Services Nodejs Develop Deploy App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-nodejs-develop-deploy-app.md
For more information, see the [Node.js Developer Center].
[Azure SDK for .NET 3.0]: https://www.microsoft.com/download/details.aspx?id=54917 [Connect PowerShell]: /powershell/azure/ [nodejs.org]: https://nodejs.org/
-[Overview of Creating a Hosted Service for Azure]: https://azure.microsoft.com/documentation/services/cloud-services/
+[Overview of Creating a Hosted Service for Azure]: /azure/cloud-services/
[Node.js Developer Center]: https://azure.microsoft.com/develop/nodejs/ <!-- IMG List -->
For more information, see the [Node.js Developer Center].
[A browser window displaying the hello world page; the URL indicates the page is hosted on Azure.]: ./media/cloud-services-nodejs-develop-deploy-app/node21.png [The status of the Stop-AzureService command]: ./media/cloud-services-nodejs-develop-deploy-app/node48.png [The status of the Remove-AzureService command]: ./media/cloud-services-nodejs-develop-deploy-app/node49.png---
cognitive-services Multivariate Anomaly Detection Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/tutorials/multivariate-anomaly-detection-synapse.md
If you have the need to run training code and inference code in separate noteboo
* Learn about [what is Multivariate Anomaly Detector](../overview-multivariate.md). * SynapseML documentation with [Multivariate Anomaly Detector feature](https://microsoft.github.io/SynapseML/docs/documentation/estimators/estimators_cognitive/#fitmultivariateanomaly).
-* Recipe: [Cognitive Services - Multivariate Anomaly Detector](https://microsoft.github.io/SynapseML/docs/next/features/cognitive_services/CognitiveServices).
+* Recipe: [Cognitive Services - Multivariate Anomaly Detector](https://microsoft.github.io/SynapseML/docs/features/cognitive_services/CognitiveServices%20-%20Multivariate%20Anomaly%20Detection/).
* Need support? [Join the Anomaly Detector Community](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR2Ci-wb6-iNDoBoNxrnEk9VURjNXUU1VREpOT0U1UEdURkc0OVRLSkZBNC4u). ### About Synapse
If you have the need to run training code and inference code in separate noteboo
* Quick start: [Configure prerequisites for using Cognitive Services in Azure Synapse Analytics](/azure/synapse-analytics/machine-learning/tutorial-configure-cognitive-services-synapse#create-a-key-vault-and-configure-secrets-and-access). * Visit [SynpaseML new website](https://microsoft.github.io/SynapseML/) for the latest docs, demos, and examples. * Learn more about [Synapse Analytics](/azure/synapse-analytics/).
-* Read about the [SynapseML v0.9.5 release](https://github.com/microsoft/SynapseML/releases/tag/v0.9.5) on GitHub.
+* Read about the [SynapseML v0.9.5 release](https://github.com/microsoft/SynapseML/releases/tag/v0.9.5) on GitHub.
cognitive-services Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/quickstarts-sdk/client-library.md
Last updated 06/13/2022
ms.devlang: csharp, golang, java, javascript, python
-zone_pivot_groups: programming-languages-computer-vision
+zone_pivot_groups: programming-languages-ocr
keywords: computer vision, computer vision service
Get started with the Computer Vision Read REST API or client libraries. The Read
::: zone-end

---

::: zone pivot="programming-language-javascript"

[!INCLUDE [NodeJS SDK quickstart](../includes/quickstarts-sdk/node-sdk.md)]
cognitive-services How To Select Audio Input Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-select-audio-input-devices.md
void enumerateDeviceIds()
promise.Completed( [](winrt::Windows::Foundation::IAsyncOperation<DeviceInformationCollection> const& sender,
- winrt::Windows::Foundation::AsyncStatus /* asyncStatus */ ) {
+ winrt::Windows::Foundation::AsyncStatus /* asyncStatus */) {
auto info = sender.GetResults(); auto num_devices = info.Size();
cognitive-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md
Title: Azure OpenAI Models
+ Title: Azure OpenAI models
-description: Learn about the different AI models that are available.
+description: Learn about the different models that are available in Azure OpenAI.
Last updated 06/24/2022
recommendations: false
keywords:
-# Azure OpenAI Models
+# Azure OpenAI models
-The service provides access to many different models. Models describe a family of models and are broken out as follows:
+The service provides access to many different models, grouped by family and capability. A model family typically associates models by their intended task. The following table describes model families currently available in Azure OpenAI.
-|Modes | Description|
+| Model family | Description |
|--|--|
-| GPT-3 series | A set of GPT-3 models that can understand and generate natural language |
-| Codex Series | A set of models that can understand and generate code, including translating natural language to code |
-| Embeddings Series | An embedding is a special format of data representation that can be easily utilized by machine learning models and algorithms. The embedding is an information dense representation of the semantic meaning of a piece of text. Currently we offer three families of embedding models for different functionalities: text search, text similarity and code search |
+| [GPT-3](#gpt-3-models) | A series of models that can understand and generate natural language. |
+| [Codex](#codex-models) | A series of models that can understand and generate code, including translating natural language to code. |
+| [Embeddings](#embeddings-models) | A set of models that can understand and use embeddings. An embedding is a special format of data representation that can be easily utilized by machine learning models and algorithms. The embedding is an information dense representation of the semantic meaning of a piece of text. Currently, we offer three families of Embeddings models for different functionalities: similarity, text search, and code search. |
+
+## Model capabilities
+
+Each model family has a series of models that are further distinguished by capability. These capabilities are typically identified by names, and the alphabetical order of these names generally signifies the relative capability and cost of that model within a given model family. For example, GPT-3 models use names such as Ada, Babbage, Curie, and Davinci to indicate relative capability and cost. Davinci is more capable (at a higher cost) than Curie, which in turn is more capable (at a higher cost) than Babbage, and so on.
+
+> [!NOTE]
+> Any task that can be performed by a less capable model like Ada can be performed by a more capable model like Curie or Davinci.
## Naming convention
-Azure OpenAI's models follow a standard naming convention: `{task}-{model name}-{version #}`. For example, our most powerful natural language model is called `text-davinci-001` and a Codex series model would look like `code-cushman-001`.
+Azure OpenAI's model names typically correspond to the following standard naming convention:
+
+`{family}-{capability}[-{input-type}]-{identifier}`
+
+| Element | Description |
+| | |
+| `{family}` | The family of the model. For example, [GPT-3 models](#gpt-3-models) use `text`, while [Codex models](#codex-models) use `code`.|
+| `{capability}` | The relative capability of the model. For example, GPT-3 models include `ada`, `babbage`, `curie`, and `davinci`.|
+| `{input-type}` | ([Embeddings models](#embeddings-models) only) The input type of the embedding supported by the model. For example, text search embedding models support `doc` and `query`.|
+| `{identifier}` | The version identifier of the model. |
-> Older versions of the GPT-3 models are available as `ada`, `babbage`, `curie`, `davinci` and do not follow these conventions. These models are primarily intended to be used for fine-tuning and search.
+For example, our most powerful GPT-3 model is called `text-davinci-002`, while our most powerful Codex model is called `code-davinci-002`.
+
+> Older versions of the GPT-3 models are available, named `ada`, `babbage`, `curie`, and `davinci`. These older models do not follow the standard naming conventions, and they are primarily intended for fine-tuning. For more information, see [Learn how to customize a model for your application](../how-to/fine-tuning.md).
## Finding what models are available You can easily see the models you have available for both inference and fine-tuning in your resource by using the [Models API](../reference.md#models).
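As a rough sketch of what listing models can look like over REST (the endpoint shape, `api-version` value, and response field names are assumptions; check the Models API reference for the authoritative contract):

```python
import os
import requests

resource = os.environ["AZURE_OPENAI_RESOURCE"]  # placeholder resource name
api_key = os.environ["AZURE_OPENAI_KEY"]

response = requests.get(
    f"https://{resource}.openai.azure.com/openai/models",
    headers={"api-key": api_key},
    params={"api-version": "2022-06-01-preview"},  # assumed API version
)
response.raise_for_status()

# Field names below are illustrative; inspect the raw JSON for your account.
for model in response.json().get("data", []):
    print(model.get("id"), model.get("capabilities"))
```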
+## Finding the right model
+
+We recommend starting with the most capable model in a model family because it's the best way to understand what the service is capable of. After you have an idea of what you want to accomplish, you can either stay with that model or move to a model with lower capability and cost, optimizing around that model's capabilities.
-## GPT-3 Series
+## GPT-3 models
-The GPT-3 models can understand and generate natural language. The service offers four model types with different levels of power suitable for different tasks. Davinci is the most capable model, and Ada is the fastest. Going forward these models are named with the following convention: `text-{model name}-XXX` where `XXX` refers to a numerical value for different versions of the model. Currently the latest versions are:
+The GPT-3 models can understand and generate natural language. The service offers four model capabilities, each with different levels of power and speed suitable for different tasks. Davinci is the most capable model, while Ada is the fastest. The following list represents the latest versions of GPT-3 models, ordered by increasing capability.
-- text-ada-001-- text-babbage-001-- text-curie-001-- text-davinci-001
+- `text-ada-001`
+- `text-babbage-001`
+- `text-curie-001`
+- `text-davinci-002`
-While Davinci is the most capable, the other models provide significant speed advantages. Our recommendation is for users to start with Davinci while experimenting since it will produce the best results and validate the value our service can provide. Once you have a prototype working, you can then optimize your model choice with the best latency - performance tradeoff for your application.
+While Davinci is the most capable, the other models provide significant speed advantages. Our recommendation is for users to start with Davinci while experimenting, because it will produce the best results and validate the value our service can provide. Once you have a prototype working, you can then optimize your model choice with the best latency/performance balance for your application.
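For example, a first experiment might send a prompt to a Davinci deployment with the Python SDK. This is a minimal sketch: the endpoint, API version, and deployment name (assumed here to match `text-davinci-002`) are placeholders you'd replace with your own.

```python
import os
import openai

# Placeholder Azure OpenAI configuration.
openai.api_type = "azure"
openai.api_base = "https://<your-resource-name>.openai.azure.com/"
openai.api_version = "2022-06-01-preview"  # assumed API version
openai.api_key = os.environ["AZURE_OPENAI_KEY"]

# 'engine' is your deployment name; assumed to be a text-davinci-002 deployment.
response = openai.Completion.create(
    engine="text-davinci-002",
    prompt="Summarize the benefits of unit testing in two sentences.",
    max_tokens=64,
    temperature=0.7,
)
print(response["choices"][0]["text"].strip())
```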
-### Davinci
+### <a id="gpt-3-davinci"></a>Davinci
-Davinci is the most capable model and can perform any task the other models can perform and often with less instruction. For applications requiring deep understanding of the content, like summarization for a specific audience and creative content generation, Davinci is going to produce the best results. These increased capabilities require more compute resources, so Davinci costs more and isn't as fast as the other models.
+Davinci is the most capable model and can perform any task the other models can perform, often with less instruction. For applications requiring deep understanding of the content, like summarization for a specific audience and creative content generation, Davinci produces the best results. The increased capabilities provided by Davinci require more compute resources, so Davinci costs more and isn't as fast as other models.
Another area where Davinci excels is in understanding the intent of text. Davinci is excellent at solving many kinds of logic problems and explaining the motives of characters. Davinci has been able to solve some of the most challenging AI problems involving cause and effect.
Curie is powerful, yet fast. While Davinci is stronger when it comes to analyzin
### Babbage
-Babbage can perform straightforward tasks like simple classification. It's also capable when it comes to semantic search ranking how well documents match up with search queries.
+Babbage can perform straightforward tasks like simple classification. It's also capable when it comes to semantic search, ranking how well documents match up with search queries.
**Use for**: Moderate classification, semantic search classification
Babbage can perform straightforward tasks like simple classification. It's als
Ada is usually the fastest model and can perform tasks like parsing text, address correction and certain kinds of classification tasks that don't require too much nuance. Ada's performance can often be improved by providing more context.
-**Use For** Parsing text, simple classification, address correction, keywords
-
-> [!NOTE]
-> Any task performed by a faster model like Ada can be performed by a more powerful model like Curie or Davinci.
+**Use for**: Parsing text, simple classification, address correction, keywords
-## Codex Series
+## Codex models
The Codex models are descendants of our base GPT-3 models that can understand and generate code. Their training data contains both natural language and billions of lines of public code from GitHub.
-They're most capable in Python and proficient in over a dozen languages, including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.
+They're most capable in Python and proficient in over a dozen languages, including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell. The following list represents the latest versions of Codex models, ordered by increasing capability.
-Currently we only offer one Codex model: `code-cushman-001`.
+- `code-cushman-001`
+- `code-davinci-002`
-## Embeddings Models
+### <a id="codex-davinci"></a>Davinci
-Currently we offer three families of embedding models for different functionalities: text search, text similarity and code search. Each family includes up to four models across a spectrum of capabilities:
+Similar to GPT-3, Davinci is the most capable Codex model and can perform any task the other models can perform, often with less instruction. For applications requiring deep understanding of the content, Davinci produces the best results. These increased capabilities require more compute resources, so Davinci costs more and isn't as fast as other models.
-Ada (1024 dimensions),
-Babbage (2048 dimensions),
-Curie (4096 dimensions),
-Davinci (12,288 dimensions).
-Davinci is the most capable, but is slower and more expensive than the other models. Ada is the least capable, but is both faster and cheaper.
+### Cushman
-These embedding models are specifically created to be good at a particular task.
+Cushman is powerful, yet fast. While Davinci is stronger when it comes to analyzing complicated tasks, Cushman is a capable model for many code generation tasks. Cushman is also typically faster and cheaper than Davinci.
-### Similarity embeddings
+## Embeddings models
-These models are good at capturing semantic similarity between two or more pieces of text.
+Currently, we offer three families of Embeddings models for different functionalities:
-| USE CASES | AVAILABLE MODELS |
-|||
-| Clustering, regression, anomaly detection, visualization |Text-similarity-ada-001, <br> text-similarity-babbage-001, <br> text-similarity-curie-001, <br> text-similarity-davinci-001 <br>|
+- [Similarity](#similarity-embedding)
+- [Text search](#text-search-embedding)
+- [Code search](#code-search-embedding)
-### Text search embeddings
+Each family includes models across a range of capabilities. The following list indicates the length of the numerical vector returned by the service, based on model capability:
+
+- Ada: 1024 dimensions
+- Babbage: 2048 dimensions
+- Curie: 4096 dimensions
+- Davinci: 12288 dimensions
+
+Davinci is the most capable, but is slower and more expensive than the other models. Ada is the least capable, but is both faster and cheaper.
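To make the dimension list concrete, here's a sketch of a similarity embedding call with an Ada-capability model (endpoint, API version, and deployment name are placeholders, with the deployment assumed to match the model name):

```python
import os
import openai

openai.api_type = "azure"
openai.api_base = "https://<your-resource-name>.openai.azure.com/"
openai.api_version = "2022-06-01-preview"  # assumed API version
openai.api_key = os.environ["AZURE_OPENAI_KEY"]

# Assumes a deployment of text-similarity-ada-001 with the same name.
response = openai.Embedding.create(
    engine="text-similarity-ada-001",
    input="The food was delicious and the service was friendly.",
)

vector = response["data"][0]["embedding"]
print(len(vector))  # 1024 dimensions for an Ada-capability model
```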
-These models help measure whether long documents are relevant to a short search query. There are two types: one for embedding the documents to be retrieved, and one for embedding the search query.
+### Similarity embedding
-| USE CASES | AVAILABLE MODELS |
+These models are good at capturing semantic similarity between two or more pieces of text.
+
+| Use cases | Models |
|||
-| Search, context relevance, information retrieval | text-search-ada-doc-001, <br> text-search-ada-query-001 <br> text-search-babbage-doc-001, <br> text-search-babbage-query-001, <br> text-search-curie-doc-001, <br> text-search-curie-query-001, <br> text-search-davinci-doc-001, <br> text-search-davinci-query-001 <br> |
+| Clustering, regression, anomaly detection, visualization | `text-similarity-ada-001` <br> `text-similarity-babbage-001` <br> `text-similarity-curie-001` <br> `text-similarity-davinci-001` <br>|
-### Code search embeddings
+### Text search embedding
-Similar to text search embeddings, there are two types: one for embedding code snippets to be retrieved and one for embedding natural language search queries.
+These models help measure whether long documents are relevant to a short search query. There are two input types supported by this family: `doc`, for embedding the documents to be retrieved, and `query`, for embedding the search query.
-| USE CASES | AVAILABLE MODELS |
+| Use cases | Models |
|||
-| Code search and relevance | code-search-ada-code-001, <br> code-search-ada-text-001, <br> code-search-babbage-code-001, <br> code-search-babbage-text-001 |
+| Search, context relevance, information retrieval | `text-search-ada-doc-001` <br> `text-search-ada-query-001` <br> `text-search-babbage-doc-001` <br> `text-search-babbage-query-001` <br> `text-search-curie-doc-001` <br> `text-search-curie-query-001` <br> `text-search-davinci-doc-001` <br> `text-search-davinci-query-001` <br> |
-When using our embedding models, keep in mind their limitations and risks.
+### Code search embedding
-## Finding the right model
+Similar to text search embedding models, there are two input types supported by this family: `code`, for embedding code snippets to be retrieved, and `text`, for embedding natural language search queries.
+
+| Use cases | Models |
+|||
+| Code search and relevance | `code-search-ada-code-001` <br> `code-search-ada-text-001` <br> `code-search-babbage-code-001` <br> `code-search-babbage-text-001` |
-We recommend starting with our Davinci model since it will be the best way to understand what the service is capable of. After you have an idea of what you want to accomplish, you can either stay with Davinci if you're not concerned about cost and speed, or you can move onto Curie or another model and try to optimize around its capabilities.
+When using our Embeddings models, keep in mind their limitations and risks.
## Next steps
cognitive-services Completions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/completions.md
write a tagline for an ice cream shop
we serve up smiles with every scoop! ```
-The actual completion results you see may differ because the API is stochastic by default which means that you might get a slightly different completion every time you call it, even if your prompt stays the same. You can control this behavior with the temperature setting.
+The actual completion results you see may differ because the API is stochastic by default. In other words, you might get a slightly different completion every time you call it, even if your prompt stays the same. You can control this behavior with the temperature setting.
-This simple text-in, text-out interface means you can "program" the model by providing instructions or just a few examples of what you'd like it to do. Its success generally depends on the complexity of the task and quality of your prompt. A general rule is to think about how you would write a word problem for a middle school student to solve. A well-written prompt provides enough information for the model to know what you want and how it should respond.
+This simple, "text in, text out" interface means you can "program" the model by providing instructions or just a few examples of what you'd like it to do. Its success generally depends on the complexity of the task and quality of your prompt. A general rule is to think about how you would write a word problem for a middle school student to solve. A well-written prompt provides enough information for the model to know what you want and how it should respond.
> [!NOTE] > Keep in mind that the models' training data cuts off in October 2019, so they may not have knowledge of current events. We plan to add more continuous training in the future.
This simple text-in, text-out interface means you can "program" the model by pro
OpenAI's models can do everything from generating original stories to performing complex text analysis. Because they can do so many things, you have to be explicit in showing what you want. Showing, not just telling, is often the secret to a good prompt.
-The models try to predict what you want from the prompt. If you send the words "Give me a list of cat breeds," the model wouldn't automatically assume that you're asking for a list of cat breeds. You could just as easily be asking the model to continue a conversation where the first words are "Give me a list of cat breeds" and the next ones are "and I'll tell you which ones I like." If the model only assumed that you wanted a list of cats, it wouldn't be as good at content creation, classification, or other tasks.
+The models try to predict what you want from the prompt. If you send the words "Give me a list of cat breeds," the model wouldn't automatically assume that you're asking for a list of cat breeds. You could as easily be asking the model to continue a conversation where the first words are "Give me a list of cat breeds" and the next ones are "and I'll tell you which ones I like." If the model only assumed that you wanted a list of cats, it wouldn't be as good at content creation, classification, or other tasks.
There are three basic guidelines for creating prompts: **Show and tell.** Make it clear what you want either through instructions, examples, or a combination of the two. If you want the model to rank a list of items in alphabetical order or to classify a paragraph by sentiment, show it that's what you want.
-**Provide quality data.** If you're trying to build a classifier or get the model to follow a pattern, make sure that there are enough examples. Be sure to proofread your examples: the model is usually smart enough to see through basic spelling mistakes and give you a response, but it also might assume this is intentional and it can affect the response.
+**Provide quality data.** If you're trying to build a classifier or get the model to follow a pattern, make sure that there are enough examples. Be sure to proofread your examples: the model is usually smart enough to see through basic spelling mistakes and give you a response, but it also might assume that the mistakes are intentional, which can affect the response.
-**Check your settings.** The temperature and top_p settings control how deterministic the model is in generating a response. If you're asking it for a response where there's only one right answer, then you'd want to set these lower. If you're looking for a response that's not obvious, then you might want to set them higher. The number one mistake people use with these settings is assuming that they're "cleverness" or "creativity" controls.
+**Check your settings.** The temperature and top_p settings control how deterministic the model is in generating a response. If you're asking it for a response where there's only one right answer, then you'd want to set these settings to lower values. If you're looking for a response that's not obvious, then you might want to set them to higher values. The number one mistake people make with these settings is assuming that they're "cleverness" or "creativity" controls.
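To illustrate the settings guideline, here's a sketch contrasting a low-temperature call for a single-right-answer prompt with a higher-temperature call for an open-ended one (endpoint, API version, and deployment name are placeholders):

```python
import os
import openai

openai.api_type = "azure"
openai.api_base = "https://<your-resource-name>.openai.azure.com/"
openai.api_version = "2022-06-01-preview"  # assumed API version
openai.api_key = os.environ["AZURE_OPENAI_KEY"]

# One right answer: keep temperature low for a near-deterministic reply.
factual = openai.Completion.create(
    engine="text-davinci-002",  # assumed deployment name
    prompt="What year did the French Revolution begin?",
    temperature=0,
    max_tokens=8,
)

# Open-ended prompt: a higher temperature yields more varied completions.
creative = openai.Completion.create(
    engine="text-davinci-002",
    prompt="Write a tagline for an ice cream shop.",
    temperature=0.9,
    max_tokens=32,
)

print(factual["choices"][0]["text"].strip())
print(creative["choices"][0]["text"].strip())
```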
### Troubleshooting
While all prompts result in completions, it can be helpful to think of text comp
Vertical farming provides a novel solution for producing food locally, reducing transportation costs and ```
-This next prompt shows how you can use completion to help write React components. We send some code to the API, and it's able to continue the rest because it has an understanding of the React library. We recommend using OpenAI's Codex models for tasks that involve understanding or generating code. Currently only `code-cushman-001` is supported.
+This next prompt shows how you can use completion to help write React components. We send some code to the API, and it's able to continue the rest because it has an understanding of the React library. We recommend using models from our Codex series for tasks that involve understanding or generating code. Currently, we support two Codex models: `code-davinci-002` and `code-cushman-001`. For more information about Codex models, see the [Codex models](../concepts/models.md#codex-models) section in [Models](../concepts/models.md).
``` import React from 'react';
Q:
## Working with code
-The Codex model series is a descendant of OpenAI's base GPT-3 series that's been trained on both natural language and billions of lines of code. It's most capable in Python and proficient in over a dozen languages including JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.
+The Codex model series is a descendant of OpenAI's base GPT-3 series that's been trained on both natural language and billions of lines of code. It's most capable in Python and proficient in over a dozen languages including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.
You can use Codex for a variety of tasks including:
Create an array of users and email addresses
""" ```
-**Put comments inside of functions can be helpful.** Recommended coding standards usually suggest placing the description of a function inside the function. Using this format helps Codex more clearly understand what you want the function to do.
+**Putting comments inside of functions can be helpful.** Recommended coding standards suggest placing the description of a function inside the function. Using this format helps Codex more clearly understand what you want the function to do.
``` def getUserBalance(id):
Create a list of random animals and species
animals = [ {"name": "Chomper", "species": "Hamster"}, {"name": ```
-**Lower temperatures give more precise results.** Setting the API temperature to 0, or close to zero (such as 0.1 or 0.2) tends to give better results in most cases. Unlike GPT-3, where a higher temperature can provide useful creative and random results, higher temperatures with Codex may give you really random or erratic responses.
+**Lower temperatures give more precise results.** Setting the API temperature to 0, or close to zero (such as 0.1 or 0.2) tends to give better results in most cases. Unlike GPT-3 models, where a higher temperature can provide useful creative and random results, higher temperatures with Codex models may give you really random or erratic responses.
In cases where you need Codex to provide different potential results, start at zero and then increment upwards by .1 until you find suitable variation.
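That advice translates directly into a small sweep, sketched below (endpoint, API version, and the Codex deployment name are placeholders):

```python
import os
import openai

openai.api_type = "azure"
openai.api_base = "https://<your-resource-name>.openai.azure.com/"
openai.api_version = "2022-06-01-preview"  # assumed API version
openai.api_key = os.environ["AZURE_OPENAI_KEY"]

prompt = '"""\nWrite a Python function that reverses a string\n"""'

# Start at 0 and step up by 0.1 until the output shows suitable variation.
for temperature in (0.0, 0.1, 0.2, 0.3):
    response = openai.Completion.create(
        engine="code-davinci-002",  # assumed Codex deployment name
        prompt=prompt,
        temperature=temperature,
        max_tokens=64,
    )
    print(f"--- temperature={temperature} ---")
    print(response["choices"][0]["text"])
```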
Use the lists to generate stories about what I saw at the zoo in each city
*/ ```
-**Use Codex to explain code.** Codex's ability to create and understand code allows us to use it to perform tasks like explaining what the code in a file does. One way to accomplish this is by putting a comment after a function that starts with "This function" or "This application is." Codex will usually interpret this as the start of an explanation and complete the rest of the text.
+**Use Codex to explain code.** Codex's ability to create and understand code allows us to use it to perform tasks like explaining what the code in a file does. One way to accomplish this is by putting a comment after a function that starts with "This function" or "This application is." Codex typically interprets this comment as the start of an explanation and completes the rest of the text.
``` /* Explain what the previous function is doing: It
cognitive-services Fine Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/fine-tuning.md
The Azure OpenAI Service lets you tailor our models to your personal datasets us
## Prerequisites -- An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)-- Access granted to service in the desired Azure subscription. This service is currently invite only. You can fill out a new use case request here: <https://aka.ms/oai/access>. Please open an issue on this repo to contact us if you have an issue
+- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>
+- Access granted to the Azure OpenAI service in the desired Azure subscription
+
+   Currently, access to this service is granted only by application. You can apply for access to the Azure OpenAI service by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. If you have a problem, open an issue on this repo to contact us.
- The following Python libraries: os, requests, json-- An Azure OpenAI Service resource with a model deployed. If you don't have a resource/model the process is documented in our [resource deployment guide](../how-to/create-resource.md)
+- An Azure OpenAI Service resource with a model deployed
+
+   If you don't have a resource or model, the process is documented in our [resource deployment guide](../how-to/create-resource.md)
## Fine-tuning workflow
cognitive-services Integrate Synapseml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/integrate-synapseml.md
The Azure OpenAI service can be used to solve a large number of natural language
## Prerequisites -- An Azure OpenAI resource - request access [here](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUOFA5Qk1UWDRBMjg0WFhPMkIzTzhKQ1dWNyQlQCN0PWcu) before [creating a resource](create-resource.md?pivots=web-portal#create-a-resource)
+- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>
+- Access granted to the Azure OpenAI service in the desired Azure subscription
+
+   Currently, access to this service is granted only by application. You can apply for access to the Azure OpenAI service by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. If you have a problem, open an issue on this repo to contact us.
+- An Azure OpenAI resource - [create a resource](create-resource.md?pivots=web-portal#create-a-resource)
- An Apache Spark cluster with SynapseML installed - create a serverless Apache Spark pool [here](../../../synapse-analytics/get-started-analyze-spark.md#create-a-serverless-apache-spark-pool) We recommend [creating a Synapse workspace](../../../synapse-analytics/get-started-create-workspace.md), but Azure Databricks, HDInsight, Spark on Kubernetes, or even a Python environment with the `pyspark` package will also work.
display(completed_autobatch_df)
### Prompt engineering for translation
-The Azure OpenAI service can solve many different natural language tasks through [prompt engineering](completions.md). Here we show an example of prompting for language translation:
+The Azure OpenAI service can solve many different natural language tasks through [prompt engineering](completions.md). Here, we show an example of prompting for language translation:
```python translate_df = spark.createDataFrame(
display(completion.transform(translate_df))
### Prompt for question answering
-Here, we prompt GPT-3 for general-knowledge question answering:
+Here, we prompt the GPT-3 model for general-knowledge question answering:
```python qa_df = spark.createDataFrame(
cognitive-services Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/managed-identity.md
In the following sections, you'll use the Azure CLI to assign roles, and obtain
## Prerequisites -- An Azure subscription-- Access granted to service in the desired Azure subscription. -- Azure CLI. [Installation Guide](/cli/azure/install-azure-cli)
+- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>
+- Access granted to the Azure OpenAI service in the desired Azure subscription
+
+   Currently, access to this service is granted only by application. You can apply for access to the Azure OpenAI service by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. If you have a problem, open an issue on this repo to contact us.
+- Azure CLI - [Installation Guide](/cli/azure/install-azure-cli)
- The following Python libraries: os, requests, json ## Sign into the Azure CLI
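After signing in and assigning roles, a request authenticated with Azure Active Directory swaps the `api-key` header for a bearer token. The following is a sketch only: the `azure-identity` usage and token scope are standard for Azure Cognitive Services, while the endpoint, deployment name, and API version are placeholders.

```python
import requests
from azure.identity import DefaultAzureCredential

# Acquire an Azure AD token for the Cognitive Services scope.
credential = DefaultAzureCredential()
token = credential.get_token("https://cognitiveservices.azure.com/.default")

response = requests.post(
    "https://<your-resource-name>.openai.azure.com/openai/deployments/"
    "<your-deployment>/completions",
    params={"api-version": "2022-06-01-preview"},  # assumed API version
    headers={"Authorization": f"Bearer {token.token}"},
    json={"prompt": "Hello,", "max_tokens": 5},
)
print(response.json())
```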
cognitive-services Work With Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/work-with-code.md
keywords:
# Codex models and Azure OpenAI
-The Codex model series is a descendant of our GPT-3 series that's been trained on both natural language and billions of lines of code. It's most capable in Python and proficient in over a dozen languages including JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.
+The Codex model series is a descendant of our GPT-3 series that's been trained on both natural language and billions of lines of code. It's most capable in Python and proficient in over a dozen languages including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.
You can use Codex for a variety of tasks including:
You can use Codex for a variety of tasks including:
## How to use the Codex models
-Here are a few examples of using Codex that can be tested in the [Azure OpenAI Studio's](https://oai.azure.com) playground with a deployment of a code series model such as `code-cushman-001`.
+Here are a few examples of using Codex that can be tested in the [Azure OpenAI Studio's](https://oai.azure.com) playground with a deployment of a Codex series model, such as `code-davinci-002`.
### Saying "Hello" (Python)
animals = [ {"name": "Chomper", "species": "Hamster"}, {"name":
### Lower temperatures give more precise results
-Setting the API temperature to 0, or close to zero (such as 0.1 or 0.2) tends to give better results in most cases. Unlike GPT-3, where a higher temperature can provide useful creative and random results, higher temperatures with Codex may give you really random or erratic responses.
+Setting the API temperature to 0, or close to zero (such as 0.1 or 0.2) tends to give better results in most cases. Unlike GPT-3 models, where a higher temperature can provide useful creative and random results, higher temperatures with Codex models may give you really random or erratic responses.
In cases where you need Codex to provide different potential results, start at zero and then increment upwards by 0.1 until you find suitable variation.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/overview.md
The Azure OpenAI service provides REST API access to OpenAI's powerful language
### Features overview | Feature | Azure OpenAI |
-| | |
-| Models available | GPT-3 base series <br> Codex Series <br> Embeddings Series <br> Learn more in our [Models](./concepts/models.md) page.|
-| Fine-tuning | Ada, <br>Babbage, <br> Curie,<br>Code-cushman-001* <br> Davinci*<br> \* available by request|
+| | |
+| Models available | GPT-3 base series <br> Codex series <br> Embeddings series <br> Learn more in our [Models](./concepts/models.md) page.|
+| Fine-tuning | Ada <br> Babbage <br> Curie <br> Cushman* <br> Davinci* <br> \* available by request |
| Billing Model| Coming Soon | | Virtual network support | Yes | | Managed Identity| Yes, via Azure Active Directory | | UI experience | **Azure Portal** for account & resource management, <br> **Azure OpenAI Service Studio** for model exploration and fine tuning |
-| Regional availability | South Central US, <br> West Europe |
+| Regional availability | South Central US <br> West Europe |
| Content filtering | Prompts and completions are evaluated against our content policy with automated systems. High severity content will be filtered. | ## Responsible AI
The number of examples typically range from 0 to 100 depending on how many can f
### Models
-The service provides users access to several different models. Each model provides a different capability and price point. The base GPT-3 models are known as Davinci, Curie, Babbage and Ada in decreasing order of intelligence and speed.
+The service provides users access to several different models. Each model provides a different capability and price point. The GPT-3 base models are known as Davinci, Curie, Babbage, and Ada in decreasing order of capability and speed.
-The Codex series of models are a descendant of GPT-3 and have been trained on both natural language and code to power natural language to code use cases. Learn more about each model on our [models concept page](./concepts/models.md).
+The Codex series of models is a descendant of GPT-3 and has been trained on both natural language and code to power natural language to code use cases. Learn more about each model on our [models concept page](./concepts/models.md).
## Next steps
cognitive-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/quotas-limits.md
The following sections provide you with a quick guide to the quotas and limits t
| Max Files per resource | 50 | | Total size of all files per resource | 1 GB| | Max training job time (job will fail if exceeded) | 120 hours |
-| Max training job size (tokens in training file * # of epochs) | **Ada**: 4-M tokens <br> **Babbage**: 4-M tokens <br> **Curie**: 4-M tokens <br> **Cushman**: 4-M tokens <br> **DaVinci**: 500 K |
+| Max training job size (tokens in training file * # of epochs) | **Ada**: 4M tokens <br> **Babbage**: 4M tokens <br> **Curie**: 4M tokens <br> **Cushman**: 4M tokens <br> **Davinci**: 500K tokens |
### General best practices to mitigate throttling during autoscaling
cognitive-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/reference.md
POST https://{your-resource-name}.openai.azure.com/openai/fine-tunes?api-version
| validation_file| string | no | null | The ID of an uploaded file that contains validation data. <br> If you provide this file, the data is used to generate validation metrics periodically during fine-tuning. These metrics can be viewed in the fine-tuning results file. Your train and validation data should be mutually exclusive. <br><br> Your dataset must be formatted as a JSONL file, where each validation example is a JSON object with the keys "prompt" and "completion". Additionally, you must upload your file with the purpose fine-tune. | | batch_size | integer | no | null | The batch size to use for training. The batch size is the number of training examples used to train a single forward and backward pass. <br><br> By default, the batch size will be dynamically configured to be ~0.2% of the number of examples in the training set, capped at 256 - in general, we've found that larger batch sizes tend to work better for larger datasets. | learning_rate_multiplier | number (double) | no | null | The learning rate multiplier to use for training. The fine-tuning learning rate is the original learning rate used for pre-training multiplied by this value.<br><br> We recommend experimenting with values in the range 0.02 to 0.2 to see what produces the best results. |
-| n_epochs | integer | no | 4 for `ada`, `babbage`, `curie`. 1 for `DaVinci` | The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset. |
+| n_epochs | integer | no | 4 for `ada`, `babbage`, `curie`. 1 for `davinci` | The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset. |
| prompt_loss_weight | number (double) | no | 0.1 | The weight to use for loss on the prompt tokens. This controls how much the model tries to learn to generate the prompt (as compared to the completion, which always has a weight of 1.0), and can add a stabilizing effect to training when completions are short. <br><br> | | compute_classification_metrics | boolean | no | false | If set, we calculate classification-specific metrics such as accuracy and F-1 score using the validation set at the end of every epoch. | | classification_n_classes | integer | no | null | The number of classes in a classification task. This parameter is required for multiclass classification |
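Pulling those parameters together, a job-creation request might look like the sketch below. The `model` and `training_file` fields and the file IDs are assumptions based on typical fine-tune requests (only `validation_file` and the tuning parameters appear in the table above), and the `api-version` value is a placeholder.

```python
import os
import requests

resource = os.environ["AZURE_OPENAI_RESOURCE"]  # placeholder resource name
api_key = os.environ["AZURE_OPENAI_KEY"]

body = {
    "model": "curie",                  # assumed: base model to fine-tune
    "training_file": "file-abc123",    # assumed: ID of an uploaded training file
    "validation_file": "file-def456",  # optional: enables periodic validation metrics
    "n_epochs": 4,                     # default for ada, babbage, and curie
    "learning_rate_multiplier": 0.1,   # recommended range: 0.02 to 0.2
    "prompt_loss_weight": 0.1,         # default
}

response = requests.post(
    f"https://{resource}.openai.azure.com/openai/fine-tunes",
    params={"api-version": "2022-06-01-preview"},  # assumed API version
    headers={"api-key": api_key},
    json=body,
)
response.raise_for_status()
print(response.json())
```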
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/chat/concepts.md
Some SDKs (like the JavaScript Chat SDK) support real-time notifications. This f
## Push notifications To send push notifications for messages missed by your users while they were away, Communication Services provides two different ways to integrate:
+ - Use an Event Grid resource to subscribe to chat related events (post operation) which can be plugged into your custom app notification service. For more details, see [Server Events](../../../event-grid/event-schema-communication-services.md?bc=/azure/bread/toc.json&toc=/azure/communication-services/toc.json).
- Connect a Notification Hub resource with Communication Services resource to send push notifications and notify your application users about incoming chats and messages when the mobile app is not running in the foreground. The iOS and Android SDKs can support the following event:
communication-services Add Chat Push Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/add-chat-push-notifications.md
Access the sample code for this tutorial on [GitHub](https://github.com/Azure-Sa
## Prerequisites
-1. Finish all the prerequisite steps in [Chat Quickstart](https://docs.microsoft.com/azure/communication-services/quickstarts/chat/get-started?pivots=programming-language-swift)
+1. Finish all the prerequisite steps in [Chat Quickstart](/azure/communication-services/quickstarts/chat/get-started?pivots=programming-language-swift)
2. ANH Setup Create an Azure Notification Hub within the same subscription as your Communication Services resource and link the Notification Hub to your Communication Services resource. See [Notification Hub provisioning](../concepts/notifications.md#notification-hub-provisioning).
In protocol extension, chat SDK provides the implementation of `decryptPayload(n
5. Plug the iOS device into your Mac, run the program, and click "allow" when asked to authorize push notifications on the device. 6. As User B, send a chat message. You (User A) should be able to receive a push notification on your iOS device. --
connectors Connectors Native Reqres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-reqres.md
ms.suite: integration ms.reviewers: estfan, azla Previously updated : 08/16/2022 Last updated : 08/29/2022 tags: connectors
When you use the Request trigger to receive inbound requests, you can model the
> * If you have one or more Response actions in a complex workflow with branches, make sure that the workflow > processes at least one Response action during runtime. Otherwise, if all Response actions are skipped, > the caller receives a **502 Bad Gateway** error, even if the workflow finishes successfully.
+>
+> * In a Standard logic app *stateless* workflow, the Response action must appear last in your workflow. If the action appears
+> anywhere else, Azure Logic Apps still won't run the action until all other actions finish running.
+ ## [Consumption](#tab/consumption)
cosmos-db Dedicated Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/dedicated-gateway.md
Previously updated : 11/08/2021 Last updated : 08/29/2022
-# Azure Cosmos DB dedicated gateway - Overview (Preview)
+# Azure Cosmos DB dedicated gateway - Overview
[!INCLUDE[appliesto-sql-api](includes/appliesto-sql-api.md)] A dedicated gateway is server-side compute that is a front-end to your Azure Cosmos DB account. When you connect to the dedicated gateway, it both routes requests and caches data. Like provisioned throughput, the dedicated gateway is billed hourly. ## Overview
-You can provision a dedicated gateway to improve performance at scale. The most common reason that you would want to provision a dedicated gateway would be for caching. When you provision a dedicated gateway, an [integrated cache](integrated-cache.md) is automatically configured within the dedicated gateway. Point reads and queries that hit the integrated cache do not use any of your RUs. Provisioning a dedicated gateway with an integrated cache can help read-heavy workloads lower costs on Azure Cosmos DB.
+You can provision a dedicated gateway to improve performance at scale. The most common reason that you would want to provision a dedicated gateway is for caching. When you provision a dedicated gateway, an [integrated cache](integrated-cache.md) is automatically configured within the dedicated gateway. Point reads and queries that hit the integrated cache do not use any of your RUs. Provisioning a dedicated gateway with an integrated cache can help read-heavy workloads lower costs on Azure Cosmos DB.
-The dedicated gateway is built into Azure Cosmos DB. When you [provision a dedicated gateway](how-to-configure-integrated-cache.md), you have a fully-managed node that routes requests to backend partitions. Connecting to Azure Cosmos DB with the dedicated gateway provides lower and more predictable latency than connecting to Azure Cosmos DB with the standard gateway. Even cache misses will see latency improvements when comparing the dedicated gateway and standard gateway.
+The dedicated gateway is built into Azure Cosmos DB. When you [provision a dedicated gateway](how-to-configure-integrated-cache.md), you have a fully managed node that routes requests to backend partitions. Connecting to Azure Cosmos DB with the dedicated gateway provides lower and more predictable latency than connecting to Azure Cosmos DB with the standard gateway. Even cache misses will see latency improvements when comparing the dedicated gateway and standard gateway.
There are only minimal code changes required in order for your application to use a dedicated gateway. Both new and existing Azure Cosmos DB accounts can provision a dedicated gateway for improved read performance.
cosmoscachefeedback@microsoft.com
## Connection modes
-There are three ways to connect to an Azure Cosmos DB account:
+There are two [connectivity modes](./sql/sql-sdk-connection-modes.md) for Azure Cosmos DB: Direct mode and Gateway mode. With Gateway mode, you can connect to either the standard gateway or the dedicated gateway, depending on the endpoint you configure.
-- [Direct mode](#connect-to-azure-cosmos-db-using-direct-mode)-- [Gateway mode using the standard gateway](#connect-to-azure-cosmos-db-using-gateway-mode)-- [Gateway mode using the dedicated gateway](#connect-to-azure-cosmos-db-using-the-dedicated-gateway) (only available for SQL API accounts) ### Connect to Azure Cosmos DB using direct mode
-When you connect to Azure Cosmos DB using direct mode, your application connects directly to the Azure Cosmos DB backend. Even if you have many physical partitions, request routing is handled entirely client-side. Direct mode offers low latency because your application can communicate directly with the Azure Cosmos DB backend and doesn't need an intermediate network hop.
-
-Graphical representation of direct mode connection:
-
+When you connect to Azure Cosmos DB using direct mode, your application connects directly to the Azure Cosmos DB backend. Even if you have many physical partitions, request routing is handled entirely client-side. Direct mode offers low latency because your application can communicate directly with the Azure Cosmos DB backend and doesn't need an intermediate network hop. If you choose to connect with direct mode, your requests will not use the dedicated gateway or the integrated cache.
### Connect to Azure Cosmos DB using gateway mode
If you connect to Azure Cosmos DB using gateway mode, your application will conn
When connecting to Azure Cosmos DB with gateway mode, you can connect with either of the following options:
-* **Standard gateway** - While the backend, which includes your provisioned throughput and storage, has dedicated capacity per container, the standard gateway is shared between many Azure Cosmos accounts. It is practical for many customers to share a standard gateway since the compute resources consumed by each individual customer is small.
+* **Standard gateway** - While the backend, which includes your provisioned throughput and storage, has dedicated capacity per container, the standard gateway is shared between many Azure Cosmos DB accounts. It is practical for many customers to share a standard gateway since the compute resources consumed by each individual customer are small.
* **Dedicated gateway** - In this gateway, the backend and gateway both have dedicated capacity. The integrated cache requires a dedicated gateway because it requires significant CPU and memory that is specific to your Azure Cosmos account.
-### Connect to Azure Cosmos DB using the dedicated gateway
-
-You must connect to Azure Cosmos DB using the dedicated gateway in order to use the integrated cache. The dedicated gateway has a different endpoint from the standard one provided with your Azure Cosmos DB account. When you connect to your dedicated gateway endpoint, your application sends a request to the dedicated gateway, which then routes the request to different backend nodes. If possible, the integrated cache will serve the result.
+You must connect to Azure Cosmos DB using the dedicated gateway in order to use the integrated cache. The dedicated gateway has a different endpoint from the standard one provided with your Azure Cosmos DB account, but requests are routed in the same way. When you connect to your dedicated gateway endpoint, your application sends a request to the dedicated gateway, which then routes the request to different backend nodes. If possible, the integrated cache will serve the result.
Diagram of gateway mode connection with a dedicated gateway: ## Provisioning the dedicated gateway
-A dedicated gateway cluster can be provisioned in Core (SQL) API accounts. A dedicated gateway cluster can have up to five nodes and you can add or remove nodes at any time. All dedicated gateway nodes within your account [share the same connection string](how-to-configure-integrated-cache.md#configuring-the-integrated-cache).
+A dedicated gateway cluster can be provisioned in Core (SQL) API accounts. A dedicated gateway cluster can have up to five nodes by default and you can add or remove nodes at any time. All dedicated gateway nodes within your account [share the same connection string](how-to-configure-integrated-cache.md#configuring-the-integrated-cache).
-Dedicated gateway nodes are independent from one another. When you provision multiple dedicated gateway nodes, any single node can route any given request. In addition, each node has a separate integrated cache from the others. The cached data within each node depends on the data that was recently [written or read](integrated-cache.md#item-cache) through that specific node. In other words, if an item or query is cached on one node, it isn't necessarily cached on the others.
+Dedicated gateway nodes are independent from one another. When you provision multiple dedicated gateway nodes, any single node can route any given request. In addition, each node has a separate integrated cache from the others. The cached data within each node depends on the data that was recently [written or read](integrated-cache.md#item-cache) through that specific node. If an item or query is cached on one node, it isn't necessarily cached on the others.
For development, we recommend starting with one node but for production, you should provision three or more nodes for high availability. [Learn how to provision a dedicated gateway cluster with an integrated cache](how-to-configure-integrated-cache.md). Provisioning multiple dedicated gateway nodes allows the dedicated gateway cluster to continue to route requests and serve cached data, even when one of the dedicated gateway nodes is unavailable.
-Because it is in public preview, the dedicated gateway does not have an availability SLA. However, you should generally expect comparable availability to the rest of your Azure Cosmos DB account.
-
-The dedicated gateway is available in the following sizes:
+The dedicated gateway is available in the following sizes. The integrated cache uses approximately 50% of the memory and the rest is reserved for metadata and routing requests to backend partitions.
| **Sku Name** | **vCPU** | **Memory** | | | -- | -- |
The dedicated gateway is available in the following sizes:
| **D16s** | **16** | **64 GB** | > [!NOTE]
-> Once created, you can't modify the size of the dedicated gateway nodes. However, you can add or remove nodes.
> Once created, you can add or remove dedicated gateway nodes, but you can't modify the size of the nodes. To change the size of your dedicated gateway nodes, you can deprovision the cluster and provision it again in a different size. This will result in a short period of downtime unless you change the connection string in your application to use the standard gateway during reprovisioning.
There are many different ways to provision a dedicated gateway: -- [Provision a dedicated gateway using the Azure Portal](how-to-configure-integrated-cache.md#provision-a-dedicated-gateway-cluster)-- [Use Azure Cosmos DB's REAT API](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/service/create)
+- [Provision a dedicated gateway using the Azure Portal](how-to-configure-integrated-cache.md#provision-the-dedicated-gateway)
+- [Use Azure Cosmos DB's REST API](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/service/create)
- [Azure CLI](/cli/azure/cosmosdb/service?view=azure-cli-latest&preserve-view=true#az-cosmosdb-service-create) - [ARM template](/azure/templates/microsoft.documentdb/databaseaccounts/services?tabs=bicep) - Note: You cannot deprovision a dedicated gateway using ARM templates
There are many different ways to provision a dedicated gateway:
When you provision a dedicated gateway cluster in multi-region accounts, identical dedicated gateway clusters are provisioned in each region. For example, consider an Azure Cosmos DB account in East US and North Europe. If you provision a dedicated gateway cluster with two D8 nodes in this account, you'd have four D8 nodes in total - two in East US and two in North Europe. You don't need to explicitly configure dedicated gateways in each region and your connection string remains the same. There are also no changes to best practices for performing failovers.
-> [!NOTE]
-> You cannot provision a dedicated gateway cluster in accounts with availability zones enabled
- Like nodes within a cluster, dedicated gateway nodes across regions are independent. It's possible that the cached data in each region will be different, depending on the recent reads or writes to that region. ## Limitations
-The dedicated gateway has the following limitations during the public preview:
+The dedicated gateway has the following limitations:
-- Dedicated gateways are only supported on SQL API accounts.-- You can't provision a dedicated gateway in Azure Cosmos DB accounts with [IP firewalls](how-to-configure-firewall.md) or [Private Link](how-to-configure-private-endpoints.md) configured.-- You can't provision a dedicated gateway in an Azure Cosmos DB account in a [Virtual Network (Vnet)](how-to-configure-vnet-service-endpoint.md)
+- Dedicated gateways are only supported on SQL API accounts
- You can't provision a dedicated gateway in Azure Cosmos DB accounts with [availability zones](../availability-zones/az-region.md). - You can't use [role-based access control (RBAC)](how-to-setup-rbac.md) to authenticate data plane requests routed through the dedicated gateway
-The dedicated gateway blade is hidden on Azure Cosmos DB accounts with IP firewalls, Vnet, Private Link, or availability zones.
-
-## Supported regions
-
-The dedicated gateway is in public preview and isn't supported in every Azure region yet. Throughout the public preview, we'll be adding new capacity. We won't have region restrictions when the dedicated gateway is generally available.
-
-Current list of supported Azure regions:
-
-| **Americas** | **Europe and Africa** | **Asia Pacific** |
-| | -- | -- |
-| Brazil South | France Central | Australia Central |
-| Canada Central | France South | Australia Central 2 |
-| Canada East | Germany North | Australia Southeast |
-| Central US | Germany West Central | Central India |
-| East US | North Europe | East Asia |
-| East US 2 | Switzerland North | Japan West |
-| North Central US | UK South | Korea Central |
-| South Central US | UK West | Korea South |
-| West Central US | West Europe | Southeast Asia |
-| West US | | UAE Central |
-| West US 2 | | West India |
- ## Next steps
Read more about dedicated gateway usage in the following articles:
- [Configure the integrated cache](how-to-configure-integrated-cache.md) - [Integrated cache FAQ](integrated-cache-faq.md) - Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- - If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
+ - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
- If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db How To Configure Integrated Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-configure-integrated-cache.md
Previously updated : 09/28/2021 Last updated : 08/29/2022
-# How to configure the Azure Cosmos DB integrated cache (Preview)
+# How to configure the Azure Cosmos DB integrated cache
[!INCLUDE[appliesto-sql-api](includes/appliesto-sql-api.md)] This article describes how to provision a dedicated gateway, configure the integrated cache, and connect your application.
This article describes how to provision a dedicated gateway, configure the integ
- An existing application that uses Azure Cosmos DB. If you don't have one, [here are some examples](https://github.com/AzureCosmosDB/labs). - An existing [Azure Cosmos DB SQL (core) API account](create-cosmosdb-resources-portal.md).
-## Provision a dedicated gateway cluster
+## Provision the dedicated gateway
1. Navigate to an Azure Cosmos DB account in the Azure portal and select the **Dedicated Gateway** tab.
- :::image type="content" source="./media/how-to-configure-integrated-cache/dedicated-gateway-tab.png" alt-text="An image that shows how to navigate to the dedicated gateway tab" lightbox="./media/how-to-configure-integrated-cache/dedicated-gateway-tab.png" border="false":::
+ :::image type="content" source="./media/how-to-configure-integrated-cache/dedicated-gateway-tab.png" alt-text="Screenshot of the Azure Portal that shows how to navigate to the Azure Cosmos DB dedicated gateway tab." lightbox="./media/how-to-configure-integrated-cache/dedicated-gateway-tab.png" :::
2. Fill out the **Dedicated gateway** form with the following details: * **Dedicated Gateway** - Turn on the toggle to **Provisioned**.
- * **SKU** - Select a SKU with the required compute and memory size.
- * **Number of instances** - Number of nodes. For development purpose, we recommend starting with one node of the D4 size. Based on the amount of data you need to cache, you can increase the node size after initial testing.
+ * **SKU** - Select a SKU with the required compute and memory size. The integrated cache will use approximately 50% of the memory, and the remaining memory is used for metadata and routing requests to the backend partitions.
+   * **Number of instances** - Number of nodes. For development purposes, we recommend starting with one node of the D4 size. Based on the amount of data you need to cache, and to achieve high availability, you can increase the node size after initial testing.
- :::image type="content" source="./media/how-to-configure-integrated-cache/dedicated-gateway-input.png" alt-text="An image that shows sample input settings for creating a dedicated gateway cluster" lightbox="./media/how-to-configure-integrated-cache/dedicated-gateway-input.png" border="false":::
+ :::image type="content" source="./media/how-to-configure-integrated-cache/dedicated-gateway-input.png" alt-text="Screenshot of the Azure Portal dedicated gateway tab that shows sample input settings for creating a dedicated gateway cluster." lightbox="./media/how-to-configure-integrated-cache/dedicated-gateway-input.png" :::
3. Select **Save** and wait about 5-10 minutes for the dedicated gateway provisioning to complete. When the provisioning is done, you'll see the following notification:
- :::image type="content" source="./media/how-to-configure-integrated-cache/dedicated-gateway-notification.png" alt-text="An image that shows how to check if dedicated gateway provisioning is complete" lightbox="./media/how-to-configure-integrated-cache/dedicated-gateway-notification.png" border="false":::
+ :::image type="content" source="./media/how-to-configure-integrated-cache/dedicated-gateway-notification.png" alt-text="Screenshot of a notification in the Azure Portal that shows how to check if dedicated gateway provisioning is complete." lightbox="./media/how-to-configure-integrated-cache/dedicated-gateway-notification.png" :::
## Configuring the integrated cache
-1. When you create a dedicated gateway, an integrated cache is automatically provisioned. The integrated cache will use approximately 70% of the memory in the dedicated gateway. The remaining 30% of memory in the dedicated gateway is used for routing requests to the backend partitions.
+When you create a dedicated gateway, an integrated cache is automatically provisioned.
-2. Modify your application's connection string to use the new dedicated gateway endpoint.
+1. Modify your application's connection string to use the new dedicated gateway endpoint.
The updated dedicated gateway connection string is in the **Keys** blade:
- :::image type="content" source="./media/how-to-configure-integrated-cache/dedicated-gateway-connection-string.png" alt-text="An image that shows the dedicated gateway connection string" lightbox="./media/how-to-configure-integrated-cache/dedicated-gateway-connection-string.png" border="false":::
+ :::image type="content" source="./media/how-to-configure-integrated-cache/dedicated-gateway-connection-string.png" alt-text="Screenshot of the Azure Portal keys tab with the dedicated gateway connection string." lightbox="./media/how-to-configure-integrated-cache/dedicated-gateway-connection-string.png" :::
All dedicated gateway connection strings follow the same pattern. Remove `documents.azure.com` from your original connection string and replace it with `sqlx.cosmos.azure.com`. A dedicated gateway will always have the same connection string, even if you remove and reprovision it. You don't need to modify the connection string in all applications using the same Azure Cosmos DB account. For example, you could have one `CosmosClient` connect using gateway mode and the dedicated gateway endpoint while another `CosmosClient` uses direct mode. In other words, adding a dedicated gateway doesn't impact the existing ways of connecting to Azure Cosmos DB.
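To make the host swap concrete, here is a minimal sketch; the account name `mycosmosaccount` is a placeholder, not from the original article:

```csharp
// Hypothetical account name, shown for illustration only.
// Original (standard gateway) endpoint:
string standardEndpoint = "https://mycosmosaccount.documents.azure.com:443/";

// Dedicated gateway endpoint: same account, only the host suffix changes.
string dedicatedGatewayEndpoint = "https://mycosmosaccount.sqlx.cosmos.azure.com:443/";
```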
-3. If you're using the .NET or Java SDK, set the connection mode to [gateway mode](sql-sdk-connection-modes.md#available-connectivity-modes). This step isn't necessary for the Python and Node.js SDKs since they don't have additional options of connecting besides gateway mode.
+2. If you're using the .NET or Java SDK, set the connection mode to [gateway mode](sql-sdk-connection-modes.md#available-connectivity-modes). This step isn't necessary for the Python and Node.js SDKs since they don't have additional options for connecting besides gateway mode.
> [!NOTE]
> If you are using the latest .NET or Java SDK version, the default connection mode is direct mode. In order to use the integrated cache, you must override this default.
-If you're using the Java SDK, you must also manually set [contentResponseOnWriteEnabled](/java/api/com.azure.cosmos.cosmosclientbuilder.contentresponseonwriteenabled?view=azure-java-stable&preserve-view=true) to `true` within the `CosmosClientBuilder`. If you're using any other SDK, this value already defaults to `true`, so you don't need to make any changes.
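Putting these pieces together, a minimal .NET client for the dedicated gateway might look like the following sketch; the endpoint and key are placeholders:

```csharp
using Microsoft.Azure.Cosmos;

// A minimal sketch; the endpoint and key are placeholders.
CosmosClient client = new CosmosClient(
    "https://mycosmosaccount.sqlx.cosmos.azure.com:443/",
    "<account-key>",
    new CosmosClientOptions
    {
        // Override the direct-mode default so requests flow through
        // the dedicated gateway and can be served by the integrated cache.
        ConnectionMode = ConnectionMode.Gateway
    });
```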
- ## Adjust request consistency
-You must adjust the request consistency to session or eventual. If not, the request will always bypass the integrated cache. The easiest way to configure a specific consistency for all read operations is to [set it at the account-level](consistency-levels.md#configure-the-default-consistency-level). You can also configure consistency at the [request-level](how-to-manage-consistency.md#override-the-default-consistency-level), which is recommended if you only want a subset of your reads to utilize the integrated cache.
+You must ensure the request consistency is session or eventual. If not, the request will always bypass the integrated cache. The easiest way to configure a specific consistency for all read operations is to [set it at the account-level](consistency-levels.md#configure-the-default-consistency-level). You can also configure consistency at the [request-level](how-to-manage-consistency.md#override-the-default-consistency-level), which is recommended if you only want a subset of your reads to utilize the integrated cache.
> [!NOTE]
> If you are using the Python SDK, you **must** explicitly set the consistency level for each request. The default account-level setting will not automatically apply.
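As an illustration (an editorial sketch, not from the source article), a .NET query can opt in to eventual consistency through its request options; this assumes an initialized `Container` named `container`:

```csharp
using Microsoft.Azure.Cosmos;

// Sketch only; assumes an initialized Container named 'container'.
QueryRequestOptions options = new QueryRequestOptions
{
    // Session or eventual consistency is required for integrated cache hits.
    ConsistencyLevel = ConsistencyLevel.Eventual
};

FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(
    new QueryDefinition("SELECT * FROM c"),
    requestOptions: options);
```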
## Adjust MaxIntegratedCacheStaleness

-Configure `MaxIntegratedCacheStaleness`, which is the maximum time in which you are willing to tolerate stale cached data. We recommend setting the `MaxIntegratedCacheStaleness` as high as possible because it will increase the likelihood that repeated point reads and queries can be cache hits. If you set `MaxIntegratedCacheStaleness` to 0, your read request will **never** use the integrated cache, regardless of the consistency level. When not configured, the default `MaxIntegratedCacheStaleness` is 5 minutes.
+Configure `MaxIntegratedCacheStaleness`, which is the maximum time in which you are willing to tolerate stale cached data. It is recommended to set the `MaxIntegratedCacheStaleness` as high as possible because it will increase the likelihood that repeated point reads and queries can be cache hits. If you set `MaxIntegratedCacheStaleness` to 0, your read request will **never** use the integrated cache, regardless of the consistency level. When not configured, the default `MaxIntegratedCacheStaleness` is 5 minutes.
+
+Adjusting the `MaxIntegratedCacheStaleness` is supported in these versions of each SDK:
-**.NET**
+| SDK | Supported versions |
+| --- | --- |
+| **.NET SDK v3** | *>= 3.30.0* |
+| **Java SDK v4** | *>= 4.34.0* |
+| **Node.js SDK** | *>= 3.17.0* |
+| **Python SDK** | *>= 4.3.1* |
+
+### [.NET](#tab/dotnet)
```csharp
-FeedIterator<Food> myQuery = container.GetItemQueryIterator<Food>(new QueryDefinition("SELECT * FROM c"), requestOptions: new QueryRequestOptions
+FeedIterator<MyClass> myQuery = container.GetItemQueryIterator<MyClass>(new QueryDefinition("SELECT * FROM c"), requestOptions: new QueryRequestOptions
{
-    ConsistencyLevel = ConsistencyLevel.Eventual,
    DedicatedGatewayRequestOptions = new DedicatedGatewayRequestOptions
    {
        MaxIntegratedCacheStaleness = TimeSpan.FromMinutes(30)
    }
});
```
-> [!NOTE]
-> Currently, you can only adjust the MaxIntegratedCacheStaleness using the latest [.NET](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.17.0-preview) and [Java](https://mvnrepository.com/artifact/com.azure/azure-cosmos/4.16.0-beta.1) preview SDK's.
+### [Java](#tab/java)
+
+```java
+DedicatedGatewayRequestOptions dgOptions = new DedicatedGatewayRequestOptions()
+ .setMaxIntegratedCacheStaleness(Duration.ofMinutes(30));
+CosmosQueryRequestOptions queryOptions = new CosmosQueryRequestOptions()
+ .setDedicatedGatewayRequestOptions(dgOptions);
+
+CosmosPagedFlux<MyClass> pagedFluxResponse = container.queryItems(
+ "SELECT * FROM c", queryOptions, MyClass.class);
+```
+
+### [Node.js](#tab/nodejs)
+
+```javascript
+  const queryRequestOptions = {
+    maxIntegratedCacheStalenessInMs: 1800000
+  };
+  const querySpec = {
+    query: "SELECT * from c"
+  };
+  const { resources: items } = await container.items
+    .query(querySpec, queryRequestOptions)
+    .fetchAll();
+```
+
+### [Python](#tab/python)
+
+```python
+query = "SELECT * FROM c"
+container.query_items(
+ query=query,
+ max_integrated_cache_staleness_in_ms=1800000
+)
+```
+++

## Verify cache hits
-Finally, you can restart your application and verify integrated cache hits for repeated point reads or queries. Once youΓÇÖve modified your `CosmosClient` to use the dedicated gateway endpoint, all requests will be routed through the dedicated gateway.
+Finally, you can restart your application and verify integrated cache hits for repeated point reads or queries by seeing if the request charge is 0. Once you've modified your `CosmosClient` to use the dedicated gateway endpoint, all requests will be routed through the dedicated gateway.
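For instance, with the .NET SDK the charge is exposed on the response object; a minimal sketch, assuming an initialized `Container` named `container` and placeholder item values:

```csharp
using System;
using Microsoft.Azure.Cosmos;

// Sketch only; 'container', the item id, and the partition key are placeholders.
ItemResponse<dynamic> response = await container.ReadItemAsync<dynamic>(
    "item-id", new PartitionKey("partition-key-value"));

// A repeated read served from the integrated cache reports a charge of 0 RUs.
Console.WriteLine($"Request charge: {response.RequestCharge}");
```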
For a read request (point read or query) to utilize the integrated cache, **all** of the following criteria must be true:

- Your client connects to the dedicated gateway endpoint
-- Your client uses gateway mode (Python and Node.js SDK's always use gateway mode)
+- Your client uses gateway mode (Python and Node.js SDKs always use gateway mode)
- The consistency for the request must be set to session or eventual

> [!NOTE]
cosmos-db How To Configure Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-configure-private-endpoints.md
Use the following steps to create a private endpoint for an existing Azure Cosmo
| Subscription | Select your subscription. |
| Resource type | Select **Microsoft.AzureCosmosDB/databaseAccounts**. |
| Resource | Select your Azure Cosmos account. |
- |Target sub-resource |Select the Azure Cosmos DB API type that you want to map. This defaults to only one choice for the SQL, MongoDB, and Cassandra APIs. For the Gremlin and Table APIs, you can also choose **Sql** because these APIs are interoperable with the SQL API. |
+ |Target sub-resource |Select the Azure Cosmos DB API type that you want to map. This defaults to only one choice for the SQL, MongoDB, and Cassandra APIs. For the Gremlin and Table APIs, you can also choose **Sql** because these APIs are interoperable with the SQL API. If you have a [dedicated gateway](./dedicated-gateway.md) provisioned for a SQL API account, you will also see an option for **SqlDedicated**. |
|||

1. Select **Next: Configuration**.
Use the following steps to create a private endpoint for an existing Azure Cosmo
| Virtual network | Select your virtual network. |
| Subnet | Select your subnet. |
|**Private DNS Integration**||
- |Integrate with private DNS zone |Select **Yes**. <br><br/> To connect privately with your private endpoint, you need a DNS record. We recommend that you integrate your private endpoint with a private DNS zone. You can also use your own DNS servers or create DNS records by using the host files on your virtual machines. <br><br/> When you select yes for this option, a private DNS zone group is also created. DNS zone group is a link between the private DNS zone and the private endpoint. This link helps you to auto update the private DNS Zone when there is an update to the private endpoint. For example, when you add or remove regions,the private DNS zone is automatically updated. |
+ |Integrate with private DNS zone |Select **Yes**. <br><br/> To connect privately with your private endpoint, you need a DNS record. We recommend that you integrate your private endpoint with a private DNS zone. You can also use your own DNS servers or create DNS records by using the host files on your virtual machines. <br><br/> When you select yes for this option, a private DNS zone group is also created. DNS zone group is a link between the private DNS zone and the private endpoint. This link helps you to auto update the private DNS Zone when there is an update to the private endpoint. For example, when you add or remove regions, the private DNS zone is automatically updated. |
|Private DNS Zone |Select **privatelink.documents.azure.com**. <br><br/> The private DNS zone is determined automatically. You can't change it by using the Azure portal.|
|||
When you have approved Private Link for an Azure Cosmos account, in the Azure po
## <a id="private-zone-name-mapping"></a>API types and private zone names
-The following table shows the mapping between different Azure Cosmos account API types, supported sub-resources, and the corresponding private zone names. You can also access the Gremlin and Table API accounts through the SQL API, so there are two entries for these APIs.
+The following table shows the mapping between different Azure Cosmos account API types, supported sub-resources, and the corresponding private zone names. You can also access the Gremlin and Table API accounts through the SQL API, so there are two entries for these APIs. There is also an extra entry for the SQL API for accounts using the [dedicated gateway](./dedicated-gateway.md).
|Azure Cosmos account API type |Supported sub-resources (or group IDs) |Private zone name |
| --- | --- | --- |
|Sql | Sql | privatelink.documents.azure.com |
+|Sql | SqlDedicated | privatelink.sqlx.cosmos.azure.com |
|Cassandra | Cassandra | privatelink.cassandra.cosmos.azure.com |
|Mongo | MongoDB | privatelink.mongo.cosmos.azure.com |
|Gremlin | Gremlin | privatelink.gremlin.cosmos.azure.com |
$ResourceGroupName = "myResourceGroup"
# Name of the Azure Cosmos account
$CosmosDbAccountName = "mycosmosaccount"
-# API type of the Azure Cosmos account: Sql, MongoDB, Cassandra, Gremlin, or Table
-$CosmosDbApiType = "Sql"
+# Sub-resource type for the Azure Cosmos account: Sql, SqlDedicated, MongoDB, Cassandra, Gremlin, or Table
+$CosmosDbSubResourceType = "Sql"
# Name of the existing virtual network
$VNetName = "myVnet"
# Name of the target subnet in the virtual network
$Location = "westcentralus"
$cosmosDbResourceId = "/subscriptions/$($SubscriptionId)/resourceGroups/$($ResourceGroupName)/providers/Microsoft.DocumentDB/databaseAccounts/$($CosmosDbAccountName)"
-$privateEndpointConnection = New-AzPrivateLinkServiceConnection -Name "myConnectionPS" -PrivateLinkServiceId $cosmosDbResourceId -GroupId $CosmosDbApiType
+$privateEndpointConnection = New-AzPrivateLinkServiceConnection -Name "myConnectionPS" -PrivateLinkServiceId $cosmosDbResourceId -GroupId $CosmosDbSubResourceType
$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName $ResourceGroupName -Name $VNetName
SubscriptionId="<your Azure subscription ID>"
# Name of the existing Azure Cosmos account
CosmosDbAccountName="mycosmosaccount"
-# API type of your Azure Cosmos account: Sql, MongoDB, Cassandra, Gremlin, or Table
-CosmosDbApiType="Sql"
+# API type of your Azure Cosmos account: Sql, SqlDedicated, MongoDB, Cassandra, Gremlin, or Table
+CosmosDbSubResourceType="Sql"
# Name of the virtual network to create
VNetName="myVnet"
az network private-endpoint create \
    --vnet-name $VNetName \
    --subnet $SubnetName \
    --private-connection-resource-id "/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.DocumentDB/databaseAccounts/$CosmosDbAccountName" \
- --group-ids $CosmosDbApiType \
+ --group-ids $CosmosDbSubResourceType \
    --connection-name $PrivateConnectionName
```
$SubscriptionId = "<your Azure subscription ID>"
$ResourceGroupName = "myResourceGroup"
# Name of the Azure Cosmos account
$CosmosDbAccountName = "mycosmosaccount"
-# API type of the Azure Cosmos account. It can be one of the following: "Sql", "MongoDB", "Cassandra", "Gremlin", "Table"
-$CosmosDbApiType = "Sql"
+# API type of the Azure Cosmos account. It can be one of the following: "Sql", "SqlDedicated", "MongoDB", "Cassandra", "Gremlin", "Table"
+$CosmosDbSubResourceType = "Sql"
# Name of the existing virtual network
$VNetName = "myVnet"
# Name of the target subnet in the virtual network
$deploymentOutput = New-AzResourceGroupDeployment -Name "PrivateCosmosDbEndpoint
    -TemplateParameterFile $PrivateEndpointParametersFilePath `
    -SubnetId $SubnetResourceId `
    -ResourceId $CosmosDbResourceId `
- -GroupId $CosmosDbApiType `
+ -GroupId $CosmosDbSubResourceType `
    -PrivateEndpointName $PrivateEndpointName

$deploymentOutput
```
-In the PowerShell script, the `GroupId` variable can contain only one value. That value is the API type of the account. Allowed values are: `Sql`, `MongoDB`, `Cassandra`, `Gremlin`, and `Table`. Some Azure Cosmos account types are accessible through multiple APIs. For example:
+In the PowerShell script, the `GroupId` variable can contain only one value. That value is the API type of the account. Allowed values are: `Sql`, `SqlDedicated`, `MongoDB`, `Cassandra`, `Gremlin`, and `Table`. Some Azure Cosmos account types are accessible through multiple APIs. For example:
+* A SQL API account configured to use the [Dedicated Gateway](./dedicated-gateway.md) has an added `SqlDedicated` option.
* A Gremlin API account can be accessed from both Gremlin and SQL API accounts.
* A Table API account can be accessed from both Table and SQL API accounts.
-For those accounts, you must create one private endpoint for each API type. The corresponding API type is specified in the `GroupId` array.
+For those accounts, you must create one private endpoint for each API type. If you are creating a private endpoint for `SqlDedicated`, you only need to add a second endpoint for `Sql` if you want to also connect to your account using the standard gateway. The corresponding API type is specified in the `GroupId` array.
After the template is deployed successfully, you can see an output similar to what the following image shows. The `provisioningState` value is `Succeeded` if the private endpoints are set up correctly.
$SubscriptionId = "<your Azure subscription ID>"
$ResourceGroupName = "myResourceGroup" # Name of the Azure Cosmos account $CosmosDbAccountName = "mycosmosaccount"
-# API type of the Azure Cosmos account. It can be one of the following: "Sql", "MongoDB", "Cassandra", "Gremlin", "Table"
-$CosmosDbApiType = "Sql"
+# API type of the Azure Cosmos account. It can be one of the following: "Sql", "SqlDedicated", "MongoDB", "Cassandra", "Gremlin", "Table"
+$CosmosDbSubResourceType = "Sql"
# Name of the existing virtual network
$VNetName = "myVnet"
# Name of the target subnet in the virtual network
$deploymentOutput = New-AzResourceGroupDeployment -Name "PrivateCosmosDbEndpoint
    -TemplateParameterFile $PrivateEndpointParametersFilePath `
    -SubnetId $SubnetResourceId `
    -ResourceId $CosmosDbResourceId `
- -GroupId $CosmosDbApiType `
+ -GroupId $CosmosDbSubResourceType `
    -PrivateEndpointName $PrivateEndpointName

$deploymentOutput
cosmos-db Integrated Cache Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/integrated-cache-faq.md
Previously updated : 09/20/2021 Last updated : 08/29/2022
# Azure Cosmos DB integrated cache frequently asked questions

[!INCLUDE[appliesto-sql-api](includes/appliesto-sql-api.md)]
-The Azure Cosmos DB integrated cache is an in-memory cache that is built-in to Azure Cosmos DB. This article answers commonly asked questions about the Azure Cosmos DB integrated cache.
+The Azure Cosmos DB integrated cache is an in-memory cache that is built into Azure Cosmos DB. This article answers commonly asked questions about the Azure Cosmos DB integrated cache.
## Frequently asked questions
In general, requests routed by the dedicated gateway will have a slightly lower
### What kind of latency should I expect from the integrated cache?
-A request served by the integrated cache is faster because the cached data is stored in-memory on the dedicated gateway, rather than on the backend. For cached point reads, you should expect latency of 2-4 ms.
+A request served by the integrated cache is fast because the cached data is stored in-memory on the dedicated gateway, rather than on the backend.
-For cached queries, latency depends on the query. The query cache works by caching the query engineΓÇÖs response for a particular query. This response is then sent back client-side to the SDK for processing. For simple queries, minimal work in the SDK is required and latencies of 2-4 ms are typical. However, more complex queries with `GROUP BY` or `DISTINCT` require more processing in the SDK so latency may be higher, even with the query cache.
+For cached point reads, you should expect a median latency of 2-4 ms. For cached queries, latency depends on the query. The query cache works by caching the query engineΓÇÖs response for a particular query. This response is then sent back client-side to the SDK for processing. For simple queries, minimal work in the SDK is required and median latencies of 2-4 ms are typical. More complex queries with `GROUP BY` or `DISTINCT` require more processing in the SDK so latency may be higher, even with the query cache.
-If you were previously connecting to Azure Cosmos DB with direct mode and switch to connecting with the dedicated gateway, you may observe a slight latency increase for some requests. Using gateway mode requires a request to be sent to the gateway (in this case the dedicated gateway) and then routed appropriately to the backend. Direct mode, as the name suggests, allows the client to communicate directly with the backend, removing an extra hop.
+If you were previously connecting to Azure Cosmos DB with direct mode and switch to connecting with the dedicated gateway, you may observe a slight latency increase for some requests. Using gateway mode requires a request to be sent to the gateway (in this case the dedicated gateway) and then routed appropriately to the backend. Direct mode, as the name suggests, allows the client to communicate directly with the backend, removing an extra hop. There is no latency SLA for requests using the dedicated gateway.
If your app previously used direct mode, the latency advantages of the integrated cache will be significant in only the following scenarios:
If your app previously used gateway mode with the standard gateway, the integrat
### Does the Azure Cosmos DB availability SLA extend to the dedicated gateway and integrated cache?
-We will have an availability SLA/SLO on the dedicated gateway (and therefore the integrated cache) once the feature is generally available. For scenarios that require high availability, you should provision 3x the number of dedicated gateway instances needed. For example, if one dedicated gateway node is needed in production, you should provision two additional dedicated gateway nodes to account for possible downtime or outages.
+For scenarios that require high availability, and in order to be covered by the Azure Cosmos DB availability SLA, you should provision at least three dedicated gateway nodes. For example, if one dedicated gateway node is needed in production, you should provision two additional nodes to account for possible downtime, outages, and upgrades. If only one dedicated gateway node is provisioned, you will temporarily lose availability in these scenarios. Additionally, [ensure your dedicated gateway has enough nodes](./integrated-cache.md#i-want-to-understand-if-i-need-to-add-more-dedicated-gateway-nodes) to serve your workload.
### The integrated cache is only available for SQL (Core) API right now. Are you planning on releasing it for other APIs as well?
-Expanding the integrated cache beyond SQL API is planned on the long-term roadmap but beyond the initial public preview of the integrated cache.
+Expanding the integrated cache beyond SQL API is planned on the long-term roadmap but is beyond the initial scope of the integrated cache.
### What consistency does the integrated cache support?
The integrated cache supports both session and eventual consistency. You can als
- [Configure the integrated cache](how-to-configure-integrated-cache.md)
- [Dedicated gateway](dedicated-gateway.md)
- Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- - If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
+ - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
- If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Integrated Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/integrated-cache.md
Previously updated : 09/28/2021 Last updated : 08/29/2022
-# Azure Cosmos DB integrated cache - Overview (Preview)
+# Azure Cosmos DB integrated cache - Overview
[!INCLUDE[appliesto-sql-api](includes/appliesto-sql-api.md)]

The Azure Cosmos DB integrated cache is an in-memory cache that helps you ensure manageable costs and low latency as your request volume grows. The integrated cache is easy to set up and you don't need to spend time writing custom code for cache invalidation or managing backend infrastructure. Your integrated cache uses a [dedicated gateway](dedicated-gateway.md) within your Azure Cosmos DB account. The integrated cache is the first of many Azure Cosmos DB features that will utilize a dedicated gateway for improved performance. You can choose from three possible dedicated gateway sizes based on the number of cores and memory needed for your workload.
cosmoscachefeedback@microsoft.com
The main goal of the integrated cache is to reduce costs for read-heavy workloads. Low latency, while helpful, is not the main benefit of the integrated cache because Azure Cosmos DB is already fast without caching.
-Point reads and queries that hit the integrated cache won't use any RUs. In other words, any cache hits will have an RU charge of 0. Cache hits will have a much lower per-operation cost than reads from the backend database.
+Point reads and queries that hit the integrated cache will have an RU charge of 0. Cache hits will have a much lower per-operation cost than reads from the backend database.
Workloads that fit the following characteristics should evaluate if the integrated cache will help lower costs:
The query cache can be used to cache queries. The query cache transforms a query
### Working with the query cache
-You don't need special code when working with the query cache, even if your queries have multiple pages of results. The best practices and code for query pagination are the same, whether your query hits the integrated cache or is executed on the backend query engine.
+You don't need special code when working with the query cache, even if your queries have multiple pages of results. The best practices and code for query pagination are the same whether your query hits the integrated cache or is executed on the backend query engine.
-The query cache will automatically cache query continuation tokens, where applicable. If you have a query with multiple pages of results, any pages that are stored in the integrated cache will have an RU charge of 0. If your subsequent pages of query results require backend execution, they'll have a continuation token from the previous page so they can avoid duplicating previous work.
+The query cache will automatically cache query continuation tokens where applicable. If you have a query with multiple pages of results, any pages that are stored in the integrated cache will have an RU charge of 0. If your subsequent pages of query results require backend execution, they'll have a continuation token from the previous page so they can avoid duplicating previous work.
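As an illustration (an editorial sketch, not from the source article), the standard .NET pagination loop needs no cache-specific changes; this assumes an initialized `Container` named `container`:

```csharp
using System;
using Microsoft.Azure.Cosmos;

// Sketch only; assumes an initialized Container named 'container'.
FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(
    new QueryDefinition("SELECT * FROM c"));

while (iterator.HasMoreResults)
{
    FeedResponse<dynamic> page = await iterator.ReadNextAsync();
    // Pages served from the integrated cache report a request charge of 0.
    Console.WriteLine($"Page request charge: {page.RequestCharge}");
}
```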
> [!NOTE]
-> Integrated cache instances within different dedicated gateway nodes have independent caches from one another. If data is cached within one node, it is not necessarily cached in the others.
+> Integrated cache instances within different dedicated gateway nodes have independent caches from one another. If data is cached within one node, it is not necessarily cached in the others. Multiple pages of the same query are not guaranteed to be routed to the same dedicated gateway node.
## Integrated cache consistency
-The integrated cache supports both session and eventual [consistency](consistency-levels.md) only. If a read has consistent prefix, bounded staleness, or strong consistency, it will always bypass the integrated cache.
+The integrated cache supports read requests with session and eventual [consistency](consistency-levels.md) only. If a read has consistent prefix, bounded staleness, or strong consistency, it will always bypass the integrated cache and be served from the backend.
The easiest way to configure either session or eventual consistency for all reads is to [set it at the account-level](consistency-levels.md#configure-the-default-consistency-level). However, if you would only like some of your reads to have a specific consistency, you can also configure consistency at the [request-level](how-to-manage-consistency.md#override-the-default-consistency-level).
+> [!NOTE]
+> Write requests with other consistencies will still populate the cache, but in order to read from the cache the request must have either session or eventual consistency.
+
### Session consistency

[Session consistency](consistency-levels.md#session-consistency) is the most widely used consistency level for both single-region and globally distributed Azure Cosmos DB accounts. When using session consistency, single client sessions can read their own writes. When using the integrated cache, clients outside of the session performing writes will see eventual consistency.
It's important to understand that the `MaxIntegratedCacheStaleness`, when config
This is an improvement from how most caches work and allows the following additional customization:

- You can set different staleness requirements for each point read or query
-- Different clients, even if they run the same point read or query, can configure different `MaxIntegratedCacheStaleness` values.
-- If you wanted to modify read consistency when using cached data, changing `MaxIntegratedCacheStaleness` will have an immediate effect on read consistency.
+- Different clients, even if they run the same point read or query, can configure different `MaxIntegratedCacheStaleness` values
+- If you wanted to modify read consistency when using cached data, changing `MaxIntegratedCacheStaleness` will have an immediate effect on read consistency
> [!NOTE]
> When not explicitly configured, the `MaxIntegratedCacheStaleness` defaults to 5 minutes.
To better understand the `MaxIntegratedCacheStaleness` parameter, consider the f
| t = 40 sec | Run Query B with MaxIntegratedCacheStaleness = 60 seconds | Return results from integrated cache (0 RU charge) |
| t = 50 sec | Run Query B with MaxIntegratedCacheStaleness = 20 seconds | Return results from backend database (normal RU charges) and refresh cache |
-> [!NOTE]
-> Customizing `MaxIntegratedCacheStaleness` is only supported in the latest .NET and Java preview SDK's.
-
[Learn to configure the `MaxIntegratedCacheStaleness`.](how-to-configure-integrated-cache.md#adjust-maxintegratedcachestaleness)

## Metrics

When using the integrated cache, it is helpful to monitor some key metrics. The integrated cache metrics include:

-- `DedicatedGatewayAverageCpuUsage` - Average CPU usage across dedicated gateway nodes.
-- `DedicatedGatewayMaxCpuUsage` - Maximum CPU usage across dedicated gateway nodes.
-- `DedicatedGatewayAverageMemoryUsage` - Average memory usage across dedicated gateway nodes, which are used for both routing requests and caching data.
-- `DedicatedGatewayRequests` - Total number of dedicated gateway requests across all dedicated gateway instances.
-- `IntegratedCacheEvictedEntriesSize` – The average amount of data evicted due to LRU from the integrated cache across dedicated gateway nodes. This value does not include data that expired due to exceeding the `MaxIntegratedCacheStaleness` time.
-- `IntegratedCacheItemExpirationCount` - The number of items that are evicted from the integrated cache due to cached point reads exceeding the `MaxIntegratedCacheStaleness` time. This value is an average of integrated cache instances across all dedicated gateway nodes.
-- `IntegratedCacheQueryExpirationCount` - The number of queries that are evicted from the integrated cache due to cached queries exceeding the `MaxIntegratedCacheStaleness` time. This value is an average of integrated cache instances across all dedicated gateway nodes.
+- `DedicatedGatewayCPUUsage` - CPU usage with Avg, Max, or Min Aggregation types for data across all dedicated gateway nodes.
+- `DedicatedGatewayAverageCPUUsage` - (Deprecated) Average CPU usage across all dedicated gateway nodes.
+- `DedicatedGatewayMaximumCPUUsage` - (Deprecated) Maximum CPU usage across all dedicated gateway nodes.
+- `DedicatedGatewayMemoryUsage` - Memory usage with Avg, Max, or Min Aggregation types for data across all dedicated gateway nodes.
+- `DedicatedGatewayAverageMemoryUsage` - (Deprecated) Average memory usage across all dedicated gateway nodes.
+- `DedicatedGatewayRequests` - Total number of dedicated gateway requests across all dedicated gateway nodes.
+- `IntegratedCacheEvictedEntriesSize` – The average amount of data evicted from the integrated cache due to LRU across all dedicated gateway nodes. This value does not include data that expired due to exceeding the `MaxIntegratedCacheStaleness` time.
+- `IntegratedCacheItemExpirationCount` - The average number of items that are evicted from the integrated cache due to cached point reads exceeding the `MaxIntegratedCacheStaleness` time across all dedicated gateway nodes.
+- `IntegratedCacheQueryExpirationCount` - The average number of queries that are evicted from the integrated cache due to cached queries exceeding the `MaxIntegratedCacheStaleness` time across all dedicated gateway nodes.
- `IntegratedCacheItemHitRate` – The proportion of point reads that used the integrated cache (out of all point reads routed through the dedicated gateway with session or eventual consistency). This value is an average of integrated cache instances across all dedicated gateway nodes.
- `IntegratedCacheQueryHitRate` – The proportion of queries that used the integrated cache (out of all queries routed through the dedicated gateway with session or eventual consistency). This value is an average of integrated cache instances across all dedicated gateway nodes.

All existing metrics are available, by default, from the **Metrics** blade (not Metrics classic):
- :::image type="content" source="./media/integrated-cache/integrated-cache-metrics.png" alt-text="An image that shows the location of integrated cache metrics" border="false":::
+ :::image type="content" source="./media/integrated-cache/integrated-cache-metrics.png" alt-text="Screenshot of the Azure Portal that shows the location of integrated cache metrics." border="false":::
-Metrics are either an average, maximum, or sum across all dedicated gateway nodes. For example, if you provision a dedicated gateway cluster with five nodes, the metrics reflect the aggregated value across all five nodes. It isn't possible to determine the metric values for each individual nodes.
+Metrics are either an average, maximum, or sum across all dedicated gateway nodes. For example, if you provision a dedicated gateway cluster with five nodes, the metrics reflect the aggregated value across all five nodes. It isn't possible to determine the metric values for each individual node.
## Troubleshooting common issues
If most data is evicted from the cache due to exceeding the `MaxIntegratedCacheS
### I want to understand if I need to add more dedicated gateway nodes
-In some cases, if latency is unexpectedly high, you may need more dedicated gateway nodes rather than bigger nodes. Check the `DedicatedGatewayMaxCpuUsage` and `DedicatedGatewayAverageMemoryUsage` to determine if adding more dedicated gateway nodes would reduce latency. It's good to keep in mind that since all instances of the integrated cache are independent from one another, adding more dedicated gateway nodes won't reduce the `IntegratedCacheEvictedEntriesSize`. Adding more nodes will improve the request volume that your dedicated gateway cluster can handle, though.
+In some cases, if latency is unexpectedly high, you may need more dedicated gateway nodes rather than bigger nodes. Check the `DedicatedGatewayCPUUsage` and `DedicatedGatewayMemoryUsage` to determine if adding more dedicated gateway nodes would reduce latency. It's good to keep in mind that since all instances of the integrated cache are independent from one another, adding more dedicated gateway nodes won't reduce the `IntegratedCacheEvictedEntriesSize`. Adding more nodes will improve the request volume that your dedicated gateway cluster can handle, though.
## Next steps
In some cases, if latency is unexpectedly high, you may need more dedicated gate
- [Configure the integrated cache](how-to-configure-integrated-cache.md)
- [Dedicated gateway](dedicated-gateway.md)
- Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- - If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
+ - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
- If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Change Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/change-log.md
Last updated 06/22/2022 --++

# Change log for Azure Cosmos DB API for MongoDB
cosmos-db How To Use Stored Procedures Triggers Udfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-use-stored-procedures-triggers-udfs.md
const container = client.database("myDatabase").container("myContainer");
const triggerId = "trgPreValidateToDoItemTimestamp";
await container.items.create({
    category: "Personal",
- name : "Groceries",
- description : "Pick up strawberries",
- isComplete : false
+ name: "Groceries",
+ description: "Pick up strawberries",
+ isComplete: false
}, {preTriggerInclude: [triggerId]});
```
cosmos-db Modeling Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/modeling-data.md
Just as there's no single way to represent a piece of data on a screen, there's
## Next steps
-* To learn more about Azure Cosmos DB, refer to the service's [documentation](https://azure.microsoft.com/documentation/services/cosmos-db/) page.
+* To learn more about Azure Cosmos DB, refer to the service's [documentation](/azure/cosmos-db/) page.
* To understand how to shard your data across multiple partitions, refer to [Partitioning Data in Azure Cosmos DB](../partitioning-overview.md).
cosmos-db Powerbi Visualize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/powerbi-visualize.md
To build a Power BI report/dashboard:
## Next steps

* To learn more about Power BI, see [Get started with Power BI](https://powerbi.microsoft.com/documentation/powerbi-service-get-started/).
-* To learn more about Azure Cosmos DB, see the [Azure Cosmos DB documentation landing page](https://azure.microsoft.com/documentation/services/cosmos-db/).
+* To learn more about Azure Cosmos DB, see the [Azure Cosmos DB documentation landing page](/azure/cosmos-db/).
cosmos-db Sql Api Java Sdk Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-java-sdk-samples.md
The Query Samples files for [sync](https://github.com/Azure-Samples/azure-cosmos
<!-- | Query with ORDER BY for partitioned collections | CosmosContainer.queryItems <br> CosmosAsyncContainer.queryItems | -->

## Change feed examples
-The [Change Feed Processor Sample](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/jav) and [Change feed processor](https://docs.microsoft.com/azure/cosmos-db/sql/change-feed-processor?tabs=java).
+The [Change Feed Processor Sample](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/jav) and [Change feed processor](/azure/cosmos-db/sql/change-feed-processor?tabs=java).
| Task | API reference |
| --- | --- |
The User Management Sample file shows how to do the following tasks:
Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.

* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Sql Query Index Of https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-index-of.md
Previously updated : 09/13/2019 Last updated : 08/30/2022
+
# INDEX_OF (Azure Cosmos DB)
+
[!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
- Returns the starting position of the first occurrence of the second string expression within the first specified string expression, or -1 if the string is not found.
-
+Returns the starting position of the first occurrence of the second string expression within the first specified string expression, or `-1` if the string isn't found.
+ ## Syntax
-
+ ```sql
-INDEX_OF(<str_expr1>, <str_expr2> [, <numeric_expr>])
-```
-
+INDEX_OF(<str_expr1>, <str_expr2> [, <numeric_expr>])
+```
+ ## Arguments
-
-*str_expr1*
- Is the string expression to be searched.
-
-*str_expr2*
- Is the string expression to search for.
+
+*str_expr1*
+ Is the string expression to be searched.
+
+*str_expr2*
+ Is the string expression to search for.
*numeric_expr*
- Optional numeric expression that sets the position the search will start. The first position in *str_expr1* is 0.
-
+ Optional numeric expression that sets the position the search will start. The first position in *str_expr1* is 0.
+ ## Return types
-
- Returns a numeric expression.
-
+
+Returns a numeric expression.
+ ## Examples
-
- The following example returns the index of various substrings inside "abc".
-
+
+The following example returns the index of various substrings inside "abc".
+ ```sql
-SELECT INDEX_OF("abc", "ab") AS i1, INDEX_OF("abc", "b") AS i2, INDEX_OF("abc", "c") AS i3
-```
-
- Here is the result set.
-
+SELECT
+ INDEX_OF("abc", "ab") AS index_of_prefix,
+ INDEX_OF("abc", "b") AS index_of_middle,
+ INDEX_OF("abc", "c") AS index_of_last,
+ INDEX_OF("abc", "d") AS index_of_missing
+```
+
+Here's the result set.
+ ```json
-[{"i1": 0, "i2": 1, "i3": -1}]
-```
+[
+ {
+ "index_of_prefix": 0,
+ "index_of_middle": 1,
+ "index_of_last": 2,
+ "index_of_missing": -1
+ }
+]
+```
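As an additional illustration (not part of the original example), the optional third argument sets the position where the search begins:

```sql
SELECT
    INDEX_OF("abcabc", "a") AS first_match,
    INDEX_OF("abcabc", "a", 1) AS match_from_offset
```

Starting the search at position 1 skips the match at index 0, so this returns `[{"first_match": 0, "match_from_offset": 3}]`.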
## Next steps
cosmos-db How To Use C Plus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-c-plus.md
Follow these links to learn more about Azure Storage and the Table API in Azure
* [Introduction to the Table API](introduction.md)
* [List Azure Storage resources in C++](../../storage/common/storage-c-plus-plus-enumeration.md)
* [Storage Client Library for C++ reference](https://azure.github.io/azure-storage-cpp)
-* [Azure Storage documentation](https://azure.microsoft.com/documentation/services/storage/)
+* [Azure Storage documentation](/azure/storage/)
cost-management-billing Troubleshoot Declined Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/troubleshoot-declined-card.md
Previously updated : 04/22/2022 Last updated : 08/30/2022
When you choose a card, Azure displays the card options that are valid in the co
## You're using a virtual or prepaid card
-Prepaid and virtual cards aren't accepted as payment for Azure subscriptions.
+Prepaid and virtual cards are not accepted as payment for Azure subscriptions.
## Your credit information is inaccurate or incomplete
cost-management-billing Prepay App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepay-app-service.md
You can buy Isolated Stamp reserved capacity in the [Azure portal](https://porta
- **Management group** - Applies the reservation discount to the matching resource in the list of subscriptions that are a part of both the management group and billing scope.
1. Select a **Region** to choose an Azure region that's covered by the reserved capacity and add the reservation to the cart.
1. Select an Isolated Plan type and then select **Select**.
- ![Example ](./media/prepay-app-service/app-service-isolated-stamp-select.png)
+ ![Example](./media/prepay-app-service/app-service-isolated-stamp-select.png)
1. Enter the quantity of App Service Isolated stamps to reserve. For example, a quantity of three would give you three reserved stamps per region. Select **Next: Review + Buy**.
1. Review and select **Buy now**.
cost-management-billing Reservation Renew https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-renew.md
Previously updated : 06/17/2022 Last updated : 08/29/2022
Emails are sent to different people depending on your purchase method:
- Individual subscription customers with pay-as-you-go rates - Emails are sent to users who are set up as account administrators.
- Cloud Solution Provider customers - Emails are sent to the partner notification contact. This notification isn't currently supported for Microsoft Customer Agreement subscriptions (CSP Azure Plan subscription).
+Renewal notifications are not sent to any Microsoft Customer Agreement (Azure Plan) users.
+
## Next steps

- To learn more about Azure Reservations, see [What are Azure Reservations?](save-compute-costs-reservations.md)
data-factory Ci Cd Github Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ci-cd-github-troubleshoot-guide.md
If you are using old default parameterization template, new way to include globa
The default parameterization template should include all values from the global parameter list.

#### Resolution
-Use updated [default parameterization template.](https://docs.microsoft.com/azure/data-factory/continuous-integration-delivery-resource-manager-custom-parameters#default-parameterization-template) as one time migration to new method of including global parameters. This template references to all values in global parameter list. You also have to update the deployment task in the **release pipeline** if you are already overriding the template parameters there.
+Use the updated [default parameterization template](/azure/data-factory/continuous-integration-delivery-resource-manager-custom-parameters#default-parameterization-template) as a one-time migration to the new method of including global parameters. This template references all values in the global parameter list. You also have to update the deployment task in the **release pipeline** if you are already overriding the template parameters there.
### Error code: InvalidTemplate
data-factory Transform Data Using Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-machine-learning.md
Last updated 09/09/2021
> [!NOTE]
> Since Machine Learning Studio (classic) resources can no longer be created after 1 Dec, 2021, users are encouraged to use [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/) with the [Machine Learning Execute Pipeline activity](transform-data-machine-learning-service.md) rather than using the Batch Execution activity to execute Machine Learning Studio (classic) batches.
-[ML Studio (classic)](https://azure.microsoft.com/documentation/services/machine-learning/) enables you to build, test, and deploy predictive analytics solutions. From a high-level point of view, it is done in three steps:
+[ML Studio (classic)](/azure/machine-learning/) enables you to build, test, and deploy predictive analytics solutions. From a high-level point of view, it is done in three steps:
1. **Create a training experiment**. You do this step by using the ML Studio (classic). ML Studio (classic) is a collaborative visual development environment that you use to train and test a predictive analytics model using training data.
2. **Convert it to a predictive experiment**. Once your model has been trained with existing data and you are ready to use it to score new data, you prepare and streamline your experiment for scoring.
3. **Deploy it as a web service**. You can publish your scoring experiment as an Azure web service. You can send data to your model via this web service end point and receive result predictions from the model.

### Using Machine Learning Studio (classic) with Azure Data Factory or Synapse Analytics
-Azure Data Factory and Synapse Analytics enable you to easily create pipelines that use a published [Machine Learning Studio (classic)](https://azure.microsoft.com/documentation/services/machine-learning) web service for predictive analytics. Using the **Batch Execution Activity** in a pipeline, you can invoke Machine Learning Studio (classic) web service to make predictions on the data in batch.
+Azure Data Factory and Synapse Analytics enable you to easily create pipelines that use a published [Machine Learning Studio (classic)](/azure/machine-learning) web service for predictive analytics. Using the **Batch Execution Activity** in a pipeline, you can invoke Machine Learning Studio (classic) web service to make predictions on the data in batch.
Over time, the predictive models in the Machine Learning Studio (classic) scoring experiments need to be retrained using new input datasets. You can retrain a model from a pipeline by doing the following steps:
data-factory Data Factory Azure Ml Batch Execution Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-azure-ml-batch-execution-activity.md
Last updated 10/22/2021
> This article applies to version 1 of Data Factory. If you are using the current version of the Data Factory service, see [transform data using machine learning in Data Factory](../transform-data-using-machine-learning.md).

### Machine Learning Studio (classic)
-[ML Studio (classic)](https://azure.microsoft.com/documentation/services/machine-learning/) enables you to build, test, and deploy predictive analytics solutions. From a high-level point of view, it is done in three steps:
+[ML Studio (classic)](/azure/machine-learning/) enables you to build, test, and deploy predictive analytics solutions. From a high-level point of view, it is done in three steps:
1. **Create a training experiment**. You do this step by using ML Studio (classic). Studio (classic) is a collaborative visual development environment that you use to train and test a predictive analytics model using training data.
2. **Convert it to a predictive experiment**. Once your model has been trained with existing data and you are ready to use it to score new data, you prepare and streamline your experiment for scoring.
You can also use [Data Factory Functions](data-factory-functions-variables.md) i
[adf-build-1st-pipeline]: data-factory-build-your-first-pipeline.md
-[azure-machine-learning]: https://azure.microsoft.com/services/machine-learning/
+[azure-machine-learning]: https://azure.microsoft.com/services/machine-learning/
data-factory Data Factory Data Processing Using Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-data-processing-using-batch.md
After you process data, you can consume it with online tools such as Power BI. H
* [Azure and Power BI: Basic overview](https://powerbi.microsoft.com/documentation/powerbi-azure-and-power-bi/)

## References
-* [Azure Data Factory](https://azure.microsoft.com/documentation/services/data-factory/)
+* [Azure Data Factory](/azure/data-factory/)
* [Introduction to the Data Factory service](data-factory-introduction.md)
* [Get started with Data Factory](data-factory-build-your-first-pipeline.md)
* [Use custom activities in a Data Factory pipeline](data-factory-use-custom-activities.md)
-* [Azure Batch](https://azure.microsoft.com/documentation/services/batch/)
+* [Azure Batch](/azure/batch/)
* [Basics of Batch](/azure/azure-sql/database/sql-database-paas-overview)
* [Overview of Batch features](../../batch/batch-service-workflow-features.md)
databox-online Azure Stack Edge Gpu Deploy Iot Edge Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md
Previously updated : 06/28/2022 Last updated : 08/30/2022
Deploying the IoT Edge runtime is part of VM creation, using the *cloud-init* sc
Here are the high-level steps to deploy the VM and IoT Edge runtime:
-1. In the [Azure portal](https://portal.azure.com), go to Azure Marketplace.
- 1. Connect to the Azure Cloud Shell or a client with Azure CLI installed. For detailed steps, see [Quickstart for Bash in Azure Cloud Shell](../cloud-shell/quickstart.md).
- 1. Use steps in [Search for Azure Marketplace images](azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md#search-for-azure-marketplace-images) to search the Azure Marketplace for the following Ubuntu 20.04 LTS image:
+1. Acquire the Ubuntu VM image from Azure Marketplace. For detailed steps, follow the instructions in [Use Azure Marketplace image to create VM image for your Azure Stack Edge](azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md).
+
+ 1. In the [Azure portal](https://portal.azure.com), go to Azure Marketplace.
+ 1. Connect to the Azure Cloud Shell or a client with Azure CLI installed. For detailed steps, see [Quickstart for Bash in Azure Cloud Shell](../cloud-shell/quickstart.md).
+
+ > [!NOTE]
+ > Closing the shell session will delete all variables created during the shell session. Reopening the session will require recreating the variables.
+
+ c. Run the following command to set the subscription.
+
+ ```
+ az account set --subscription <subscription id>
+ ```
+
+2. Use steps in [Search for Azure Marketplace images](azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md#search-for-azure-marketplace-images) to search the Azure Marketplace for an Ubuntu 20.04 LTS image.
+
+ Example of an Ubuntu 20.04 LTS image:
- ```azurecli
- $urn = Canonical:0001-com-ubuntu-server-focal:20_04-lts:20.04.202007160
- ```
+ ```
+ $urn = "Canonical:0001-com-ubuntu-server-focal:20_04-lts:20.04.202007160"
+ ```
- 1. Create a new managed disk from the Marketplace image.
+3. Create a new managed disk from the Marketplace image. For detailed steps, see [Use Azure Marketplace image to create VM image for your Azure Stack Edge](azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md).
- 1. Export a VHD from the managed disk to an Azure Storage account.
-
- For detailed steps, follow the instructions in [Use Azure Marketplace image to create VM image for your Azure Stack Edge](azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md).
+4. Export a VHD from the managed disk to an Azure Storage account. For detailed steps, see [Export a VHD from the managed disk to Azure Storage](azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md#export-a-vhd-from-the-managed-disk-to-azure-storage).
-1. Follow these steps to create an Ubuntu VM using the VM image.
+5. Follow these steps to create an Ubuntu VM using the VM image.
1. Specify the *cloud-init* script on the **Advanced** tab. To create a VM, see [Deploy GPU VM via Azure portal](azure-stack-edge-gpu-deploy-gpu-virtual-machine.md?tabs=portal) or [Deploy VM via Azure portal](azure-stack-edge-gpu-deploy-virtual-machine-portal.md).

   ![Screenshot of the Advanced tab of VM configuration in the Azure portal.](media/azure-stack-edge-gpu-deploy-iot-edge-linux-vm/azure-portal-create-vm-advanced-page-2.png)
Use these steps to verify that your IoT Edge runtime is running.
![Screenshot of the IoT Edge runtime status in the Azure portal.](media/azure-stack-edge-gpu-deploy-iot-edge-linux-vm/azure-portal-iot-edge-runtime-status.png)
+ To troubleshoot your IoT Edge device configuration, see [Troubleshoot your IoT Edge device](../iot-edge/troubleshoot.md?view=iotedge-2020-11&tabs=linux&preserve-view=true).
+
+ <!-- Cannot get the link to render properly for version at https://docs.microsoft.com/azure/iot-edge/troubleshoot?view=iotedge-2020-11 -->
+
## Update the IoT Edge runtime

To update the VM, follow the instructions in [Update IoT Edge](../iot-edge/how-to-update-iot-edge.md?view=iotedge-2020-11&tabs=linux&preserve-view=true). To find the latest version of Azure IoT Edge, see [Azure IoT Edge releases](../iot-edge/how-to-update-iot-edge.md?view=iotedge-2020-11&tabs=linux&preserve-view=true).
To update the VM, follow the instructions in [Update IoT Edge](../iot-edge/how-t
To deploy and run an IoT Edge module on your Ubuntu VM, see the steps in [Deploy IoT Edge modules](../iot-edge/how-to-deploy-modules-portal.md?view=iotedge-2020-11&preserve-view=true). To deploy Nvidia's DeepStream module, see [Deploy the Nvidia DeepStream module on Ubuntu VM on Azure Stack Edge Pro with GPU](azure-stack-edge-deploy-nvidia-deepstream-module.md).
+
+To deploy NVIDIA DIGITS, see [Enable a GPU in a prefabricated NVIDIA module](/azure/iot-edge/configure-connect-verify-gpu?view=iotedge-2020-11&preserve-view=true#enable-a-gpu-in-a-prefabricated-nvidia-module).
ddos-protection Ddos Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-overview.md
The following table shows features and corresponding SKUs.
Simplified configuration immediately protects all resources on a virtual network as soon as DDoS Protection Standard is enabled. No intervention or user definition is required.

### Multi-Layered protection:
-When deployed with a web application firewall (WAF), DDoS Protection Standard protects both at the network layer (Layer 3 and 4, offered by Azure DDoS Protection Standard) and at the application layer (Layer 7, offered by a WAF). WAF offerings include Azure [Application Gateway WAF SKU](../web-application-firewall/ag/ag-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) and third-party web application firewall offerings available in the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?page=1&search=web%20application%20firewall).
+When deployed with a web application firewall (WAF), DDoS Protection Standard protects both at the network layer (Layer 3 and 4, offered by Azure DDoS Protection Standard) and at the application layer (Layer 7, offered by a WAF). WAF offerings include Azure [Application Gateway WAF SKU](../web-application-firewall/ag/ag-overview.md?toc=/azure/virtual-network/toc.json) and third-party web application firewall offerings available in the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?page=1&search=web%20application%20firewall).
### Extensive mitigation scale All L3/L4 attack vectors can be mitigated, with global capacity, to protect against the largest known DDoS attacks.
ddos-protection Ddos Protection Reference Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-reference-architectures.md
In this architecture, DDoS Protection Standard is enabled on the virtual network
### PaaS web application
-This reference architecture shows running an Azure App Service application in a single region. This architecture shows a set of proven practices for a web application that uses [Azure App Service](https://azure.microsoft.com/documentation/services/app-service/) and [Azure SQL Database](https://azure.microsoft.com/documentation/services/sql-database/).
+This reference architecture shows running an Azure App Service application in a single region. This architecture shows a set of proven practices for a web application that uses [Azure App Service](/azure/app-service/) and [Azure SQL Database](/azure/sql-database/).
A standby region is set up for failover scenarios. ![Diagram of the reference architecture for a PaaS web application](./media/ddos-best-practices/image-11.png)
This reference architecture shows configuring DDoS Protection Standard for an [A
In this architecture, traffic destined to the HDInsight cluster from the internet is routed to the public IP associated with the HDInsight gateway load balancer. The gateway load balancer then sends the traffic to the head nodes or the worker nodes directly. Because DDoS Protection Standard is enabled on the HDInsight virtual network, all public IPs in the virtual network get DDoS protection for Layer 3 and 4. This reference architecture can be combined with the N-Tier and multi-region reference architectures.
-For more information on this reference architecture, see the [Extend Azure HDInsight using an Azure Virtual Network](../hdinsight/hdinsight-plan-virtual-network-deployment.md?toc=%2fazure%2fvirtual-network%2ftoc.json)
+For more information on this reference architecture, see the [Extend Azure HDInsight using an Azure Virtual Network](../hdinsight/hdinsight-plan-virtual-network-deployment.md?toc=/azure/virtual-network/toc.json)
documentation.
defender-for-cloud Defender For Servers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-servers-introduction.md
Defender for Servers provides two plans you can choose from:
- **Licensing**: Charges Defender for Endpoint licenses per hour instead of per seat, lowering costs by protecting virtual machines only when they are in use.
- **Plan 2**
  - **Plan 1**: Includes everything in Defender for Servers Plan 1.
- - **Additional features**: All other enhanced Defender for Servers security capabilities for Windows and Linux machines running in Azure, AWS, GCP, and on-premises.
+ - **Additional features**: All other enhanced Defender for Servers security features.
## Plan features
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
You can also enable the MDE unified solution at scale through the supplied REST
This is an example request body for the PUT request to enable the MDE unified solution:
-URI: `https://management.microsoft.com/subscriptions/<subscriptionId>/providers/Microsoft.Security/settings&api-version=2022-05-01-preview`
+URI: `https://management.azure.com/subscriptions/<subscriptionId>/providers/Microsoft.Security/settings?api-version=2022-05-01-preview`
```json {
defender-for-cloud Multi Factor Authentication Enforcement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/multi-factor-authentication-enforcement.md
To see which accounts don't have MFA enabled, use the following Azure Resource G
```kusto securityresources | where type == "microsoft.security/assessments"
- | where properties.displayName == "MFA should be enabled on accounts with owner permissions on your subscription"
+ | where properties.displayName == "MFA should be enabled on accounts with owner permissions on subscriptions"
| where properties.status.code == "Unhealthy" ```
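If you prefer to run this query from PowerShell instead of the Resource Graph Explorer, here's a minimal sketch using the `Az.ResourceGraph` module:

```powershell
# Requires the Az.ResourceGraph module: Install-Module Az.ResourceGraph
$query = @"
securityresources
| where type == "microsoft.security/assessments"
| where properties.displayName == "MFA should be enabled on accounts with owner permissions on subscriptions"
| where properties.status.code == "Unhealthy"
"@

# Returns the unhealthy MFA assessments across subscriptions in scope
Search-AzGraph -Query $query
```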
event-grid Partner Events Overview For Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/partner-events-overview-for-partners.md
A verified partner is a partner organization whose identity has been validated b
Customers authorize you to create partner topics or partner destinations on their Azure subscription. The authorization is granted for a given resource group in a customer Azure subscription, and it's time bound. You must create the channel before the expiration date set by the customer. Your documentation should suggest to customers an adequate window of time for configuring your system to send or receive events and for creating the channel before the authorization expires. If you attempt to create a channel without authorization or after it has expired, the channel creation will fail and no resource will be created on the customer's Azure subscription. > [!NOTE]
-> Event Grid will start **requiring authorizations to create partner topics or partner destinations** around June 30th, 2022. You should update your documentation asking your customers to grant you the authorization before you attempt to create a channel or an event channel.
+> Event Grid started **enforcing authorization checks to create partner topics or partner destinations** around June 30th, 2022. You should update your documentation asking your customers to grant you the authorization before you attempt to create a channel or an event channel.
>[!IMPORTANT] > **A verified partner is not an authorized partner**. Even if a partner has been vetted by Microsoft, you still need to be authorized before you can create a partner topic or partner destination on the customer's Azure subscription.
event-grid Subscribe To Partner Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-partner-events.md
You must grant your consent to the partner to create partner topics in a resourc
> For a greater security stance, specify the minimum expiration time that offers the partner enough time to configure your events to flow to Event Grid and to provision your partner topic. > [!NOTE]
-> Event Grid will start requiring authorizations to create partner topics or partner destinations around June 30th, 2022. Meanwhile, requiring your (subscriber's) authorization for a partner to create resources on your Azure subscription is an **optional** feature. We encourage you to opt-in to use this feature and try to use it in non-production Azure subscriptions before it becomes a mandatory step around June 30th, 2022. To opt-in to this feature, reach out to [mailto:askgrid@microsoft.com](mailto:askgrid@microsoft.com) using the subject line **Request to enforce partner authorization on my Azure subscription(s)** and provide your Azure subscription(s) in the email.
+> Event Grid started enforcing authorization checks to create partner topics or partner destinations around June 30th, 2022.
The following example shows how to create a partner configuration resource that contains the partner authorization. You must identify the partner by providing either its **partner registration ID** or the **partner name**. Both can be obtained from your partner, but only one is required. For your convenience, the following examples leave a sample expiration time in the UTC format.
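As a rough sketch of such a request: the `default` resource name, property names, and api-version below are assumptions based on the partner configuration REST shape, so verify them against the Event Grid REST reference before use.

```powershell
# Sketch only: placeholders in angle brackets; property names and
# api-version are assumptions, not a confirmed contract.
$body = @'
{
  "properties": {
    "partnerAuthorization": {
      "authorizedPartnersList": [
        {
          "partnerRegistrationImmutableId": "<partner-registration-id>",
          "authorizationExpirationTimeInUtc": "2022-09-30T00:00:00Z"
        }
      ]
    }
  }
}
'@

Invoke-AzRestMethod -Method PUT -Payload $body `
  -Path "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventGrid/partnerConfigurations/default?api-version=2022-06-15"
```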
event-hubs Event Hubs Kafka Spark Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-kafka-spark-tutorial.md
In this tutorial, you learn how to:
Before you start this tutorial, make sure that you have: - Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/). - [Apache Spark v2.4](https://spark.apache.org/downloads.html)-- [Apache Kafka v2.0]( https://kafka.apache.org/20/documentation.html)
+- [Apache Kafka v2.0](https://kafka.apache.org/20/documentation.html)
- [Git](https://www.git-scm.com/downloads) > [!NOTE]
To learn more about Event Hubs and Event Hubs for Kafka, see the following artic
- [Explore samples on our GitHub](https://github.com/Azure/azure-event-hubs-for-kafka) - [Connect Akka Streams to an event hub](event-hubs-kafka-akka-streams-tutorial.md) - [Apache Kafka developer guide for Azure Event Hubs](apache-kafka-developer-guide.md)-
expressroute Expressroute For Cloud Solution Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-for-cloud-solution-providers.md
The choices between these two options are based on your customer's needs and y
* **Azure role-based access control (Azure RBAC)** – Azure RBAC is based on Azure Active Directory. For more information on Azure RBAC, see [here](../role-based-access-control/role-assignments-portal.md). * **Networking** – Covers the various topics of networking in Microsoft Azure.
-* **Azure Active Directory (Azure AD)** – Azure AD provides the identity management for Microsoft Azure and third-party SaaS applications. For more information about Azure AD, see [here](https://azure.microsoft.com/documentation/services/active-directory/).
+* **Azure Active Directory (Azure AD)** – Azure AD provides the identity management for Microsoft Azure and third-party SaaS applications. For more information about Azure AD, see [here](/azure/active-directory/).
## Network speeds ExpressRoute supports network speeds from 50 Mb/s to 10 Gb/s. This allows customers to purchase the amount of network bandwidth needed for their unique environment.
Additional Information can be found at the following links:
[Azure in Cloud Solution Provider program](/azure/cloud-solution-provider). [Get ready to transact as a Cloud Solution Provider](https://partner.microsoft.com/solutions/cloud-reseller-pre-launch).
-[Microsoft Cloud Solution Provider resources](https://partner.microsoft.com/solutions/cloud-reseller-resources).
+[Microsoft Cloud Solution Provider resources](https://partner.microsoft.com/solutions/cloud-reseller-resources).
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **China Mobile International** |Supported |Supported | Hong Kong, Hong Kong2, Singapore | | **China Telecom Global** |Supported |Supported | Hong Kong, Hong Kong2 | | **[China Unicom Global](https://cloudbond.chinaunicom.cn/home-en)** |Supported |Supported | Frankfurt, Hong Kong, Singapore2, Tokyo2 |
-| **[Chunghwa Telecom](https://www.cht.com.tw/en/home/cht/about-cht/products-and-services/International/Cloud-Service)** |Supported |Supported | Taipei |
+| **Chunghwa Telecom** |Supported |Supported | Taipei |
| **[Claro](https://www.usclaro.com/enterprise-mnc/connectivity/mpls/)** |Supported |Supported | Miami | | **[Cologix](https://cologix.com/connectivity/cloud/cloud-connect/microsoft-azure/)** |Supported |Supported | Chicago, Dallas, Minneapolis, Montreal, Toronto, Vancouver, Washington DC | | **[Colt](https://www.colt.net/direct-connect/azure/)** |Supported |Supported | Amsterdam, Amsterdam2, Berlin, Chicago, Dublin, Frankfurt, Geneva, Hong Kong, London, London2, Marseille, Milan, Munich, Newport, Osaka, Paris, Seoul, Silicon Valley, Singapore2, Tokyo, Tokyo2, Washington DC, Zurich |
The following table shows locations by service provider. If you want to view ava
| **[Transtelco](https://transtelco.net/enterprise-services/)** |Supported |Supported | Dallas, Queretaro(Mexico)| | **[T-Mobile/Sprint](https://www.t-mobile.com/business/solutions/networking/cloud-networking )** |Supported |Supported | Chicago, Silicon Valley, Washington DC | | **[T-Systems](https://geschaeftskunden.telekom.de/vernetzung-digitalisierung/produkt/intraselect)** |Supported |Supported | Frankfurt |
-| **[UOLDIVEO](https://www.uoldiveo.com.br/)** |Supported |Supported | Sao Paulo |
+| **UOLDIVEO** |Supported |Supported | Sao Paulo |
| **[UIH](https://www.uih.co.th/en/network-solutions/global-network/cloud-direct-for-microsoft-azure-expressroute)** | Supported | Supported | Bangkok | | **[Verizon](https://enterprise.verizon.com/products/network/application-enablement/secure-cloud-interconnect/)** |Supported |Supported | Amsterdam, Chicago, Dallas, Hong Kong SAR, London, Mumbai, Silicon Valley, Singapore, Sydney, Tokyo, Toronto, Washington DC | | **[Viasat](http://www.directcloud.viasatbusiness.com/)** | Supported | Supported | Washington DC2 |
expressroute How To Custom Route Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/how-to-custom-route-alert.md
In order to create an Automation account, you need privileges and permissions. F
### <a name="about"></a>1. Create an automation account
-Create an Automation account with run-as permissions. For instructions, see [Create an Azure Automation account](../automation/quickstarts/create-account-portal.md).
+Create an Automation account with run-as permissions. For instructions, see [Create an Azure Automation account](../automation/quickstarts/create-azure-automation-account-portal.md).
:::image type="content" source="./media/custom-route-alert-portal/create-account.png" alt-text="Add automation account" lightbox="./media/custom-route-alert-portal/create-account-expand.png":::
The final step is the workflow validation. In **Logic Apps Overview**, select **
## Next steps
-To learn more about how to customize the workflow, see [Azure Logic Apps](../logic-apps/logic-apps-overview.md).
+To learn more about how to customize the workflow, see [Azure Logic Apps](../logic-apps/logic-apps-overview.md).
expressroute Howto Routing Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/howto-routing-cli.md
This section helps you create, get, update, and delete the Microsoft peering con
> [!IMPORTANT] > Microsoft peering of ExpressRoute circuits that were configured prior to August 1, 2017 will have all service prefixes advertised through the Microsoft peering, even if route filters are not defined. Microsoft peering of ExpressRoute circuits that are configured on or after August 1, 2017 will not have any prefixes advertised until a route filter is attached to the circuit. For more information, see [Configure a route filter for Microsoft peering](how-to-routefilter-powershell.md).
->
- ### To create Microsoft peering
firewall Protect Azure Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/protect-azure-virtual-desktop.md
Azure Virtual Desktop is a desktop and app virtualization service that runs on A
[ ![Azure Virtual Desktop architecture](media/protect-windows-virtual-desktop/windows-virtual-desktop-architecture-diagram.png) ](media/protect-windows-virtual-desktop/windows-virtual-desktop-architecture-diagram.png#lightbox)
-Follow the guidelines in this article to provide additional protection for your Azure Virtual Desktop host pool using Azure Firewall.
+Follow the guidelines in this article to provide extra protection for your Azure Virtual Desktop host pool using Azure Firewall.
## Prerequisites - A deployed Azure Virtual Desktop environment and host pool.
+ - An Azure Firewall deployed with at least one Firewall Manager Policy.
+ - DNS and DNS Proxy enabled in the Firewall Policy to use [FQDN in Network Rules](../firewall/fqdn-filtering-network-rules.md).
- For more information, see [Tutorial: Create a host pool by using the Azure portal](../virtual-desktop/create-host-pools-azure-marketplace.md)
+For more information, see [Tutorial: Create a host pool by using the Azure portal](../virtual-desktop/create-host-pools-azure-marketplace.md)
To learn more about Azure Virtual Desktop environments see [Azure Virtual Desktop environment](../virtual-desktop/environment-setup.md).
To learn more about Azure Virtual Desktop environments see [Azure Virtual Deskto
The Azure virtual machines you create for Azure Virtual Desktop must have access to several Fully Qualified Domain Names (FQDNs) to function properly. Azure Firewall provides an Azure Virtual Desktop FQDN Tag to simplify this configuration. Use the following steps to allow outbound Azure Virtual Desktop platform traffic:
-You will need to create an Azure Firewall Policy and create Rule Collections for Network Rules and Applications Rules. Give the Rule Collection a priority and an allow or deny action.
+You'll need to create an Azure Firewall Policy and create Rule Collections for Network Rules and Applications Rules. Give the Rule Collection a priority and an allow or deny action.
+To identify a specific AVD host pool as the "Source" in the tables below, you can create an [IP Group](../firewall/ip-groups.md) to represent it, as shown in the sketch after this paragraph.
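For example, a minimal PowerShell sketch that creates such an IP Group (names and the address range are hypothetical):

```powershell
# Hypothetical names; use your host pool subnet's address range(s)
New-AzIpGroup -Name 'ipgroup-avd-hostpool' `
    -ResourceGroupName 'rg-avd-network' `
    -Location 'eastus' `
    -IpAddress @('10.0.1.0/24')
```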
### Create network rules
-| Name | Source type | Source | Protocol | Destination ports | Destination type | Destination |
-| | -- | - | -- | -- | - | |
-| Rule Name | IP Address | VNet or Subnet IP Address | TCP | 80 | IP Address | 169.254.169.254, 168.63.129.16 |
-| Rule Name | IP Address | VNet or Subnet IP Address | TCP | 443 | Service Tag | AzureCloud, WindowsVirtualDesktop, AzureFrontDoor.Frontend |
-| Rule Name | IP Address | VNet or Subnet IP Address | TCP, UDP | 53 | IP Address | * |
-|Rule name | IP Address | VNet or Subnet IP Address | TCP | 1688 | IP address | 20.118.99.224, 40.83.235.53 (azkms.core.windows.net)|
-|Rule name | IP Address | VNet or Subnet IP Address | TCP | 1688 | IP address | 23.102.135.246 (kms.core.windows.net)|
+Based on the Azure Virtual Desktop (AVD) [reference article](../virtual-desktop/safe-url-list.md), these are the ***mandatory*** rules to allow outbound access to the control plane and core dependent services:
+
+| Name | Source type | Source | Protocol | Destination ports | Destination type | Destination |
+| | -- | - | -- | -- | - | |
+| Rule Name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 443 | FQDN | login.microsoftonline.com |
+| Rule Name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 80 | IP Address | 169.254.169.254, 168.63.129.16 |
+| Rule Name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 443 | Service Tag | WindowsVirtualDesktop, AzureFrontDoor.Frontend, AzureMonitor |
+| Rule Name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP, UDP | 53 | IP Address | * |
+| Rule name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 1688 | IP address | 20.118.99.224, 40.83.235.53 (azkms.core.windows.net) |
+| Rule name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 1688 | IP address | 23.102.135.246 (kms.core.windows.net) |
+| Rule name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 443 | FQDN | mrsglobalsteus2prod.blob.core.windows.net |
+| Rule name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 443 | FQDN | wvdportalstorageblob.blob.core.windows.net |
+| Rule name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 80 | FQDN | oneocsp.microsoft.com |
+| Rule name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 80 | FQDN | www.microsoft.com |
> [!NOTE] > Some deployments might not need DNS rules. For example, Azure Active Directory Domain controllers forward DNS queries to Azure DNS at 168.63.129.16.
+The official Azure Virtual Desktop (AVD) documentation lists the following network rules as **optional**, depending on usage and scenario:
+
+| Name | Source type | Source | Protocol | Destination ports | Destination type | Destination |
+| -| -- | - | -- | -- | - | |
+| Rule Name | IP Address or Group | IP Group or VNet or Subnet IP Address | UDP | 123 | FQDN | time.windows.com |
+| Rule Name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 443 | FQDN | login.windows.net |
+| Rule Name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 443 | FQDN | www.msftconnecttest.com |
++ ### Create application rules
-| Name | Source type | Source | Protocol | Destination type | Destination |
-| | -- | - | - | - | - |
-| Rule Name | IP Address | VNet or Subnet IP Address | Https:443 | FQDN Tag | WindowsVirtualDesktop, WindowsUpdate, Windows Diagnostics, MicrosoftActiveProtectionService |
+The official Azure Virtual Desktop (AVD) documentation lists the following application rules as **optional**, depending on usage and scenario:
+
+| Name | Source type | Source | Protocol | Destination type | Destination |
+| | -- | --| - | - | - |
+| Rule Name | IP Address or Group | VNet or Subnet IP Address | Https:443 | FQDN Tag | WindowsUpdate, Windows Diagnostics, MicrosoftActiveProtectionService |
+| Rule Name | IP Address or Group | VNet or Subnet IP Address | Https:443 | FQDN | *.events.data.microsoft.com |
+| Rule Name | IP Address or Group | VNet or Subnet IP Address | Https:443 | FQDN | *.sfx.ms |
+| Rule Name | IP Address or Group | VNet or Subnet IP Address | Https:443 | FQDN | *.digicert.com |
+| Rule Name | IP Address or Group | VNet or Subnet IP Address | Https:443 | FQDN | *.azure-dns.com, *.azure-dns.net |
> [!IMPORTANT] > We recommend that you don't use TLS inspection with Azure Virtual Desktop. For more information, see the [proxy server guidelines](../virtual-desktop/proxy-server-support.md#dont-use-ssl-termination-on-the-proxy-server).
+## Azure Firewall Policy Sample
+All the mandatory and optional rules mentioned above can be deployed in a single Azure Firewall policy using the template published in the [Azure/RDS-Templates repo](https://github.com/Azure/RDS-Templates/tree/master/AzureFirewallPolicyForAVD).
+Before deploying into production, we highly recommend reviewing all the network and application rules defined and ensuring they align with the official Azure Virtual Desktop documentation and your security requirements.
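If you deploy that template with PowerShell, a sketch could look like the following; the template file name below is an assumption, so confirm it in the repo first.

```powershell
# The template file name is hypothetical; check the Azure/RDS-Templates
# repo's AzureFirewallPolicyForAVD folder for the actual artifact name.
New-AzResourceGroupDeployment -ResourceGroupName 'rg-avd-firewall' `
    -TemplateUri 'https://raw.githubusercontent.com/Azure/RDS-Templates/master/AzureFirewallPolicyForAVD/azuredeploy.json'
```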
+ ## Host pool outbound access to the Internet
-Depending on your organization needs, you might want to enable secure outbound internet access for your end users. If the list of allowed destinations is well-defined (for example, for [Microsoft 365 access](/microsoft-365/enterprise/microsoft-365-ip-web-service)), you can use Azure Firewall application and network rules to configure the required access. This routes end-user traffic directly to the internet for best performance. If you need to allow network connectivity for Windows 365 or Intune, see [Network requirments for Windows 365](/windows-365/requirements-network#allow-network-connectivity) and [Network endpoints for Intune](/mem/intune/fundamentals/intune-endpoints).
+Depending on your organization needs, you might want to enable secure outbound internet access for your end users. If the list of allowed destinations is well-defined (for example, for [Microsoft 365 access](/microsoft-365/enterprise/microsoft-365-ip-web-service)), you can use Azure Firewall application and network rules to configure the required access. This routes end-user traffic directly to the internet for best performance. If you need to allow network connectivity for Windows 365 or Intune, see [Network requirements for Windows 365](/windows-365/requirements-network#allow-network-connectivity) and [Network endpoints for Intune](/mem/intune/fundamentals/intune-endpoints).
If you want to filter outbound user internet traffic by using an existing on-premises secure web gateway, you can configure web browsers or other applications running on the Azure Virtual Desktop host pool with an explicit proxy configuration. For example, see [How to use Microsoft Edge command-line options to configure proxy settings](/deployedge/edge-learnmore-cmdline-options-proxy-settings). These proxy settings only influence your end-user internet access, allowing the Azure Virtual Desktop platform outbound traffic directly via Azure Firewall.
If you want to filter outbound user internet traffic by using an existing on-pre
Admins can allow or deny user access to different website categories. Add a rule to your Application Collection from your specific IP address to web categories you want to allow or deny. Review all the [web categories](web-categories.md).
-## Additional considerations
-
-You might need to configure additional firewall rules, depending on your requirements:
--- NTP server access-
- By default, virtual machines running Windows connect to `time.windows.com` over UDP port 123 for time synchronization. Create a network rule to allow this access, or for a time server that you use in your environment.
- ## Next steps - Learn more about Azure Virtual Desktop: [What is Azure Virtual Desktop?](../virtual-desktop/overview.md)+
frontdoor Front Door Wildcard Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-wildcard-domain.md
For accepting HTTPS traffic on your wildcard domain, you must enable HTTPS on th
## Adding wildcard domains
-You can add a wildcard domain under the section for front-end hosts or domains. Similar to subdomains, Azure Front Door (classic) validates that there's CNAME record mapping for your wildcard domain. This DNS mapping can be a direct CNAME record mapping like `*.contoso.com` mapped to `contoso.azurefd.net`. Or you can use afdverify temporary mapping. For example, `afdverify.contoso.com` mapped to `afdverify.contoso.azurefd.net` validates the CNAME record map for the wildcard.
+You can add a wildcard domain under the section for front-end hosts or domains. Similar to subdomains, Azure Front Door (classic) validates that there's CNAME record mapping for your wildcard domain. This DNS mapping can be a direct CNAME record mapping like `*.contoso.com` mapped to `endpoint.azurefd.net`. Or you can use afdverify temporary mapping. For example, `afdverify.contoso.com` mapped to `afdverify.endpoint.azurefd.net` validates the CNAME record map for the wildcard.
> [!NOTE] > Azure DNS supports wildcard records.
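For example, with Azure DNS, the wildcard CNAME mapping above could be created like this (zone, resource group, and endpoint names are hypothetical):

```powershell
# Maps *.contoso.com to the Front Door endpoint; names are placeholders
$record = New-AzDnsRecordConfig -Cname 'endpoint.azurefd.net'
New-AzDnsRecordSet -Name '*' -RecordType CNAME `
    -ZoneName 'contoso.com' -ResourceGroupName 'rg-dns' `
    -Ttl 3600 -DnsRecords $record
```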
governance Machine Configuration Create Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-create-definition.md
and the details about machine configuration policy effects
> configuration extension version **1.29.24** or later, > or Arc agent **1.10.0** or later, is required. >
-> Custom machine configuration policy definitions using **AuditIfNotExists** are
-> Generally Available, but definitions using **DeployIfNotExists** with guest
-> configuration are **in preview**.
+> Custom machine configuration policy definitions using either **AuditIfNotExists** or **DeployIfNotExists** are now
+> Generally Available.
Use the following steps to create your own policies that audit compliance or manage the state of Azure or Arc-enabled machines.
configuration package, in a specified path:
```powershell $PolicyConfig = @{ PolicyId = '_My GUID_'
- ContentUri = <_ContentUri output from the Publish command_>
+ ContentUri = $contenturi
DisplayName = 'My audit policy' Description = 'My audit policy'
- Path = './policies'
+ Path = './policies/auditIfNotExists.json'
Platform = 'Windows' PolicyVersion = 1.0.0 }
configuration package, in a specified path:
```powershell $PolicyConfig2 = @{ PolicyId = '_My GUID_'
- ContentUri = <_ContentUri output from the Publish command_>
+ ContentUri = $contenturi
DisplayName = 'My audit policy' Description = 'My audit policy'
- Path = './policies'
+ Path = './policies/deployIfNotExists.json'
Platform = 'Windows' PolicyVersion = 1.0.0 Mode = 'ApplyAndAutoCorrect'
$PolicyParameterInfo = @(
# ...and then passed into the `New-GuestConfigurationPolicy` cmdlet $PolicyParam = @{ PolicyId = 'My GUID'
- ContentUri = '<ContentUri output from the Publish command>'
+ ContentUri = $contenturi
DisplayName = 'Audit Windows Service.' Description = "Audit if a Windows Service isn't enabled on Windows machine."
- Path = '.\policies'
+ Path = '.\policies\auditIfNotExists.json'
Parameter = $PolicyParameterInfo PolicyVersion = 1.0.0 }
requirements are documented in the [Azure Policy Overview](./overview.md) page.
role is **Resource Policy Contributor**. ```powershell
-New-AzPolicyDefinition -Name 'mypolicydefinition' -Policy '.\policies'
+New-AzPolicyDefinition -Name 'mypolicydefinition' -Policy '.\policies\auditIfNotExists.json'
+```
+
+Or, if it's a deploy-if-not-exists (DINE) policy, use:
+
+```powershell
+New-AzPolicyDefinition -Name 'mypolicydefinition' -Policy '.\policies\deployIfNotExists.json'
``` With the policy definition created in Azure, the last step is to assign the definition. See how to assign the
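As a hedged sketch of that assignment step (names, scope, and location are placeholders), keeping in mind that **DeployIfNotExists** assignments need a managed identity to remediate:

```powershell
# Minimal sketch: assign at subscription scope. -IdentityType requires a
# recent Az.Resources; older versions use the -AssignIdentity switch instead.
$definition = Get-AzPolicyDefinition -Name 'mypolicydefinition'
New-AzPolicyAssignment -Name 'mypolicyassignment' `
    -PolicyDefinition $definition `
    -Scope '/subscriptions/<subscription-id>' `
    -Location 'eastus' `
    -IdentityType 'SystemAssigned'
```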
governance Machine Configuration Create Publish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-create-publish.md
$Context = New-AzStorageContext -ConnectionString "DefaultEndpointsProtocol=http
Next, add the configuration package to the storage account. This example uploads the zip file ./MyConfig.zip to the blob "guestconfiguration". ```powershell
-Set-AzStorageBlobContent -Container "guestconfiguration" -File ./MyConfig.zip -Blob "guestconfiguration" -Context $Context
+Set-AzStorageBlobContent -Container "guestconfiguration" -File ./MyConfig.zip -Context $Context
```
-Optionally, you can add a SAS token in the URL, this ensures that the content package will be accessed securely. The below example generates a blob SAS token with full blob permission and returns the full blob URI with the shared access signature token.
+Optionally, you can add a SAS token to the URL so that the content package is accessed securely. The following example generates a blob SAS token with read, write, and delete permissions and returns the full blob URI with the shared access signature token. In this example, the token is valid for three years.
```powershell
-$contenturi = New-AzStorageBlobSASToken -Context $Context -FullUri -Container guestconfiguration -Blob "guestconfiguration" -Permission rwd
+$StartTime = Get-Date
+$EndTime = $startTime.AddYears(3)
+$contenturi = New-AzStorageBlobSASToken -StartTime $StartTime -ExpiryTime $EndTime -Container "guestconfiguration" -Blob "MyConfig.zip" -Permission rwd -Context $Context -FullUri
``` ## Next steps
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/overview.md
-# Understand the machine configuration feature of Azure Policy
+# Understand the machine configuration feature of Azure Automanage
[!INCLUDE [Machine config rename banner](../includes/banner.md)]
hdinsight Enable Private Link On Kafka Rest Proxy Hdi Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/enable-private-link-on-kafka-rest-proxy-hdi-cluster.md
Title: Enable Private Link on an HDInsight Kafka Rest Proxy cluster
-description: Learn how to Enable Private Link on an HDInsight Kafka Rest Proxy cluster.
+ Title: Enable Private Link on an Azure HDInsight Kafka Rest Proxy cluster
+description: Learn how to Enable Private Link on an Azure HDInsight Kafka Rest Proxy cluster.
Follow these extra steps to enable private link for Kafka Rest Proxy HDI cluster
## Prerequisites
-As a prerequisite, complete the steps mentioned in [Enable Private Link on an HDInsight cluster document](./hdinsight-private-link.md), then perform the below steps.
+As a prerequisite, complete the steps mentioned in [Enable Private Link on an Azure HDInsight cluster document](./hdinsight-private-link.md), then perform the below steps.
## Create private endpoints
hdinsight Apache Hadoop Use Hive Curl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-hive-curl.md
description: Learn how to remotely submit Apache Pig jobs to Azure HDInsight usi
Previously updated : 01/06/2020 Last updated : 08/30/2022 # Run Apache Hive queries with Apache Hadoop in HDInsight using REST
For information on other ways you can work with Hadoop on HDInsight:
* [Use Apache Hive with Apache Hadoop on HDInsight](hdinsight-use-hive.md) * [Use MapReduce with Apache Hadoop on HDInsight](hdinsight-use-mapreduce.md)
-For more information on the REST API used in this document, see the [WebHCat reference](https://cwiki.apache.org/confluence/display/Hive/WebHCat+Reference) document.
+For more information on the REST API used in this document, see the [WebHCat reference](https://cwiki.apache.org/confluence/display/Hive/WebHCat+Reference) document.
hdinsight Apache Hadoop Use Hive Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-hive-powershell.md
description: Use PowerShell to run Apache Hive queries in Apache Hadoop in Azure
Previously updated : 12/24/2019 Last updated : 08/30/2022 # Run Apache Hive queries using PowerShell
hdinsight Apache Hadoop Use Mapreduce Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-mapreduce-ssh.md
description: Learn how to use SSH to run MapReduce jobs using Apache Hadoop on H
Previously updated : 01/10/2020 Last updated : 08/30/2022 # Use MapReduce with Apache Hadoop on HDInsight with SSH
hdinsight Hbase Troubleshoot Unassigned Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/hbase-troubleshoot-unassigned-regions.md
Title: Issues with region servers in Azure HDInsight
description: Issues with region servers in Azure HDInsight Previously updated : 06/30/2020 Last updated : 08/30/2022 # Issues with region servers in Azure HDInsight
hdinsight Hdinsight Apache Storm With Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-apache-storm-with-kafka.md
Last updated 08/05/2022+ #Customer intent: As a developer, I want to learn how to build a streaming pipeline that uses Storm and Kafka to process streaming data.
hdinsight Hdinsight Config For Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-config-for-vscode.md
Title: Azure HDInsight configuration settings reference
description: Introduce the configuration of Azure HDInsight extension. Previously updated : 04/07/2021 Last updated : 08/30/2022
For general information about working with settings in VS Code, refer to [User a
## Next steps - For information about Azure HDInsight for VSCode, see [Spark & Hive for Visual Studio Code Tools](/sql/big-data-cluster/spark-hive-tools-vscode).-- For a video that demonstrates using Spark & Hive for Visual Studio Code, see [Spark & Hive for Visual Studio Code](https://go.microsoft.com/fwlink/?linkid=858706).
+- For a video that demonstrates using Spark & Hive for Visual Studio Code, see [Spark & Hive for Visual Studio Code](https://go.microsoft.com/fwlink/?linkid=858706).
hdinsight Hdinsight Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes-archive.md
HDI Hive 3.1 version is upgraded to OSS Hive 3.1.2. This version has all fixes a
| Support parallel load for HastTables - Interfaces|[HIVE-25583](https://issues.apache.org/jira/browse/HIVE-25583)| | Include MultiDelimitSerDe in HiveServer2 By Default|[HIVE-20619](https://issues.apache.org/jira/browse/HIVE-20619)| | Remove glassfish.jersey and mssql-jdbc classes from jdbc-standalone jar|[HIVE-22134](https://issues.apache.org/jira/browse/HIVE-22134)|
-| Null pointer exception on running compaction against an MM table.|[HIVE-21280 ](https://issues.apache.org/jira/browse/HIVE-21280)|
+| Null pointer exception on running compaction against an MM table.|[HIVE-21280](https://issues.apache.org/jira/browse/HIVE-21280)|
| Hive query with large size via knox fails with Broken pipe Write failed|[HIVE-22231](https://issues.apache.org/jira/browse/HIVE-22231)| | Adding ability for user to set bind user|[HIVE-21009](https://issues.apache.org/jira/browse/HIVE-21009)| | Implement UDF to interpret date/timestamp using its internal representation and Gregorian-Julian hybrid calendar|[HIVE-22241](https://issues.apache.org/jira/browse/HIVE-22241)| | Beeline option to show/not show execution report|[HIVE-22204](https://issues.apache.org/jira/browse/HIVE-22204)|
-| Tez: SplitGenerator tries to look for plan files, which won't exist for Tez|[HIVE-22169 ](https://issues.apache.org/jira/browse/HIVE-22169)|
+| Tez: SplitGenerator tries to look for plan files, which won't exist for Tez|[HIVE-22169](https://issues.apache.org/jira/browse/HIVE-22169)|
| Remove expensive logging from the LLAP cache hotpath|[HIVE-22168](https://issues.apache.org/jira/browse/HIVE-22168)| | UDF: FunctionRegistry synchronizes on org.apache.hadoop.hive.ql.udf.UDFType class|[HIVE-22161](https://issues.apache.org/jira/browse/HIVE-22161)| | Prevent the creation of query routing appender if property is set to false|[HIVE-22115](https://issues.apache.org/jira/browse/HIVE-22115)| | Remove cross-query synchronization for the partition-eval|[HIVE-22106](https://issues.apache.org/jira/browse/HIVE-22106)| | Skip setting up hive scratch dir during planning|[HIVE-21182](https://issues.apache.org/jira/browse/HIVE-21182)| | Skip creating scratch dirs for tez if RPC is on|[HIVE-21171](https://issues.apache.org/jira/browse/HIVE-21171)|
-| switch Hive UDFs to use Re2J regex engine|[HIVE-19661 ](https://issues.apache.org/jira/browse/HIVE-19661)|
+| switch Hive UDFs to use Re2J regex engine|[HIVE-19661](https://issues.apache.org/jira/browse/HIVE-19661)|
| Migrated clustered tables using bucketing_version 1 on hive 3 uses bucketing_version 2 for inserts|[HIVE-22429](https://issues.apache.org/jira/browse/HIVE-22429)|
-| Bucketing: Bucketing version 1 is incorrectly partitioning data|[HIVE-21167 ](https://issues.apache.org/jira/browse/HIVE-21167)|
+| Bucketing: Bucketing version 1 is incorrectly partitioning data|[HIVE-21167](https://issues.apache.org/jira/browse/HIVE-21167)|
| Adding ASF License header to the newly added file|[HIVE-22498](https://issues.apache.org/jira/browse/HIVE-22498)| | Schema tool enhancements to support mergeCatalog|[HIVE-22498](https://issues.apache.org/jira/browse/HIVE-22498)| | Hive with TEZ UNION ALL and UDTF results in data loss|[HIVE-21915](https://issues.apache.org/jira/browse/HIVE-21915)|
HDI Hive 3.1 version is upgraded to OSS Hive 3.1.2. This version has all fixes a
| Fix wrong results/ArrayOutOfBound exception in left outer map joins on specific boundary conditions|[HIVE-22120](https://issues.apache.org/jira/browse/HIVE-22120)| | Remove distribution management tag from pom.xml|[HIVE-19667](https://issues.apache.org/jira/browse/HIVE-19667)| | Parsing time can be high if there's deeply nested subqueries|[HIVE-21980](https://issues.apache.org/jira/browse/HIVE-21980)|
-| For ALTER TABLE t SET TBLPROPERTIES ('EXTERNAL'='TRUE'); `TBL_TYPE` attribute changes not reflecting for non-CAPS|[HIVE-20057 ](https://issues.apache.org/jira/browse/HIVE-20057)|
+| For ALTER TABLE t SET TBLPROPERTIES ('EXTERNAL'='TRUE'); `TBL_TYPE` attribute changes not reflecting for non-CAPS|[HIVE-20057](https://issues.apache.org/jira/browse/HIVE-20057)|
| JDBC: HiveConnection shades log4j interfaces|[HIVE-18874](https://issues.apache.org/jira/browse/HIVE-18874)| | Update repo URLs in poms - branch 3.1 version|[HIVE-21786](https://issues.apache.org/jira/browse/HIVE-21786)| | DBInstall tests broken on master and branch-3.1|[HIVE-21758](https://issues.apache.org/jira/browse/HIVE-21758)|
hdinsight Hdinsight Sdk Dotnet Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-sdk-dotnet-samples.md
Title: 'Azure HDInsight: .NET samples'
description: Find C# .NET examples on GitHub for common tasks using the HDInsight SDK for .NET. Previously updated : 12/06/2019 Last updated : 08/30/2022 # Azure HDInsight: .NET samples
hdinsight Network Virtual Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/network-virtual-appliance.md
Title: Configure network virtual appliance in Azure HDInsight
description: Learn how to configure a number of additional features for your network virtual appliance in Azure HDInsight. Previously updated : 06/30/2020 Last updated : 08/30/2022 # Configure network virtual appliance in Azure HDInsight
hdinsight Apache Spark Shell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-shell.md
description: An interactive Spark Shell provides a read-execute-print process fo
Previously updated : 02/10/2020 Last updated : 08/30/2022 # Run Apache Spark from the Spark Shell
hdinsight Use Scp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/use-scp.md
description: This document provides information on connecting to HDInsight using
Previously updated : 04/22/2020 Last updated : 08/30/2022 # Use SCP with Apache Hadoop in Azure HDInsight
healthcare-apis Convert Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/convert-data.md
# Converting your data to FHIR for Azure API for FHIR
-The `$convert-data` custom endpoint in the FHIR service is meant for data conversion from different data types to FHIR. It uses the Liquid template engine and the templates from the [FHIR Converter](https://github.com/microsoft/FHIR-Converter) project as the default templates. You can customize these conversion templates as needed. Currently it supports three types of data conversion: **C-CDA to FHIR**, **HL7v2 to FHIR**, **JSON to FHIR**.
+The `$convert-data` custom endpoint in the FHIR service is meant for data conversion from different data types to FHIR. It uses the Liquid template engine and the templates from the [FHIR Converter](https://github.com/microsoft/FHIR-Converter) project as the default templates. You can customize these conversion templates as needed. Currently it supports four types of data conversion: **C-CDA to FHIR**, **HL7v2 to FHIR**, **JSON to FHIR**, and **FHIR STU3 to FHIR R4 (new!)**.
> [!NOTE] > `$convert-data` endpoint can be used as a component within an ETL pipeline for the conversion of raw healthcare data from legacy formats into FHIR format. However, it is not an ETL pipeline in itself. We recommend you to use an ETL engine such as Logic Apps or Azure Data Factory for a complete workflow in preparing your FHIR data to be persisted into the FHIR server. The workflow might include: data reading and ingestion, data validation, making $convert-data API calls, data pre/post-processing, data enrichment, and data de-duplication.
$convert-data takes a [Parameter](http://hl7.org/fhir/parameters.html) resource
| Parameter Name | Description | Accepted values | | -- | -- | -- |
-| inputData | Data to be converted. | For `Hl7v2`: string <br> For `Ccda`: XML <br> For `Json`: JSON |
-| inputDataType | Data type of input. | ```HL7v2```, ``Ccda``, ``Json`` |
-| templateCollectionReference | Reference to an [OCI image ](https://github.com/opencontainers/image-spec) template collection on [Azure Container Registry (ACR)](https://azure.microsoft.com/services/container-registry/). It's the image containing Liquid templates to use for conversion. It can be a reference either to the default templates or a custom template image that is registered within the FHIR service. See below to learn about customizing the templates, hosting those on ACR, and registering to the FHIR service. | For ***default/sample*** templates: <br> **HL7v2** templates: <br>```microsofthealth/fhirconverter:default``` <br>``microsofthealth/hl7v2templates:default``<br> **C-CDA** templates: <br> ``microsofthealth/ccdatemplates:default`` <br> **JSON** templates: <br> ``microsofthealth/jsontemplates:default`` <br><br> For ***custom*** templates: <br> \<RegistryServer\>/\<imageName\>@\<imageDigest\>, \<RegistryServer\>/\<imageName\>:\<imageTag\> |
-| rootTemplate | The root template to use while transforming the data. | For **HL7v2**:<br> "ADT_A01", "ADT_A02", "ADT_A03", "ADT_A04", "ADT_A05", "ADT_A08", "ADT_A11", "ADT_A13", "ADT_A14", "ADT_A15", "ADT_A16", "ADT_A25", "ADT_A26", "ADT_A27", "ADT_A28", "ADT_A29", "ADT_A31", "ADT_A47", "ADT_A60", "OML_O21", "ORU_R01", "ORM_O01", "VXU_V04", "SIU_S12", "SIU_S13", "SIU_S14", "SIU_S15", "SIU_S16", "SIU_S17", "SIU_S26", "MDM_T01", "MDM_T02"<br><br> For **C-CDA**:<br> "CCD", "ConsultationNote", "DischargeSummary", "HistoryandPhysical", "OperativeNote", "ProcedureNote", "ProgressNote", "ReferralNote", "TransferSummary" <br><br> For **JSON**: <br> "ExamplePatient", "Stu3ChargeItem" <br> |
+| inputData | Data to be converted. | For `Hl7v2`: string <br> For `Ccda`: XML <br> For `Json`: JSON <br> For `FHIR STU3`: JSON|
+| inputDataType | Data type of input. | ```HL7v2```, ``Ccda``, ``Json``, ``Fhir``|
+| templateCollectionReference | Reference to an [OCI image ](https://github.com/opencontainers/image-spec) template collection on [Azure Container Registry (ACR)](https://azure.microsoft.com/services/container-registry/). It's the image containing Liquid templates to use for conversion. It can be a reference either to the default templates or a custom template image that is registered within the FHIR service. See below to learn about customizing the templates, hosting those on ACR, and registering to the FHIR service. | For ***default/sample*** templates: <br> **HL7v2** templates: <br>```microsofthealth/fhirconverter:default``` <br>``microsofthealth/hl7v2templates:default``<br> **C-CDA** templates: <br> ``microsofthealth/ccdatemplates:default`` <br> **JSON** templates: <br> ``microsofthealth/jsontemplates:default`` <br> **FHIR-STU3** templates: <br> ``microsofthealth/stu3tor4templates:default`` <br><br> For ***custom*** templates: <br> \<RegistryServer\>/\<imageName\>@\<imageDigest\>, \<RegistryServer\>/\<imageName\>:\<imageTag\> |
+| rootTemplate | The root template to use while transforming the data. | For **HL7v2**:<br> "ADT_A01", "ADT_A02", "ADT_A03", "ADT_A04", "ADT_A05", "ADT_A08", "ADT_A11", "ADT_A13", "ADT_A14", "ADT_A15", "ADT_A16", "ADT_A25", "ADT_A26", "ADT_A27", "ADT_A28", "ADT_A29", "ADT_A31", "ADT_A47", "ADT_A60", "OML_O21", "ORU_R01", "ORM_O01", "VXU_V04", "SIU_S12", "SIU_S13", "SIU_S14", "SIU_S15", "SIU_S16", "SIU_S17", "SIU_S26", "MDM_T01", "MDM_T02"<br><br> For **C-CDA**:<br> "CCD", "ConsultationNote", "DischargeSummary", "HistoryandPhysical", "OperativeNote", "ProcedureNote", "ProgressNote", "ReferralNote", "TransferSummary" <br><br> For **JSON**: <br> "ExamplePatient", "Stu3ChargeItem" <br><br> For **FHIR STU3 to R4**: <br> Name of the root template, which is the same as the STU3 resource name, for example, "Patient", "Observation", "Organization". |
> [!NOTE] > JSON templates are sample templates for use, not "default" templates that adhere to any pre-defined JSON message types. JSON doesn't have any standardized message types, unlike HL7v2 messages or C-CDA documents. Therefore, instead of default templates we provide you with some sample templates that you can use as a starting guide for your own customized templates.
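A request to the endpoint is a standard FHIR `Parameters` resource built from the table above. As a hedged sketch (the service URL and token handling are placeholders, and the HL7v2 message is truncated for brevity), a call might look like:

```powershell
# Hedged sketch: assumes the Az module and FHIR data-plane access
$fhirUrl = 'https://<your-fhir-service>.azurehealthcareapis.com'
$token = (Get-AzAccessToken -ResourceUrl $fhirUrl).Token

# $convert-data takes a FHIR Parameters resource; message truncated here
$body = @'
{
  "resourceType": "Parameters",
  "parameter": [
    { "name": "inputData", "valueString": "MSH|^~\\&|..." },
    { "name": "inputDataType", "valueString": "Hl7v2" },
    { "name": "templateCollectionReference", "valueString": "microsofthealth/hl7v2templates:default" },
    { "name": "rootTemplate", "valueString": "ADT_A01" }
  ]
}
'@

Invoke-RestMethod -Method Post -Uri "$fhirUrl/`$convert-data" `
  -Headers @{ Authorization = "Bearer $token" } `
  -ContentType 'application/json' -Body $body
```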
healthcare-apis Configure Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-import-data.md
Title: Configure import settings in the FHIR service - Azure Health Data Services description: This article describes how to configure import settings in the FHIR service.-+ Last updated 06/06/2022-+ # Configure bulk-import settings (Preview)
healthcare-apis De Identified Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/de-identified-export.md
Previously updated : 08/15/2022- Last updated : 08/30/2022+ # Exporting de-identified data
healthcare-apis Get Started With Iot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/get-started-with-iot.md
Title: Get started with the MedTech service in Azure Health Data Services
-description: This document describes how to get started with the MedTech service in Azure Health Data Services.
+description: This document describes how to get you started with the MedTech service in Azure Health Data Services.
Previously updated : 08/02/2022 Last updated : 08/30/2022
The following diagram outlines the basic architectural path that enables the Med
### Data processing -- Step 5 represents the data flow from a device to an event hub and is processed through the five parts of the MedTech service.
+- Step 5 represents the data flow from a device to an event hub and the way it's processed through the five parts of the MedTech service.
- Step 6 demonstrates the path to verify processed data sent from MedTech service to the FHIR service.
healthcare-apis Iot Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-connector-overview.md
## Overview
-MedTech service in Azure Health Data Services is a Platform as a service (PaaS) that enables you to gather data from diverse medical devices and change it into a Fast Healthcare Interoperability Resources (FHIR&#174;) service format. MedTech service's device data translation capabilities make it possible to convert a wide variety of data into a unified FHIR format that provides secure health data management in a cloud environment.
+MedTech service in Azure Health Data Services is a Platform as a service (PaaS) that enables you to gather data from diverse medical devices and convert it into a Fast Healthcare Interoperability Resources (FHIR&#174;) service format. MedTech service's device data translation capabilities make it possible to transform a wide variety of data into a unified FHIR format that provides secure health data management in a cloud environment.
MedTech service is important because healthcare data can be difficult to access or lost when it comes from diverse or incompatible devices, systems, or formats. If medical information isn't easy to access, it may have a negative impact on gaining clinical insights and a patient's health and wellness. The ability to translate many types of medical device data into a unified FHIR format enables MedTech service to successfully link devices, health data, labs, and remote in-person care to support the clinician, care team, patient, and family. As a result, this capability can facilitate the discovery of important clinical insights and trend capture. It can also help make connections to new device applications and enable advanced research projects.
iot-edge Development Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/development-environment.md
Only the IoT Edge runtime is supported for production deployments, but the follo
| IoT EdgeHub dev tool | iotedgehubdev | Windows, Linux, macOS | Simulating a device to debug modules. | | IoT Edge dev container | iotedgedev | Windows, Linux, macOS | Developing without installing dependencies. | | IoT Edge runtime in a container | iotedgec | Windows, Linux, macOS, ARM | Testing on a device that may not support the runtime. |
-| IoT Edge device container | toolboc/azure-iot-edge-device-container | Windows, Linux, macOS, ARM | Testing a scenario with many IoT Edge devices at scale. |
### IoT EdgeHub dev tool
iot-edge How To Manage Device Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-manage-device-certificates.md
All IoT Edge devices use certificates to create secure connections between the r
## Install production certificates
-When you first install IoT Edge and provision your device, the device is set up with temporary certificates so that you can test the service.
-These temporary certificates expire in 90 days, or can be reset by restarting your machine.
+When you first install IoT Edge and provision your device, the device is set up with temporary certificates (known as quickstart CA) so that you can test the service.
+These temporary certificates expire in 90 days.
Once you move into a production scenario, or you want to create a gateway device, you need to provide your own certificates. This article demonstrates the steps to install certificates on your IoT Edge devices.
If you are using IoT Edge for Linux on Windows, you need to use the SSH key loca
sudo iotedge config apply ```
+## Automatic certificate renewal
+
+IoT Edge has built-in ability to renew certificates before expiry.
+
+Certificate renewal requires an issuance method that IoT Edge can manage. Generally, this means an EST server is required, but IoT Edge can also automatically renew the quickstart CA without configuration. Certificate renewal is configured per type of certificate. To configure it, go to the relevant certificate configuration section in `config.toml` and add:
+
+```toml
+# To use auto renew with other types of certs, swap `edge_ca` with other certificate types
+# And put into the relevant section
+[edge_ca]
+method = "est"
+#...
+[edge_ca.auto_renew]
+rotate_key = true
+threshold = "80%"
+retry = "4%"
+```
+
+Here:
+- `rotate_key` controls whether the private key should be rotated.
+- `threshold` sets when IoT Edge should start renewing the certificate. It can be specified as:
+ - *Percentage* - integer between `0` and `100` followed by `%`. Renewal starts relative to the certificate lifetime. For example, when set to `80%`, a certificate that is valid for 100 days begins renewal at 20 days before its expiry.
+ - *Absolute time* - integer followed by `m` (minutes) or `d` (days). Renewal starts relative to the certificate expiration time. For example, when set to `4d` for 4 days or `10m` for 10 minutes, the certificate begins renewing at that time before expiry. To avoid unintentional misconfiguration where the `threshold` is bigger than the certificate lifetime, we recommend using *percentage* instead whenever possible.
+- `retry` controls how often renewal should be retried on failure. Like `threshold`, it can be specified as a *percentage* or *absolute time* using the same format; for example, `retry = "4%"` on a certificate valid for 100 days retries roughly every 4 days.
+ :::moniker-end <!-- end iotedge-2020-11 -->
-## Customize certificate lifetime
+## Customize quickstart CA lifetime
IoT Edge automatically generates certificates on the device in several cases, including:
-<!-- 1.2 -->
-If you don't provide your own production certificates when you install and provision IoT Edge, the IoT Edge security manager automatically generates an **edge CA certificate**. This self-signed certificate is only meant for development and testing scenarios, not production. This certificate expires after 90 days.
-<!-- end 1.2 -->
- <!-- 1.1. --> :::moniker range="iotedge-2018-06"
-* If you don't provide your own production certificates when you install and provision IoT Edge, the IoT Edge security manager automatically generates a **device CA certificate**. This self-signed certificate is only meant for development and testing scenarios, not production. This certificate expires after 90 days.
+* If you don't provide your own production certificates when you install and provision IoT Edge, the IoT Edge security manager automatically generates a **device CA certificate**. This self-signed certificate is known as the quickstart CA and only meant for development and testing scenarios, not production. This certificate expires after 90 days.
* The IoT Edge security manager also generates a **workload CA certificate** signed by the device CA certificate :::moniker-end <!-- end 1.1 -->
+<!-- 1.2 -->
+If you don't provide your own production certificates when you install and provision IoT Edge, the IoT Edge security manager automatically generates an **edge CA certificate**. This self-signed certificate is known as the quickstart CA and only meant for development and testing scenarios, not production. This certificate expires after 90 days.
+<!-- end 1.2 -->
+ For more information about the function of the different certificates on an IoT Edge device, see [Understand how Azure IoT Edge uses certificates](iot-edge-certs.md).
-For these two automatically generated certificates, you have the option of setting a flag in the config file to configure the number of days for the lifetime of the certificates.
+You have the option of setting a flag in the config file to configure the number of days for the lifetime of the certificates.
>[!NOTE]
>There is a third auto-generated certificate that the IoT Edge security manager creates, the **IoT Edge hub server certificate**. This certificate always has a 30-day lifetime, but is automatically renewed before expiring. The auto-generated CA lifetime value set in the config file doesn't affect this certificate.
-<!-- 1.2 -->
-Upon expiry after the specified number of days, IoT Edge has to be restarted to regenerate the edge CA certificate. The edge CA certificate won't be renewed automatically.
-<!-- end 1.2 -->
- <!-- 1.1. --> :::moniker range="iotedge-2018-06" Upon expiry after the specified number of days, IoT Edge has to be restarted to regenerate the device CA certificate. The device CA certificate won't be renewed automatically.
Upon expiry after the specified number of days, IoT Edge has to be restarted to
:::moniker-end <!-- end iotedge-2020-11 -->
+<!-- 1.2 -->
+
+### Renew quickstart Edge CA
+
+By default, IoT Edge automatically regenerates the Edge CA certificate at 80% of the certificate lifetime. For example, for a certificate with a 90-day lifetime, IoT Edge automatically starts regenerating the Edge CA certificate 72 days from issuance.
+
+To configure the auto-renewal logic, add the following to the "Edge CA certificate" section in `config.toml`:
+
+```toml
+[edge_ca.auto_renew]
+rotate_key = true
+threshold = "70%"
+retry = "2%"
+```
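With the values shown, a quickstart CA with the default 90-day lifetime starts renewing 63 days from issuance (70% of its lifetime), and a failed renewal is retried roughly every 1.8 days (2% of the 90-day lifetime).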
+<!-- end 1.2 -->
+ ## Next steps
Installing certificates on an IoT Edge device is a necessary step before deploying your solution in production. Learn more about how to [Prepare to deploy your IoT Edge solution in production](production-checklist.md).
iot-edge Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/version-history.md
This table provides recent version history for IoT Edge package releases, and hi
| Release notes and assets | Type | Date | Highlights |
| - | - | - | - |
-| [1.4](https://github.com/Azure/azure-iotedge/releases/tag/1.4.0) | Stable | August 2022 | Automatic image clean-up of unused Docker images <br> Ability to pass a [custom JSON payload to DPS on provisioning](../iot-dps/how-to-send-additional-data.md#iot-edge-support) <br> Ability to require all modules in a deployment be downloaded before restart <br> Use of the TCG TPM2 Software Stack which enables TPM hierarchy authorization values, specifying the TPM index at which to persist the DPS authentication key, and accommodating more [TPM configurations](http://github.com/Azure/iotedge/blob/897aed8c5573e8cad4b602e5a1298bdc64cd28b4/edgelet/contrib/config/linux/template.toml#L262-L288)
+| [1.4](https://github.com/Azure/azure-iotedge/releases/tag/1.4.0) | Long-term support (LTS) | August 2022 | Automatic image clean-up of unused Docker images <br> Ability to pass a [custom JSON payload to DPS on provisioning](../iot-dps/how-to-send-additional-data.md#iot-edge-support) <br> Ability to require that all modules in a deployment are downloaded before restart <br> Use of the TCG TPM2 Software Stack, which enables TPM hierarchy authorization values, lets you specify the TPM index at which to persist the DPS authentication key, and accommodates more [TPM configurations](https://github.com/Azure/iotedge/blob/897aed8c5573e8cad4b602e5a1298bdc64cd28b4/edgelet/contrib/config/linux/template.toml#L262-L288) |
| [1.3](https://github.com/Azure/azure-iotedge/releases/tag/1.3.0) | Stable | June 2022 | Support for Red Hat Enterprise Linux 8 on AMD and Intel 64-bit architectures.<br>Edge Hub now enforces that inbound/outbound communication uses minimum TLS version 1.2 by default<br>Updated runtime modules (edgeAgent, edgeHub) based on .NET 6 |
| [1.2](https://github.com/Azure/azure-iotedge/releases/tag/1.2.0) | Stable | April 2021 | [IoT Edge devices behind gateways](how-to-connect-downstream-iot-edge-device.md?view=iotedge-2020-11&preserve-view=true)<br>[IoT Edge MQTT broker (preview)](how-to-publish-subscribe.md?view=iotedge-2020-11&preserve-view=true)<br>New IoT Edge packages introduced, with new installation and configuration steps. For more information, see [Update from 1.0 or 1.1 to latest release](how-to-update-iot-edge.md#special-case-update-from-10-or-11-to-latest-release).<br>Includes [Microsoft Defender for IoT micro-agent for Edge](../defender-for-iot/device-builders/overview.md).<br> Integration with Device Update. For more information, see [Update IoT Edge](how-to-update-iot-edge.md). |
| [1.1](https://github.com/Azure/azure-iotedge/releases/tag/1.1.0) | Long-term support (LTS) | February 2021 | [Long-term support plan and supported systems updates](support.md) |
iot-hub-device-update Device Update Configure Repo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-configure-repo.md
Such as:
- You need to deliver over-the-air updates to your devices from a private package repository with approved versions of libraries and components
- You need devices to get packages from a specific vendor's repository
-Following this document, learn how to configure a package repository using [OSConfig for IoT](https://docs.microsoft.com/azure/osconfig/overview-osconfig-for-iot) and deploy packages based updates from that repository to your device fleet using [Device Update for IoT Hub](understand-device-update.md). Package-based updates are targeted updates that alter only a specific component or application on the device. They lead to lower consumption of bandwidth and help reduce the time to download and install the update. Package-based updates also typically allow for less downtime of devices when you apply an update and avoid the overhead of creating images.
+In this document, learn how to configure a package repository using [OSConfig for IoT](/azure/osconfig/overview-osconfig-for-iot) and deploy package-based updates from that repository to your device fleet using [Device Update for IoT Hub](understand-device-update.md). Package-based updates are targeted updates that alter only a specific component or application on the device. They lead to lower consumption of bandwidth and help reduce the time to download and install the update. Package-based updates also typically allow for less downtime of devices when you apply an update and avoid the overhead of creating images.
## Prerequisites
You need an Azure account with an [IoT Hub](../iot-hub/iot-concepts-and-iot-hub.md) and Microsoft Azure Portal or Azure CLI to interact with devices via your IoT Hub. Follow the next steps to get started:
- Create a Device Update account and instance in your IoT Hub. See [how to create it](create-device-update-account.md).
-- Install the [IoT Hub Identity Service](https://azure.github.io/iot-identity-service/installation.html) (or skip if [IoT Edge 1.2](https://docs.microsoft.com/azure/iot-edge/how-to-provision-single-device-linux-symmetric?view=iotedge-2020-11&preserve-view=true&tabs=azure-portal%2Cubuntu#install-iot-edge) or higher is already installed on the device).
+- Install the [IoT Hub Identity Service](https://azure.github.io/iot-identity-service/installation.html) (or skip if [IoT Edge 1.2](/azure/iot-edge/how-to-provision-single-device-linux-symmetric?view=iotedge-2020-11&preserve-view=true&tabs=azure-portal%2Cubuntu#install-iot-edge) or higher is already installed on the device).
- Install the Device Update agent on the device. See [how to](device-update-ubuntu-agent.md#manually-prepare-a-device).
-- Install the OSConfig agent on the device. See [how to](https://docs.microsoft.com/azure/osconfig/howto-install?tabs=package#step-11-connect-a-device-to-packagesmicrosoftcom).
-- Now that both the agent and IoT Hub Identity Service are present on the device, the next step is to configure the device with an identity so it can connect to Azure. See example [here](https://docs.microsoft.com/azure/osconfig/howto-install?tabs=package#job-2--connect-to-azure)
+- Install the OSConfig agent on the device. See [how to](/azure/osconfig/howto-install?tabs=package#step-11-connect-a-device-to-packagesmicrosoftcom).
+- Now that both the agent and IoT Hub Identity Service are present on the device, the next step is to configure the device with an identity so it can connect to Azure. See example [here](/azure/osconfig/howto-install?tabs=package#job-2--connect-to-azure)
## How to configure package repository for package updates
Follow the steps below to update Azure IoT Edge on Ubuntu Server 18.04 x64 by configuring a source repository. The tools and concepts in this tutorial still apply even if you plan to use a different OS platform configuration.
-1. Configure the package repository of your choice with the OSConfig's configure package repo module. See [how to](https://docs.microsoft.com/azure/osconfig/howto-pmc?tabs=portal%2Csingle#example-1--specify-desired-package-sources). This repository should be the location where you wish to store packages to be downloaded to the device.
+1. Configure the package repository of your choice with OSConfig's configure package repo module. See [how to](/azure/osconfig/howto-pmc?tabs=portal%2Csingle#example-1--specify-desired-package-sources). This repository should be the location where you wish to store packages to be downloaded to the device.
2. Upload your packages to the repository configured above.
3. Create an [APT manifest](device-update-apt-manifest.md) to provide the Device Update agent with the information it needs to download and install the packages (and their dependencies) from the repository.
4. Follow steps from [here](device-update-ubuntu-agent.md#prerequisites) to do a package update with Device Update. Device Update is used to deploy package updates to a large number of devices at scale.
iot-hub Iot Hub Bulk Identity Mgmt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-bulk-identity-mgmt.md
The **ImportDevicesAsync** method takes two parameters:
SharedAccessBlobPermissions.Read
```
-* A *string* that contains a URI of an [Azure Storage](https://azure.microsoft.com/documentation/services/storage/) blob container to use as *output* from the job. The job creates a block blob in this container to store any error information from the completed import **Job**. The SAS token must include these permissions:
+* A *string* that contains a URI of an [Azure Storage](/azure/storage/) blob container to use as *output* from the job. The job creates a block blob in this container to store any error information from the completed import **Job**. The SAS token must include these permissions:
```csharp
SharedAccessBlobPermissions.Write | SharedAccessBlobPermissions.Read
```
To further explore the capabilities of IoT Hub, see:
To explore using the IoT Hub Device Provisioning Service to enable zero-touch, just-in-time provisioning, see:
-* [Azure IoT Hub Device Provisioning Service](../iot-dps/index.yml)
+* [Azure IoT Hub Device Provisioning Service](../iot-dps/index.yml)
iot-hub Iot Hub Devguide Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-endpoints.md
The following list describes the endpoints:
* **Service endpoints**. Each IoT hub exposes a set of endpoints for your solution back end to communicate with your devices. With one exception, these endpoints are only exposed using the [AMQP](https://www.amqp.org/) and AMQP over WebSockets protocols. The direct method invocation endpoint is exposed over the HTTPS protocol.
- * *Receive device-to-cloud messages*. This endpoint is compatible with [Azure Event Hubs](https://azure.microsoft.com/documentation/services/event-hubs/). A back-end service can use it to read the [device-to-cloud messages](iot-hub-devguide-messages-d2c.md) sent by your devices. You can create custom endpoints on your IoT hub in addition to this built-in endpoint.
+ * *Receive device-to-cloud messages*. This endpoint is compatible with [Azure Event Hubs](/azure/event-hubs/). A back-end service can use it to read the [device-to-cloud messages](iot-hub-devguide-messages-d2c.md) sent by your devices. You can create custom endpoints on your IoT hub in addition to this built-in endpoint.
* *Send cloud-to-device messages and receive delivery acknowledgments*. These endpoints enable your solution back end to send reliable [cloud-to-device messages](iot-hub-devguide-messages-c2d.md), and to receive the corresponding delivery or expiration acknowledgments.
iot-hub Iot Hub Devguide Identity Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-identity-registry.md
Device identities can also be exported and imported from an IoT Hub via the Serv
The device data that a given IoT solution stores depends on the specific requirements of that solution. But, as a minimum, a solution must store device identities and authentication keys. Azure IoT Hub includes an identity registry that can store values for each device such as IDs, authentication keys, and status codes. A solution can use other Azure services such as table storage, blob storage, or Cosmos DB to store any additional device data.
-*Device provisioning* is the process of adding the initial device data to the stores in your solution. To enable a new device to connect to your hub, you must add a device ID and keys to the IoT Hub identity registry. As part of the provisioning process, you might need to initialize device-specific data in other solution stores. You can also use the Azure IoT Hub Device Provisioning Service to enable zero-touch, just-in-time provisioning to one or more IoT hubs without requiring human intervention. To learn more, see the [provisioning service documentation](https://azure.microsoft.com/documentation/services/iot-dps).
+*Device provisioning* is the process of adding the initial device data to the stores in your solution. To enable a new device to connect to your hub, you must add a device ID and keys to the IoT Hub identity registry. As part of the provisioning process, you might need to initialize device-specific data in other solution stores. You can also use the Azure IoT Hub Device Provisioning Service to enable zero-touch, just-in-time provisioning to one or more IoT hubs without requiring human intervention. To learn more, see the [provisioning service documentation](/azure/iot-dps).
## Device heartbeat
To try out some of the concepts described in this article, see the following IoT
To explore using the IoT Hub Device Provisioning Service to enable zero-touch, just-in-time provisioning, see:
-* [Azure IoT Hub Device Provisioning Service](https://azure.microsoft.com/documentation/services/iot-dps)
+* [Azure IoT Hub Device Provisioning Service](/azure/iot-dps)
iot-hub Iot Hub Devguide Messages Read Builtin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-read-builtin.md
# Read device-to-cloud messages from the built-in endpoint
-By default, messages are routed to the built-in service-facing endpoint (**messages/events**) that is compatible with [Event Hubs](https://azure.microsoft.com/documentation/services/event-hubs/). This endpoint is currently only exposed using the [AMQP](https://www.amqp.org/) protocol on port 5671 and [AMQP over WebSockets](http://docs.oasis-open.org/amqp-bindmap/amqp-wsb/v1.0/cs01/amqp-wsb-v1.0-cs01.html) on port 443. An IoT hub exposes the following properties to enable you to control the built-in Event Hub-compatible messaging endpoint **messages/events**.
+By default, messages are routed to the built-in service-facing endpoint (**messages/events**) that is compatible with [Event Hubs](/azure/event-hubs/). This endpoint is currently only exposed using the [AMQP](https://www.amqp.org/) protocol on port 5671 and [AMQP over WebSockets](http://docs.oasis-open.org/amqp-bindmap/amqp-wsb/v1.0/cs01/amqp-wsb-v1.0-cs01.html) on port 443. An IoT hub exposes the following properties to enable you to control the built-in Event Hub-compatible messaging endpoint **messages/events**.
| Property | Description |
| - | -- |
iot-hub Iot Hub Mqtt 5 Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-mqtt-5-reference.md
description: Learn about IoT Hub's MQTT 5 API reference
-
+
Last updated 11/19/2020
iot-hub Iot Hub Mqtt 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-mqtt-5.md
description: Learn about IoT Hub's MQTT 5 support
-
+
Last updated 11/19/2020
iot-hub Iot Hub Preview Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-preview-mode.md
description: Learn how to turn on preview mode for IoT Hub, why you would want to, and some warnings
-
+
Last updated 11/24/2020
iot-hub Iot Hub Tls Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-tls-support.md
description: Learn about using secure TLS connections for devices and services communicating with IoT Hub
-
+
Last updated 06/29/2021
iot-hub Virtual Network Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/virtual-network-support.md
description: How to use virtual networks connectivity pattern with IoT Hub
-
+
Last updated 10/20/2021
load-balancer Load Balancer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-overview.md
Key scenarios that you can accomplish using Azure Standard Load Balancer include
- Increase availability by distributing resources **[within](./tutorial-load-balancer-standard-public-zonal-portal.md)** and **[across](./quickstart-load-balancer-standard-public-portal.md)** zones.
-- Configure **[outbound connectivity ](./load-balancer-outbound-connections.md)** for Azure virtual machines.
+- Configure **[outbound connectivity](./load-balancer-outbound-connections.md)** for Azure virtual machines.
- Use **[health probes](./load-balancer-custom-probe-overview.md)** to monitor load-balanced resources.
machine-learning Concept Responsible Ai Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-responsible-ai-dashboard.md
Title: Assess AI systems and make data-driven decisions with Azure Machine Learning Responsible AI dashboard
-description: The Responsible AI dashboard is a comprehensive UI and set of SDK/YAML components to help data scientists debug their machine learning models and make data-driven decisions.
+description: Learn how to use the comprehensive UI and SDK/YAML components in the Responsible AI dashboard to debug your machine learning models and make data-driven decisions.
Last updated 08/17/2022
-# Assess AI systems and make data-driven decisions with Azure Machine Learning Responsible AI dashboard (preview)
+# Assess AI systems by using the Responsible AI dashboard (preview)
-Implementing Responsible AI in practice requires rigorous engineering. Rigorous engineering, however, can be tedious, manual, and time-consuming without the right tooling and infrastructure. Machine learning professionals need tools to implement responsible AI in practice effectively and efficiently.
+Implementing Responsible AI in practice requires rigorous engineering. But rigorous engineering can be tedious, manual, and time-consuming without the right tooling and infrastructure.
-The Responsible AI dashboard provides a single pane of glass that brings together several mature Responsible AI tools in the areas of model [performance and fairness assessment](http://fairlearn.org/), data exploration, [machine learning interpretability](https://interpret.ml/), [error analysis](https://erroranalysis.ai/), [counterfactual analysis and perturbations](https://github.com/interpretml/DiCE), and [causal inference](https://github.com/microsoft/EconML) for a holistic assessment and debugging of models and making informed data-driven decisions. Having access to all of these tools in one interface empowers you to:
+The Responsible AI dashboard provides a single interface to help you implement Responsible AI in practice effectively and efficiently. It brings together several mature Responsible AI tools in the areas of:
-1. Evaluate and debug your machine learning models by identifying model errors and fairness issues, diagnosing why those errors are happening, and informing your mitigation steps.
+2. Boost your data-driven decision-making abilities by addressing questions such as *"what is the minimum change the end user could apply to their features to get a different outcome from the model?" and/or "what is the causal effect of reducing or increasing a feature (for example, red meat consumption) on a real-world outcome (for example, diabetes progression)?"*
+- [Model performance and fairness assessment](http://fairlearn.org/)
+- Data exploration
+- [Machine learning interpretability](https://interpret.ml/)
+- [Error analysis](https://erroranalysis.ai/)
+- [Counterfactual analysis and perturbations](https://github.com/interpretml/DiCE)
+- [Causal inference](https://github.com/microsoft/EconML)
-The dashboard could be customized to include the only subset of tools that are relevant to your use case.
+The dashboard offers a holistic assessment and debugging of models so you can make informed data-driven decisions. Having access to all of these tools in one interface empowers you to:
-Responsible AI dashboard is also accompanied by a [PDF scorecard](how-to-responsible-ai-scorecard.md), which enables you to export Responsible AI metadata and insights of your data and models for sharing offline with the product and compliance stakeholders.
+- Evaluate and debug your machine learning models by identifying model errors and fairness issues, diagnosing why those errors are happening, and informing your mitigation steps.
+- Boost your data-driven decision-making abilities by addressing questions such as:
+
+ "What is the minimum change that users can apply to their features to get a different outcome from the model?"
+
+ "What is the causal effect of reducing or increasing a feature (for example, red meat consumption) on a real-world outcome (for example, diabetes progression)?"
+
+You can customize the dashboard to include only the subset of tools that are relevant to your use case.
+
+The Responsible AI dashboard is accompanied by a [PDF scorecard](how-to-responsible-ai-scorecard.md). The scorecard enables you to export Responsible AI metadata and insights into your data and models. You can then share them offline with the product and compliance stakeholders.
## Responsible AI dashboard components
-The Responsible AI dashboard brings together, in a comprehensive view, various new and pre-existing tools, integrating them with the Azure Machine Learning [CLIv2, Python SDKv2](concept-v2.md) and [studio](overview-what-is-machine-learning-studio.md). These tools include:
+The Responsible AI dashboard brings together, in a comprehensive view, various new and pre-existing tools. The dashboard integrates these tools with [Azure Machine Learning CLI v2, Azure Machine Learning Python SDK v2](concept-v2.md), and [Azure Machine Learning studio](overview-what-is-machine-learning-studio.md). The tools include:
-1. [Data explorer](concept-data-analysis.md) to understand and explore your dataset distributions and statistics.
-2. [Model overview and fairness assessment](concept-fairness-ml.md) to evaluate the performance of your model and evaluate your model's group fairness issues (how diverse groups of people are impacted by your model's predictions).
-3. [Error Analysis](concept-error-analysis.md) to view and understand how errors are distributed in your dataset.
-4. [Model interpretability](how-to-machine-learning-interpretability.md) (aggregate/individual feature importance values) to understand your model's predictions and how those overall and individual predictions are made.
-5. [Counterfactual What-If](concept-counterfactual-analysis.md) to observe how feature perturbations would impact your model predictions while providing you with the closest data points with opposing or different model predictions.
-6. [Causal analysis](concept-causal-inference.md) to use historical data to view the causal effects of treatment features on real-world outcomes.
+- [Data explorer](concept-data-analysis.md), to understand and explore your dataset distributions and statistics.
+- [Model overview and fairness assessment](concept-fairness-ml.md), to evaluate the performance of your model and evaluate your model's group fairness issues (how your model's predictions affect diverse groups of people).
+- [Error analysis](concept-error-analysis.md), to view and understand how errors are distributed in your dataset.
+- [Model interpretability](how-to-machine-learning-interpretability.md) (importance values for aggregate and individual features), to understand your model's predictions and how those overall and individual predictions are made.
+- [Counterfactual what-if](concept-counterfactual-analysis.md), to observe how feature perturbations would affect your model predictions while providing the closest data points with opposing or different model predictions.
+- [Causal analysis](concept-causal-inference.md), to use historical data to view the causal effects of treatment features on real-world outcomes.
-Together, these components will enable you to debug machine learning models, while informing your data-driven and model-driven business decisions. The following diagram and two sections explain how these tools could be incorporated into your AI lifecycle to achieve improved models and solid data insights.
+Together, these tools will help you debug machine learning models, while informing your data-driven and model-driven business decisions. The following diagram shows how you can incorporate them into your AI lifecycle to improve your models and get solid data insights.
### Model debugging
Assessing and debugging machine learning models is critical for model reliability, interpretability, fairness, and compliance. It helps determine how and why AI systems behave the way they do. You can then use this knowledge to improve model performance. Conceptually, model debugging consists of three stages:
-- **Identify**, to understand and recognize model errors and/or fairness issues by addressing the following questions:
- - *What kinds of errors does my model have?*
- - *In what areas are errors most prevalent?*
-- **Diagnose**, to explore the reasons behind the identified errors by addressing:
- - *What are the causes of these errors?*
- - *Where should I focus my resources to improve my model?*
-- **Mitigate**, to use the identification and diagnosis insights from previous stages to take targeted mitigation steps and address questions such as:
- - *How can I improve my model?*
- - *What social or technical solutions exist for these issues?*
+1. **Identify**, to understand and recognize model errors and/or fairness issues by addressing the following questions:
+
+ "What kinds of errors does my model have?"
+
+ "In what areas are errors most prevalent?"
+1. **Diagnose**, to explore the reasons behind the identified errors by addressing:
+
+ "What are the causes of these errors?"
+
+ "Where should I focus my resources to improve my model?"
+1. **Mitigate**, to use the identification and diagnosis insights from previous stages to take targeted mitigation steps and address questions such as:
+ "How can I improve my model?"
-Below are the components of the Responsible AI dashboard supporting model debugging:
+ "What social or technical solutions exist for these issues?"
++
+The following table describes when to use Responsible AI dashboard components to support model debugging:
| Stage | Component | Description |
|-|--|-|
-| Identify | Error Analysis | The Error Analysis component provides machine learning practitioners with a deeper understanding of model failure distribution and assists you with quickly identifying erroneous cohorts of data. <br><br> The capabilities of this component in the dashboard are founded by the [Error Analysis](https://erroranalysis.ai/) package.|
-| Identify | Fairness Analysis | The Fairness component assesses how different groups, defined in terms of sensitive attributes such as sex, race, age, etc., are affected by your model predictions and how the observed disparities may be mitigated. It evaluates the performance of your model by exploring the distribution of your prediction values and the values of your model performance metrics across different sensitive subgroups. The capabilities of this component in the dashboard are founded by the [Fairlearn](https://fairlearn.org/) package. |
-| Identify | Model Overview | The Model Overview component aggregates various model assessment metrics, showing a high-level view of model prediction distribution for better investigation of its performance. It also enables group fairness assessment, highlighting the breakdown of model performance across different sensitive groups. |
-| Diagnose | Data Explorer | The Data Explorer component helps to visualize datasets based on predicted and actual outcomes, error groups, and specific features. This helps to identify issues of over- and underrepresentation and to see how data is clustered in the dataset. |
-| Diagnose | Model Interpretability | The Interpretability component generates human-understandable explanations of the predictions of a machine learning model. It provides multiple views into a model's behavior: global explanations (for example, which features affect the overall behavior of a loan allocation model) and local explanations (for example, why an applicant's loan application was approved or rejected). <br><br> The capabilities of this component in the dashboard are founded by the [InterpretML](https://interpret.ml/) package. |
-| Diagnose | Counterfactual Analysis and What-If| The Counterfactual Analysis and what-if component consist of two functionalities for better error diagnosis: <br> - Generating a set of examples with minimal changes to a given point such that those changes alter the model's prediction (showing the closest data points with opposite model predictions). <br> - Enabling interactive and custom what-if perturbations for individual data points to understand how the model reacts to feature changes. <br> <br> The capabilities of this component in the dashboard are founded by the [DiCE](https://github.com/interpretml/DiCE) package. |
+| Identify | Error analysis | The error analysis component helps you get a deeper understanding of model failure distribution and quickly identify erroneous cohorts (subgroups) of data. <br><br> The capabilities of this component in the dashboard come from the [Error Analysis](https://erroranalysis.ai/) package.|
+| Identify | Fairness analysis | The fairness component defines groups in terms of sensitive attributes such as sex, race, and age. It then assesses how your model predictions affect these groups and how you can mitigate disparities. It evaluates the performance of your model by exploring the distribution of your prediction values and the values of your model performance metrics across the groups. <br><br>The capabilities of this component in the dashboard come from the [Fairlearn](https://fairlearn.org/) package. |
+| Identify | Model overview | The model overview component aggregates model assessment metrics in a high-level view of model prediction distribution for better investigation of its performance. This component also enables group fairness assessment by highlighting the breakdown of model performance across sensitive groups. |
+| Diagnose | Data explorer | The data explorer visualizes datasets based on predicted and actual outcomes, error groups, and specific features. You can then identify issues of overrepresentation and underrepresentation, along with seeing how data is clustered in the dataset. |
+| Diagnose | Model interpretability | The interpretability component generates human-understandable explanations of the predictions of a machine learning model. It provides multiple views into a model's behavior: <br> - Global explanations (for example, which features affect the overall behavior of a loan allocation model) <br> - Local explanations (for example, why an applicant's loan application was approved or rejected) <br><br> The capabilities of this component in the dashboard come from the [InterpretML](https://interpret.ml/) package. |
+| Diagnose | Counterfactual analysis and what-if| This component consists of two functionalities for better error diagnosis: <br> - Generating a set of examples in which minimal changes to a particular point alter the model's prediction. That is, the examples show the closest data points with opposite model predictions. <br> - Enabling interactive and custom what-if perturbations for individual data points to understand how the model reacts to feature changes. <br> <br> The capabilities of this component in the dashboard come from the [DiCE](https://github.com/interpretml/DiCE) package. |
-Mitigation steps are available via standalone tools such as [Fairlearn](https://fairlearn.org/) (see [unfairness mitigation algorithms](https://fairlearn.org/v0.7.0/user_guide/mitigation.html)).
+Mitigation steps are available via standalone tools such as [Fairlearn](https://fairlearn.org/). For more information, see the [unfairness mitigation algorithms](https://fairlearn.org/v0.7.0/user_guide/mitigation.html).
### Responsible decision-making
-Decision-making is one of the biggest promises of machine learning. The Responsible AI dashboard helps you inform your model-driven and data-driven business decisions.
+Decision-making is one of the biggest promises of machine learning. The Responsible AI dashboard can help you make informed business decisions through:
+
+- Data-driven insights, to further understand causal treatment effects on an outcome by using historical data only. For example:
-- Data-driven insights to further understand causal treatment effects on an outcome, using historic data only. For example, *"how would a medicine impact a patient's blood pressure?"* or *"how would providing promotional values to certain customers impact revenue?"*. Such insights are provided through the [Causal inference](concept-causal-inference.md) component of the dashboard.
-- Model-driven insights, to answer end-users questions such as *"what can I do to get a different outcome from your AI next time?"* to inform their actions. Such insights are provided to data scientists through the [Counterfactual What-If](concept-counterfactual-analysis.md) component described above.
+ "How would a medicine affect a patient's blood pressure?"
+
+ "How would providing promotional values to certain customers affect revenue?"
+
+ These insights are provided through the [causal inference](concept-causal-inference.md) component of the dashboard.
+- Model-driven insights, to answer users' questions (such as "What can I do to get a different outcome from your AI next time?") so they can take action. These insights are provided to data scientists through the [counterfactual what-if](concept-counterfactual-analysis.md) component.
-Exploratory data analysis, counterfactual analysis, and causal inference capabilities can assist you to make informed model-driven and data-driven decisions responsibly.
+Exploratory data analysis, causal inference, and counterfactual analysis capabilities can help you make informed model-driven and data-driven decisions responsibly.
-Below are the components of the Responsible AI dashboard supporting responsible decision-making:
+These components of the Responsible AI dashboard support responsible decision-making:
-- **Data Explorer**
- - The component could be reused here to understand data distributions and identify over- and underrepresentation. Data exploration is a critical part of decision making as one can conclude that it isn't feasible to make informed decisions about a cohort that is underrepresented within data.
-- **Causal Inference**
- - The Causal Inference component estimates how a real-world outcome changes in the presence of an intervention. It also helps to construct promising interventions by simulating different feature responses to various interventions and creating rules to determine which population cohorts would benefit from a particular intervention. Collectively, these functionalities allow you to apply new policies and effect real-world change.
- - The capabilities of this component are founded by the [EconML](https://github.com/Microsoft/EconML) package, which estimates heterogeneous treatment effects from observational data via machine learning.
-- **Counterfactual Analysis**
- - The Counterfactual Analysis component described above could be reused here to help data scientists generate minimum changes applied to a data point's features leading to opposite model predictions (Taylor would have gotten the loan approval from the AI if they earned 10,000 more annual income and had two fewer credit cards open). Providing such information to the end users informs their perspective, educating them on how they can take action to get the desired outcome from the AI in the future.
- - The capabilities of this component are founded by the [DiCE](https://github.com/interpretml/DiCE) package.
+- **Data explorer**: You can reuse the data explorer component here to understand data distributions and to identify overrepresentation and underrepresentation. Data exploration is a critical part of decision making, because it isn't feasible to make informed decisions about a cohort that's underrepresented in the data.
+- **Causal inference**: The causal inference component estimates how a real-world outcome changes in the presence of an intervention. It also helps construct promising interventions by simulating feature responses to various interventions and creating rules to determine which population cohorts would benefit from a particular intervention. Collectively, these functionalities allow you to apply new policies and effect real-world change.
+
+ The capabilities of this component come from the [EconML](https://github.com/Microsoft/EconML) package, which estimates heterogeneous treatment effects from observational data via machine learning.
+- **Counterfactual analysis**: You can reuse the counterfactual analysis component here to generate minimum changes applied to a data point's features that lead to opposite model predictions. For example: Taylor would have obtained the loan approval from the AI if they earned $10,000 more in annual income and had two fewer credit cards open.
-## Why should you use the Responsible AI dashboard?
+ Providing this information to users informs their perspective. It educates them on how they can take action to get the desired outcome from the AI in the future.
+
+ The capabilities of this component come from the [DiCE](https://github.com/interpretml/DiCE) package.
-### Challenges with the status quo
+## Reasons for using the Responsible AI dashboard
-While progress has been made on individual tools for specific areas of Responsible AI, data scientists often need to use various tools (for example, performance assessment and model interpretability and fairness assessment) together, to holistically evaluate their models and data. For example, if a data scientist discovers a fairness issue with one tool, they then need to jump to a different tool to understand what data or model factors lie at the root of the issue before taking any steps on mitigation. This highly challenging process is further complicated for the following reasons.
+Although progress has been made on individual tools for specific areas of Responsible AI, data scientists often need to use various tools to holistically evaluate their models and data. For example: they might have to use model interpretability and fairness assessment together.
-- First, there's no central location to discover and learn about the tools, extending the time it takes to research and learn new techniques.
-- Second, the different tools don't exactly communicate with each other. Data scientists must wrangle the datasets, models, and other metadata as they pass them between the different tools.
- Third, the metrics and visualizations aren't easily comparable, and the results are hard to share.
+If data scientists discover a fairness issue with one tool, they then need to jump to a different tool to understand what data or model factors lie at the root of the issue before taking any steps on mitigation. The following factors further complicate this challenging process:
-### Responsible AI dashboard challenging the status quo
+- There's no central location to discover and learn about the tools, extending the time it takes to research and learn new techniques.
+- The different tools don't communicate with each other. Data scientists must wrangle the datasets, models, and other metadata as they pass them between the tools.
+- The metrics and visualizations aren't easily comparable, and the results are hard to share.
-The Responsible AI dashboard is the first comprehensive yet customizable tool, bringing together fragmented experiences under one roof, enabling you to seamlessly onboard to a single customizable framework for model debugging and data-driven decision making.
+The Responsible AI dashboard challenges this status quo. It's a comprehensive yet customizable tool that brings together fragmented experiences in one place. It enables you to seamlessly onboard to a single customizable framework for model debugging and data-driven decision-making.
-Using the Responsible AI dashboard, you can create dataset cohorts (subgroups of data), pass those cohorts to all of the supported components (for example, model interpretability, data explorer, model performance, etc.) and observe your model health for your identified cohorts. You can further compare insights from all supported components across a variety of pre-built cohorts to perform disaggregated analysis and find the blind spots of your model.
+By using the Responsible AI dashboard, you can create dataset cohorts, pass those cohorts to all of the supported components, and observe your model health for your identified cohorts. You can further compare insights from all supported components across a variety of prebuilt cohorts to perform disaggregated analysis and find the blind spots of your model.
-Whenever you're ready to share those insights with other stakeholders, you can extract them easily via our [Responsible AI PDF scorecard](how-to-responsible-ai-scorecard.md)) and attach the PDF report to your compliance reports or share it with other colleagues to build trust and get their approval.
+When you're ready to share those insights with other stakeholders, you can extract them easily by using the [Responsible AI PDF scorecard](how-to-responsible-ai-scorecard.md). Attach the PDF report to your compliance reports, or share it with colleagues to build trust and get their approval.
+## Ways to customize the Responsible AI dashboard
-## How to customize the Responsible AI dashboard?
+The Responsible AI dashboard's strength lies in its customizability. It empowers users to design tailored, end-to-end model debugging and decision-making workflows that address their particular needs.
-The Responsible AI dashboard's strength lies in its customizability. It empowers users to design tailored, end-to-end model debugging and decision-making workflows that address their particular needs. Need some inspiration? Here are some examples of how its components can be put together to analyze scenarios in diverse ways:
+Need some inspiration? Here are some examples of how the dashboard's components can be put together to analyze scenarios in diverse ways:
-| Responsible AI Dashboard Flow | Use Case |
+| Responsible AI dashboard flow | Use case |
|-|-|
-| Model Overview -> Error Analysis -> Data Explorer | To identify model errors and diagnose them by understanding the underlying data distribution |
-| Model Overview -> Fairness Assessment -> Data Explorer | To identify model fairness issues and diagnose them by understanding the underlying data distribution |
-| Model Overview -> Error Analysis -> Counterfactuals Analysis and What-If | To diagnose errors in individual instances with counterfactual analysis (minimum change to lead to a different model prediction) |
-| Model Overview -> Data Explorer | To understand the root cause of errors and fairness issues introduced via data imbalances or lack of representation of a particular data cohort |
-| Model Overview -> Interpretability | To diagnose model errors through understanding how the model has made its predictions |
-| Data Explorer -> Causal Inference | To distinguish between correlations and causations in the data or decide the best treatments to apply to see a positive outcome |
-| Interpretability -> Causal Inference | To learn whether the factors that the model has used for prediction making have any causal effect on the real-world outcome|
-| Data Explorer -> Counterfactuals Analysis and What-If | To address customer questions about what they can do next time to get a different outcome from an AI|
-
-## Who should use the Responsible AI dashboard?
-
-The Responsible AI dashboard, and its corresponding [Responsible AI scorecard](how-to-responsible-ai-scorecard.md), could be incorporated by the following personas to build trust with AI systems.
--- Machine learning professionals and data scientists who are interested in debugging and improving their machine learning models pre-deployment.-- Machine learning professionals and data scientists who are interested in sharing their model health records with product managers and business stakeholders to build trust and receive deployment permissions.-- Product managers and business stakeholders who are reviewing machine learning models pre-deployment.-- Risk officers who are reviewing machine learning models for understanding fairness and reliability issues.-- Providers of solutions to end users who would like to explain model decisions to the end users and/or help them improve the outcome next time.-- Those professionals in heavily regulated spaces who need to review machine learning models with regulators and auditors.
+| Model overview > error analysis > data explorer | To identify model errors and diagnose them by understanding the underlying data distribution |
+| Model overview > fairness assessment > data explorer | To identify model fairness issues and diagnose them by understanding the underlying data distribution |
+| Model overview > error analysis > counterfactuals analysis and what-if | To diagnose errors in individual instances with counterfactual analysis (minimum change to lead to a different model prediction) |
+| Model overview > data explorer | To understand the root cause of errors and fairness issues introduced via data imbalances or lack of representation of a particular data cohort |
+| Model overview > interpretability | To diagnose model errors through understanding how the model has made its predictions |
+| Data explorer > causal inference | To distinguish between correlations and causations in the data or decide the best treatments to apply to get a positive outcome |
+| Interpretability > causal inference | To learn whether the factors that the model has used for prediction-making have any causal effect on the real-world outcome|
+| Data explorer > counterfactuals analysis and what-if | To address customers' questions about what they can do next time to get a different outcome from an AI system|
+
+## People who should use the Responsible AI dashboard
+
+The following people can use the Responsible AI dashboard, and its corresponding [Responsible AI scorecard](how-to-responsible-ai-scorecard.md), to build trust with AI systems:
+
+- Machine learning professionals and data scientists who are interested in debugging and improving their machine learning models before deployment
+- Machine learning professionals and data scientists who are interested in sharing their model health records with product managers and business stakeholders to build trust and receive deployment permissions
+- Product managers and business stakeholders who are reviewing machine learning models before deployment
+- Risk officers who are reviewing machine learning models to understand fairness and reliability issues
+- Providers of AI solutions who want to explain model decisions to users or help them improve the outcome
+- Professionals in heavily regulated spaces who need to review machine learning models with regulators and auditors
## Supported scenarios and limitations
- The Responsible AI dashboard currently supports regression and classification (binary and multi-class) models trained on tabular structured data.
-- The Responsible AI dashboard currently supports MLFlow models that are registered in the Azure Machine Learning with a sklearn flavor only. The scikit-learn models should implement `predict()/predict_proba()` methods or the model should be wrapped within a class, which implements `predict()/predict_proba()` methods. The models must be loadable in the component environment and must be pickleable.
-- The Responsible AI dashboard currently visualizes up to 5K of your data points in the dashboard UI. You should downsample your dataset to 5K or less before passing it to the dashboard.
-- The dataset inputs to the Responsible AI dashboard must be pandas DataFrames in Parquet format. Numpy and Scipy sparse data are currently not supported.
-- The Responsible AI dashboard currently supports numeric or categorical features. For categorical features, currently the user has to explicitly specify the feature names.
+- The Responsible AI dashboard currently supports MLflow models that are registered in Azure Machine Learning with a sklearn (scikit-learn) flavor only. The scikit-learn models should implement `predict()/predict_proba()` methods, or the model should be wrapped within a class that implements `predict()/predict_proba()` methods. The models must be loadable in the component environment and must be pickleable.
+- The Responsible AI dashboard currently visualizes up to 5K of your data points on the dashboard UI. You should downsample your dataset to 5K or less before passing it to the dashboard.
+- The dataset inputs to the Responsible AI dashboard must be pandas DataFrames in Parquet format. NumPy and SciPy sparse data is currently not supported.
+- The Responsible AI dashboard currently supports numeric or categorical features. For categorical features, the user has to explicitly specify the feature names.
- The Responsible AI dashboard currently doesn't support datasets with more than 10K columns.
## Next steps
-- Learn how to generate the Responsible AI dashboard via [CLIv2 and SDKv2](how-to-responsible-ai-dashboard-sdk-cli.md) or [studio UI](how-to-responsible-ai-dashboard-ui.md).
-- Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed in the Responsible AI dashboard.
+- Learn how to generate the Responsible AI dashboard via [CLI and SDK](how-to-responsible-ai-dashboard-sdk-cli.md) or [Azure Machine Learning studio UI](how-to-responsible-ai-dashboard-ui.md).
+- Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed on the Responsible AI dashboard.
machine-learning Concept Responsible Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-responsible-ml.md
Title: What is responsible AI (preview)
+ Title: What is Responsible AI (preview)
-description: Learn what responsible AI is and how to use it with Azure Machine Learning to understand models, protect data and control the model lifecycle.
+description: Learn what Responsible AI is and how to use it with Azure Machine Learning to understand models, protect data, and control the model lifecycle.
Last updated 08/05/2022
-#Customer intent: As a data scientist, I want to learn what responsible AI is and how I can use it in Azure Machine Learning.
+#Customer intent: As a data scientist, I want to learn what Responsible AI is and how I can use it in Azure Machine Learning.
-# What is Responsible AI? (preview)
+# What is Responsible AI (preview)?
[!INCLUDE [dev v1](../../includes/machine-learning-dev-v1.md)] [!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-Responsible Artificial Intelligence (Responsible AI) is an approach to developing, assessing, and deploying AI systems in a safe, trustworthy and ethical manner. AI systems are the product of many different decisions made by those who develop and deploy them. From system purpose to how people interact with AI systems, we need to proactively guide these decisions toward more beneficial and equitable outcomes. That means keeping people and their goals at the center of system design decisions and respecting enduring values like fairness, reliability, and transparency.
+Responsible Artificial Intelligence (Responsible AI) is an approach to developing, assessing, and deploying AI systems in a safe, trustworthy, and ethical way. AI systems are the product of many decisions made by those who develop and deploy them. From system purpose to how people interact with AI systems, Responsible AI can help proactively guide these decisions toward more beneficial and equitable outcomes. That means keeping people and their goals at the center of system design decisions and respecting enduring values like fairness, reliability, and transparency.
-At Microsoft, we've developed a [Responsible AI Standard](https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2022/06/Microsoft-Responsible-AI-Standard-v2-General-Requirements-3.pdf), a framework to guide how we build AI systems, according to our six principles of fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. For us, these principles are the cornerstone of a responsible and trustworthy approach to AI, especially as intelligent technology becomes more prevalent in the products and services we use every day.
+Microsoft has developed a [Responsible AI Standard](https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2022/06/Microsoft-Responsible-AI-Standard-v2-General-Requirements-3.pdf). It's a framework for building AI systems according to six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. For Microsoft, these principles are the cornerstone of a responsible and trustworthy approach to AI, especially as intelligent technology becomes more prevalent in products and services that people use every day.
+This article demonstrates how Azure Machine Learning supports tools for enabling developers and data scientists to implement and operationalize the six principles.
-This article explains the six principles and demonstrates how Azure Machine Learning supports tools for making it seamless for ML developers and data scientists to implement and operationalize them in practice.
- ## Fairness and inclusiveness
-AI systems should treat everyone fairly and avoid affecting similarly situated groups of people in different ways. For example, when AI systems provide guidance on medical treatment, loan applications, or employment, they should make the same recommendations to everyone with similar symptoms, financial circumstances, or professional qualifications.
+AI systems should treat everyone fairly and avoid affecting similarly situated groups of people in different ways. For example, when AI systems provide guidance on medical treatment, loan applications, or employment, they should make the same recommendations to everyone who has similar symptoms, financial circumstances, or professional qualifications.
-**Fairness and inclusiveness in Azure Machine Learning**: Azure Machine Learning's [fairness assessment component](./concept-fairness-ml.md) of the [Responsible AI dashboard](./concept-responsible-ai-dashboard.md) enables data scientists and ML developers to assess model fairness across sensitive groups defined in terms of gender, ethnicity, age, etc.
+**Fairness and inclusiveness in Azure Machine Learning**: The [fairness assessment](./concept-fairness-ml.md) component of the [Responsible AI dashboard](./concept-responsible-ai-dashboard.md) enables data scientists and developers to assess model fairness across sensitive groups defined in terms of gender, ethnicity, age, and other characteristics.
## Reliability and safety
-To build trust, it's critical that AI systems operate reliably, safely, and consistently under normal circumstances and in unexpected conditions. These systems should be able to operate as they were originally designed, respond safely to unanticipated conditions, and resist harmful manipulation. It's also important to be able to verify that these systems are behaving as intended under actual operating conditions. How they behave and the variety of conditions they can handle reliably and safely largely reflects the range of situations and circumstances that developers anticipate during design and testing.
+To build trust, it's critical that AI systems operate reliably, safely, and consistently. These systems should be able to operate as they were originally designed, respond safely to unanticipated conditions, and resist harmful manipulation. How they behave and the variety of conditions they can handle reflect the range of situations and circumstances that developers anticipated during design and testing.
+
+**Reliability and safety in Azure Machine Learning**: The [error analysis](./concept-error-analysis.md) component of the [Responsible AI dashboard](./concept-responsible-ai-dashboard.md) enables data scientists and developers to:
-**Reliability and safety in Azure Machine Learning**: Azure Machine Learning's [Error Analysis](./concept-error-analysis.md) component of the [Responsible AI dashboard](./concept-responsible-ai-dashboard.md) enables data scientists and ML developers to get a deep understanding of how failure is distributed for a model, identify cohorts of data with higher error rate than the overall benchmark. These discrepancies might occur when the system or model underperforms for specific demographic groups or infrequently observed input conditions in the training data.
+- Get a deep understanding of how failure is distributed for a model.
+- Identify cohorts (subsets) of data with a higher error rate than the overall benchmark.
+
+These discrepancies might occur when the system or model underperforms for specific demographic groups or for infrequently observed input conditions in the training data.
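Programmatically, these views come from the open-source `responsibleai` package that backs the dashboard. A minimal sketch, assuming a scikit-learn style model and a toy dataset (both illustrative, not a recommended setup):

```python
# Minimal error-analysis sketch with the responsibleai package.
# The dataset, features, and model below are toy assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from responsibleai import RAIInsights

df = pd.DataFrame({
    "income": [20, 35, 50, 65, 80, 95, 30, 70, 55, 40],
    "age":    [22, 30, 41, 52, 63, 48, 27, 56, 33, 45],
    "label":  [0, 0, 1, 1, 1, 1, 0, 1, 1, 0],
})
train_df, test_df = df.iloc[:7], df.iloc[7:]

model = RandomForestClassifier(random_state=0).fit(
    train_df.drop(columns="label"), train_df["label"]
)

rai_insights = RAIInsights(
    model, train_df, test_df, target_column="label", task_type="classification"
)
rai_insights.error_analysis.add()  # builds the error tree and heatmap analyses
rai_insights.compute()
```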
## Transparency
-When AI systems are used to help inform decisions that have tremendous impacts on people's lives, it's critical that people understand how those decisions were made. For example, a bank might use an AI system to decide whether a person is creditworthy, or a company might use an AI system to determine the most qualified candidates to hire.
+When AI systems help inform decisions that have tremendous impacts on people's lives, it's critical that people understand how those decisions were made. For example, a bank might use an AI system to decide whether a person is creditworthy. A company might use an AI system to determine the most qualified candidates to hire.
+
+A crucial part of transparency is *interpretability*: the useful explanation of the behavior of AI systems and their components. Improving interpretability requires stakeholders to comprehend how and why AI systems function the way they do. The stakeholders can then identify potential performance issues, fairness issues, exclusionary practices, or unintended outcomes.
+
+**Transparency in Azure Machine Learning**: The [model interpretability](how-to-machine-learning-interpretability.md) and [counterfactual what-if](./concept-counterfactual-analysis.md) components of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) enable data scientists and developers to generate human-understandable descriptions of the predictions of a model.
+
+The model interpretability component provides multiple views into a model's behavior:
+
+- *Global explanations*. For example, what features affect the overall behavior of a loan allocation model?
+- *Local explanations*. For example, why was a customer's loan application approved or rejected?
+- *Model explanations for a selected cohort of data points*. For example, what features affect the overall behavior of a loan allocation model for low-income applicants?
-A crucial part of transparency is what we refer to as interpretability or the useful explanation of the behavior of AI systems and their components. Improving interpretability requires that stakeholders comprehend how and why AI systems function the way they do so that they can identify potential performance issues, fairness issues, exclusionary practices, or unintended outcomes.
+The counterfactual what-if component enables understanding and debugging a machine learning model in terms of how it reacts to feature changes and perturbations.
-**Transparency in Azure Machine Learning**: Azure Machine Learning's [Model Interpretability](how-to-machine-learning-interpretability.md) and [Counterfactual What-If](./concept-counterfactual-analysis.md) components of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) enable data scientists and ML developers to generate human-understandable descriptions of the predictions of a model. The Model Interpretability component provides multiple views into their model's behavior: global explanations (for example, what features affect the overall behavior of a loan allocation model?) and local explanations (for example, why a customer's loan application was approved or rejected?). One can also observe model explanations for a selected cohort of data points (for example, what features affect the overall behavior of a loan allocation model for low-income applicants?). Moreover, the Counterfactual What-If component enables understanding and debugging a machine learning model in terms of how it reacts to feature changes and perturbations. Azure Machine Learning also supports a [Responsible AI scorecard](./how-to-responsible-ai-scorecard.md), a customizable PDF report that machine learning developers can easily configure, generate, download, and share with their technical and non-technical stakeholders to educate them about their datasets and models health, achieve compliance, and build trust. This scorecard could also be used in audit reviews to uncover the characteristics of machine learning models.
+Azure Machine Learning also supports a [Responsible AI scorecard](./how-to-responsible-ai-scorecard.md). The scorecard is a customizable PDF report that developers can easily configure, generate, download, and share with their technical and non-technical stakeholders to educate them about the health of their datasets and models, achieve compliance, and build trust. This scorecard can also be used in audit reviews to uncover the characteristics of machine learning models.
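Continuing the illustrative `RAIInsights` object from the error-analysis sketch earlier, interpretability and counterfactual analyses can be requested the same way; the `total_CFs` value and the local dashboard launch below are assumptions, not prescribed settings:

```python
# Sketch only: add explanations and counterfactuals to the same insights object.
rai_insights.explainer.add()  # global and local feature-importance explanations
rai_insights.counterfactual.add(total_CFs=5, desired_class="opposite")
rai_insights.compute()

# Browse the results (explanations, what-if counterfactuals) in a local widget.
from raiwidgets import ResponsibleAIDashboard
ResponsibleAIDashboard(rai_insights)
```

On realistic datasets, counterfactual generation can take a while; the dashboard then shows, per data point, small feature perturbations that would flip the model's prediction.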
-## Privacy and Security
+## Privacy and security
-As AI becomes more prevalent, protecting the privacy and securing important personal and business information is becoming more critical and complex. With AI, privacy and data security issues require especially close attention because access to data is essential for AI systems to make accurate and informed predictions and decisions about people. AI systems must comply with privacy laws that require transparency about the collection, use, and storage of data and mandate that consumers have appropriate controls to choose how their data is used.
+As AI becomes more prevalent, protecting privacy and securing personal and business information are becoming more important and complex. With AI, privacy and data security require close attention because access to data is essential for AI systems to make accurate and informed predictions and decisions about people. AI systems must comply with privacy laws that:
-**Privacy and Security in Azure Machine Learning**: Azure Machine Learning is enabling administrators, DevOps, and MLOps developers to [create a secure configuration that is compliant](concept-enterprise-security.md) with their company's policies. With Azure Machine Learning and the Azure platform, users can:
+- Require transparency about the collection, use, and storage of data.
+- Mandate that consumers have appropriate controls to choose how their data is used.
-- Restrict access to resources and operations by user account or groups
-- Restrict incoming and outgoing network communications
-- Encrypt data in transit and at rest
-- Scan for vulnerabilities
-- Apply and audit configuration policies
+**Privacy and security in Azure Machine Learning**: Azure Machine Learning enables administrators and developers to [create a secure configuration that complies](concept-enterprise-security.md) with their companies' policies. With Azure Machine Learning and the Azure platform, users can:
-Microsoft has also created two open source packages that could enable further implementation of privacy and security principles:
+- Restrict access to resources and operations by user account or group.
+- Restrict incoming and outgoing network communications (see the sketch after this list).
+- Encrypt data in transit and at rest.
+- Scan for vulnerabilities.
+- Apply and audit configuration policies.
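As one illustrative slice of these controls (referenced in the list above), a workspace can be created with public network access disabled through the v2 Python SDK. This is a sketch with placeholder IDs, not a complete secure configuration:

```python
# Sketch: create a workspace that refuses public network access (v2 SDK).
# Subscription and resource group IDs are placeholder assumptions.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Workspace
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
)

ws = Workspace(
    name="locked-down-ws",
    location="eastus",
    public_network_access="Disabled",  # reach the workspace via private endpoints
)
ml_client.workspaces.begin_create(ws).result()  # long-running create operation
```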
-- SmartNoise: Differential privacy is a set of systems and practices that help keep the data of individuals safe and private. In machine learning solutions, differential privacy may be required for regulatory compliance. Implementing differentially private systems, however, is difficult. [SmartNoise](https://github.com/opendifferentialprivacy/smartnoise-core) is an open-source project (co-developed by Microsoft) that contains different components for building global differentially private systems.
+Microsoft has also created two open-source packages that can enable further implementation of privacy and security principles:
+- [SmartNoise](https://github.com/opendifferentialprivacy/smartnoise-core): Differential privacy is a set of systems and practices that help keep the data of individuals safe and private. In machine learning solutions, differential privacy might be required for regulatory compliance. SmartNoise is an open-source project (co-developed by Microsoft) that contains components for building differentially private systems that are global.
-- Counterfit: [Counterfit](https://github.com/Azure/counterfit/) is an open-source project that comprises a command-line tool and generic automation layer to allow developers to simulate cyber-attacks against AI systems. Anyone can download the tool and deploy it through Azure Shell, to run in-browser, or locally in an Anaconda Python environment. It can assess AI models hosted in various cloud environments, on-premises, or in the edge. The tool is agnostic to AI models and supports various data types, including text, images, or generic input.
+- [Counterfit](https://github.com/Azure/counterfit/): Counterfit is an open-source project that comprises a command-line tool and generic automation layer to allow developers to simulate cyberattacks against AI systems. Anyone can download the tool and deploy it through Azure Cloud Shell to run in a browser, or deploy it locally in an Anaconda Python environment. It can assess AI models hosted in various cloud environments, on-premises, or at the edge. The tool is agnostic to AI models and supports various data types, including text, images, or generic input.
## Accountability
-The people who design and deploy AI systems must be accountable for how their systems operate. Organizations should draw upon industry standards to develop accountability norms. These norms can ensure that AI systems aren't the final authority on any decision that impacts people's lives and that humans maintain meaningful control over otherwise highly autonomous AI systems.
+The people who design and deploy AI systems must be accountable for how their systems operate. Organizations should draw upon industry standards to develop accountability norms. These norms can ensure that AI systems aren't the final authority on any decision that affects people's lives. They can also ensure that humans maintain meaningful control over otherwise highly autonomous AI systems.
-**Accountability in Azure Machine Learning**: Azure Machine Learning's [Machine Learning Operations (MLOps)](concept-model-management-and-deployment.md) is based on DevOps principles and practices that increase the efficiency of AI workflows. Azure Machine Learning provides the following MLOps capabilities for better accountability of your AI systems:
+**Accountability in Azure Machine Learning**: [Machine learning operations (MLOps)](concept-model-management-and-deployment.md) is based on DevOps principles and practices that increase the efficiency of AI workflows. Azure Machine Learning provides the following MLOps capabilities for better accountability of your AI systems:
-- Register, package, and deploy models from anywhere. You can also track the associated metadata required to use the model.
-- Capture the governance data for the end-to-end ML lifecycle. The logged lineage information can include who is publishing models, why changes were made, and when models were deployed or used in production.
-- Notify and alert on events in the ML lifecycle. For example, experiment completion, model registration, model deployment, and data drift detection.
-- Monitor ML applications for operational and ML-related issues. Compare model inputs between training and inference, explore model-specific metrics, and provide monitoring and alerts on your ML infrastructure.
+- Register, package, and deploy models from anywhere. You can also track the associated metadata that's required to use the model (see the registration sketch after this list).
+- Capture the governance data for the end-to-end machine learning lifecycle. The logged lineage information can include who is publishing models, why changes were made, and when models were deployed or used in production.
+- Notify and alert on events in the machine learning lifecycle. Examples include experiment completion, model registration, model deployment, and data drift detection.
+- Monitor applications for operational issues and issues related to machine learning. Compare model inputs between training and inference, explore model-specific metrics, and provide monitoring and alerts on your machine learning infrastructure.
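As a minimal sketch of the first capability (registering a model with tracked metadata) using the v2 Python SDK; the subscription, resource group, workspace, and model path are placeholder assumptions:

```python
# Sketch: register a model with metadata via the v2 SDK (azure-ai-ml).
# All IDs and the model path are placeholder assumptions.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Model
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

model = Model(
    path="./outputs/model.pkl",          # hypothetical local artifact
    name="credit-default-model",
    description="Sketch of a registered model with tracked metadata",
    tags={"owner": "ml-team", "training-run": "example-run-id"},
)
ml_client.models.create_or_update(model)  # registered versions are tracked
```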
-Besides the MLOps capabilities, Azure Machine Learning's [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) creates accountability by enabling cross-stakeholders communications and by empowering machine learning developers to easily configure, download, and share their model health insights with their technical and non-technical stakeholders to educate them about their AI's data and model health, and build trust.
+Besides the MLOps capabilities, the [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) in Azure Machine Learning creates accountability by enabling cross-stakeholder communications. The scorecard also creates accountability by empowering developers to configure, download, and share model health insights with their technical and non-technical stakeholders. Sharing these insights can help build trust.
-The ML platform also enables decision-making by informing model-driven and data-driven business decisions:
+The machine learning platform also enables decision-making by informing business decisions through:
-- Data-driven insights to help stakeholders understand causal treatment effects on an outcome, using historic data only. For example, "how would a medicine impact a patient's blood pressure?". Such insights are provided through the [Causal Inference](concept-causal-inference.md) component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md).
-- Model-driven insights, to answer end-users questions such as "what can I do to get a different outcome from your AI next time?" to inform their actions. Such insights are provided to data scientists through the [Counterfactual What-If](concept-counterfactual-analysis.md) component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md).
+- Data-driven insights, to help stakeholders understand causal treatment effects on an outcome, by using historical data only. For example, "How would a medicine affect a patient's blood pressure?" These insights are provided through the [causal inference](concept-causal-inference.md) component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) (see the sketch after this list).
+- Model-driven insights, to answer users' questions (such as "What can I do to get a different outcome from your AI next time?") so they can take action. Such insights are provided to data scientists through the [counterfactual what-if](concept-counterfactual-analysis.md) component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md).
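Assuming the same illustrative `RAIInsights` object from the earlier sketches, causal analysis is added in the same pattern; treating `income` as the treatment feature is purely hypothetical, and realistic data volumes are needed for meaningful estimates:

```python
# Sketch: estimate causal treatment effects with the responsibleai package.
rai_insights.causal.add(treatment_features=["income"])  # hypothetical treatment
rai_insights.compute()
causal_results = rai_insights.causal.get()  # computed causal effect estimates
```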
## Next steps

- For more information on how to implement Responsible AI in Azure Machine Learning, see [Responsible AI dashboard](concept-responsible-ai-dashboard.md).
-- Learn how to generate the Responsible AI dashboard via [CLIv2 and SDKv2](how-to-responsible-ai-dashboard-sdk-cli.md) or [Azure Machine Learning studio UI](how-to-responsible-ai-dashboard-ui.md).
+- Learn how to generate the Responsible AI dashboard via [CLI and SDK](how-to-responsible-ai-dashboard-sdk-cli.md) or [Azure Machine Learning studio UI](how-to-responsible-ai-dashboard-ui.md).
- Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed in your Responsible AI dashboard.
-- Learn about Microsoft's [Responsible AI Standard](https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2022/06/Microsoft-Responsible-AI-Standard-v2-General-Requirements-3.pdf), a framework to guide how to build AI systems, according to Microsoft's six principles of fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.
+- Learn about the [Responsible AI Standard](https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2022/06/Microsoft-Responsible-AI-Standard-v2-General-Requirements-3.pdf) for building AI systems according to six key principles.
machine-learning Reference Ubuntu Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/reference-ubuntu-vm.md
Azure Machine Learning is a fully managed cloud service that enables you to buil
After you sign in to Azure Machine Learning studio, you can use an experimentation canvas to build a logical flow for the machine learning algorithms. You also have access to a Jupyter notebook that is hosted on Azure Machine Learning and can work seamlessly with the experiments in Azure Machine Learning studio.
-Operationalize the machine learning models that you have built by wrapping them in a web service interface. Operationalizing machine learning models enables clients written in any language to invoke predictions from those models. For more information, see the [Machine Learning documentation](https://azure.microsoft.com/documentation/services/machine-learning/).
+Operationalize the machine learning models that you have built by wrapping them in a web service interface. Operationalizing machine learning models enables clients written in any language to invoke predictions from those models. For more information, see the [Machine Learning documentation](/azure/machine-learning/).
You can also build your models in R or Python on the VM, and then deploy them in production on Azure Machine Learning. We have installed libraries in R (**AzureML**) and Python (**azureml**) to enable this functionality.
machine-learning How To Github Actions Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-github-actions-machine-learning.md
Get started with [GitHub Actions](https://docs.github.com/en/actions) to train a model on Azure Machine Learning.

> [!NOTE]
-> GitHub Actions for Azure Machine Learning are provided as-is, and are not fully supported by Microsoft. If you encounter problems with a specific action, open an issue in the repository for the action. For example, if you encounter a problem with the aml-deploy action, report the problem in the [https://github.com/Azure/aml-deploy]( https://github.com/Azure/aml-deploy) repo.
+> GitHub Actions for Azure Machine Learning are provided as-is, and are not fully supported by Microsoft. If you encounter problems with a specific action, open an issue in the repository for the action. For example, if you encounter a problem with the aml-deploy action, report the problem in the [https://github.com/Azure/aml-deploy](https://github.com/Azure/aml-deploy) repo.
## Prerequisites
machine-learning Samples Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/samples-notebooks.md
Title: Example Jupyter Notebooks
+ Title: Example Jupyter Notebooks (v2)
-description: Learn how to find and use the Juypter Notebooks designed to help you explore the SDK and serve as models for your own machine learning projects.
+description: Learn how to find and use the Jupyter Notebooks designed to help you explore the SDK (v2) and serve as models for your own machine learning projects.
Previously updated : 12/27/2021 Last updated : 08/30/2022

#Customer intent: As a professional data scientist, I find and run example Jupyter Notebooks for Azure Machine Learning.

# Explore Azure Machine Learning with Jupyter Notebooks
-The [Azure Machine Learning Notebooks repository](https://github.com/azure/machinelearningnotebooks) includes the latest Azure Machine Learning Python SDK samples. These Jupyter notebooks are designed to help you explore the SDK and serve as models for your own machine learning projects. In this repository, you'll find tutorial notebooks in the **tutorials** folder and feature-specific notebooks in the **how-to-use-azureml** folder.
+> [!div class="op_single_selector" title1="Select the Azure Machine Learning version you are using:"]
+> * [v1](<v1/samples-notebooks-v1.md>)
+> * [v2 (preview)](samples-notebooks.md)
-Also explore the community-driven repository of [AzureML-Examples](https://github.com/Azure/azureml-examples). This repository includes notebooks and [CLI (v2)](how-to-configure-cli.md) examples. For information on the various example types, see the [readme](https://github.com/Azure/azureml-examples#azure-machine-learning-examples).
+The [AzureML-Examples](https://github.com/Azure/azureml-examples) repository includes the latest (v2) Azure Machine Learning Python CLI and SDK samples. For information on the various example types, see the [readme](https://github.com/Azure/azureml-examples#azure-machine-learning-examples).
-This article shows you how to access the repositories from the following environments:
+This article shows you how to access the repository from the following environments:
-- [Azure Machine Learning compute instance](#notebookvm)
-- [Bring your own notebook server](#byo)
-- [Data Science Virtual Machine](#dsvm)
+- Azure Machine Learning compute instance
+- Your own compute resource
+- Data Science Virtual Machine
-<a name="notebookvm"></a>
-## Get samples on Azure Machine Learning compute instance
+## Option 1: Access on Azure Machine Learning compute instance (recommended)
The easiest way to get started with the samples is to complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md). Once completed, you'll have a dedicated notebook server pre-loaded with the SDK and the Azure Machine Learning Notebooks repository. No downloads or installation necessary.
To add the community-driven repository, [use a compute instance terminal](how-to
```
git clone https://github.com/Azure/azureml-examples.git --depth 1
```
-<a name="byo"></a>
-
-## Get samples on your notebook server
+## Option 2: Access on your own notebook server
If you'd like to bring your own notebook server for local development, follow these steps on your computer.
+These instructions install the base SDK packages necessary for the quickstart and tutorial notebooks. Other sample notebooks may require you to install extra components. For more information, see [Install the Azure Machine Learning SDK for Python](https://aka.ms/sdk-v2-install).
-These instructions install the base SDK packages necessary for the quickstart and tutorial notebooks. Other sample notebooks may require you to install extra components. For more information, see [Install the Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/install).
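Once the packages are installed, a quick connectivity check like the following can confirm the local environment reaches a workspace; this sketch assumes the v2 `azure-ai-ml` and `azure-identity` packages, and the IDs are placeholders:

```python
# Sketch: verify a local v2 SDK setup can connect to a workspace.
# Subscription, resource group, and workspace names are placeholders.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)
print(ml_client.workspace_name)  # prints the connected workspace's name
```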
-<a name="dsvm"></a>
-## Get samples on DSVM
+## Option 3: Access on a DSVM
The Data Science Virtual Machine (DSVM) is a customized VM image built specifically for doing data science. If you [create a DSVM](how-to-configure-environment.md#dsvm), the SDK and notebook server are installed and configured for you. However, you'll still need to create a workspace and clone the sample repository.

## Next steps
machine-learning How To Create Attach Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-attach-kubernetes.md
If you already have AKS cluster in your Azure subscription, you can use it with
For more information on creating an AKS cluster using the Azure CLI or portal, see the following articles:
-* [Create an AKS cluster (CLI)](/cli/azure/aks?bc=%2fazure%2fbread%2ftoc.json&toc=%2fazure%2faks%2fTOC.json#az-aks-create)
+* [Create an AKS cluster (CLI)](/cli/azure/aks?bc=/azure/bread/toc.json&toc=/azure/aks/TOC.json#az-aks-create)
* [Create an AKS cluster (portal)](../../aks/learn/quick-kubernetes-deploy-portal.md) * [Create an AKS cluster (ARM Template on Azure Quickstart templates)](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.containerinstance/aks-azml-targetcompute)
machine-learning Samples Notebooks V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/samples-notebooks-v1.md
+
+ Title: Example Jupyter Notebooks (v1)
+
+description: Learn how to find and use the Jupyter Notebooks designed to help you explore the SDK (v1) and serve as models for your own machine learning projects.
+ Last updated : 12/27/2021
+#Customer intent: As a professional data scientist, I find and run example Jupyter Notebooks for Azure Machine Learning.
++
+# Explore Azure Machine Learning with Jupyter Notebooks (v1)
+
+> [!div class="op_single_selector" title1="Select the Azure Machine Learning version you are using:"]
+> * [v1](<samples-notebooks-v1.md>)
+> * [v2 (preview)](../samples-notebooks.md)
+
+The [Azure Machine Learning Notebooks repository](https://github.com/azure/machinelearningnotebooks) includes Azure Machine Learning Python SDK (v1) samples. These Jupyter notebooks are designed to help you explore the SDK and serve as models for your own machine learning projects. In this repository, you'll find tutorial notebooks in the **tutorials** folder and feature-specific notebooks in the **how-to-use-azureml** folder.
+
+This article shows you how to access the repositories from the following environments:
+
+- Azure Machine Learning compute instance
+- Bring your own notebook server
+- Data Science Virtual Machine
++
+## Option 1: Access on Azure Machine Learning compute instance (recommended)
+
+The easiest way to get started with the samples is to complete the [Quickstart: Get started with Azure Machine Learning](../quickstart-create-resources.md). Once completed, you'll have a dedicated notebook server pre-loaded with the SDK and the Azure Machine Learning Notebooks repository. No downloads or installation necessary.
+
+## Option 2: Access on your own notebook server
+
+If you'd like to bring your own notebook server for local development, follow these steps on your computer.
++
+These instructions install the base SDK packages necessary for the quickstart and tutorial notebooks. Other sample notebooks may require you to install extra components. For more information, see [Install the Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/install).
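A quick sanity check for a local v1 setup, assuming `pip install azureml-sdk` has completed and a `config.json` downloaded from the workspace's portal page sits in the working directory:

```python
# Sketch: verify a local v1 SDK setup; assumes config.json is present.
from azureml.core import Workspace

ws = Workspace.from_config()  # reads config.json from the current directory
print(ws.name, ws.location)
```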
+
+## Option 3: Access on a DSVM
+
+The Data Science Virtual Machine (DSVM) is a customized VM image built specifically for doing data science. If you [create a DSVM](../how-to-configure-environment.md#dsvm), the SDK and notebook server are installed and configured for you. However, you'll still need to create a workspace and clone the sample repository.
++
+## Next steps
+
+Explore the [MachineLearningNotebooks](https://github.com/Azure/MachineLearningNotebooks) repository to discover what Azure Machine Learning can do.
+
+For more GitHub sample projects and examples, see these repos:
++ [Microsoft/MLOps](https://github.com/Microsoft/MLOps)
++ [Microsoft/MLOpsPython](https://github.com/microsoft/MLOpsPython)
+
managed-grafana How To Deterministic Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-deterministic-ip.md
+
+ Title: How to set up and use deterministic outbound APIs in Azure Managed Grafana
+description: Learn how to set up and use deterministic outbound APIs in Azure Managed Grafana
+ Last updated : 08/24/2022
+
+
+# Use deterministic outbound IPs
+
+In this guide, learn how to activate deterministic outbound IP support, which Azure Managed Grafana uses to communicate with its data sources; disable public access; and set up a firewall rule to allow inbound requests from your Grafana instance.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- A data source. For example, an [Azure Data Explorer database](/azure/data-explorer/create-cluster-database-portal).
+
+## Enable deterministic outbound IPs
+
+Deterministic outbound IP support is disabled by default in Azure Managed Grafana. You can enable this feature during the creation of the instance, or you can activate it on an instance that's already been created.
+
+### Create an Azure Managed Grafana workspace with deterministic outbound IPs enabled
+
+#### [Portal](#tab/portal)
+
+When creating an instance, in the **Advanced** tab, set **Deterministic outbound IP** to **Enable**.
+
+For more information about creating a new instance, go to [Quickstart: Create an Azure Managed Grafana instance](quickstart-managed-grafana-portal.md).
+
+#### [Azure CLI](#tab/azure-cli)
+
+Run the [az grafana create](/cli/azure/grafana#az-grafana-create) command to create an Azure Managed Grafana instance with deterministic outbound IPs enabled. Replace `<azure-managed-grafana-name>` and `<resource-group>` with the name of the new Azure Managed Grafana instance and a resource group.
+
+```azurecli-interactive
+az grafana create --name <azure-managed-grafana-name> --resource-group <resource-group> --deterministic-outbound-ip Enabled
+```
+++
+### Activate deterministic outbound IPs on an existing Azure Managed Grafana instance
+
+#### [Portal](#tab/portal)
+
+ 1. In the Azure portal, under **Settings**, select **Configuration**, and then under **Deterministic outbound IP**, select **Enable**.
+
+ :::image type="content" source="media/deterministic-ips/enable-deterministic-ip-addresses.png" alt-text="Screenshot of the Azure platform. Enable deterministic IPs.":::
+ 1. Select **Save** to confirm the activation of deterministic outbound IP addresses.
+ 1. Select **Refresh** to display the list of IP addresses under **Static IP address**.
+
+#### [Azure CLI](#tab/azure-cli)
+
+Run the [az grafana update](/cli/azure/grafana#az-grafana-update) command to update your Azure Managed Grafana instance and enable deterministic outbound IPs. Replace `<azure-managed-grafana-name>` with the name of your Azure Managed Grafana instance.
+
+```azurecli-interactive
+az grafana update --name <azure-managed-grafana-name> --deterministic-outbound-ip Enabled
+```
+
+The deterministic outbound IPs are listed under `outboundIPs` in the output of the Azure CLI.
+++
+## Disable public access to a data source and allow Azure Managed Grafana IP addresses
+
+This example demonstrates how to disable public access to Azure Data Explorer and set up private endpoints. This process is similar for other Azure data sources.
+
+1. Open an Azure Data Explorer Cluster instance in the Azure portal, and under **Settings**, select **Networking**.
+1. In the **Public Access** tab, select **Disabled** to disable public access to the data source.
+1. Under **Firewall**, check the box **Add your client IP address ('88.126.99.17')** and under **Address range**, enter the IP addresses found in your Azure Managed Grafana workspace.
+1. Select **Save** to finish adding the Azure Managed Grafana outbound IP addresses to the allowlist.
+
+ :::image type="content" source="media/deterministic-ips/add-ip-data-source-firewall.png" alt-text="Screenshot of the Azure platform. Add Azure Managed Grafana outbound IPs to datasource firewall allowlist.":::
+
+You have limited access to your data source by disabling public access, activating a firewall, and allowing access from Azure Managed Grafana IP addresses.
+
+## Check access to the data source
+
+Check if the Azure Managed Grafana endpoint can still access your data source.
+
+### [Portal](#tab/portal)
+
+1. In the Azure portal, go to your instance's **Overview** page and select the **Endpoint** URL.
+
+1. Go to **Configuration > Data Source > Azure Data Explorer Datasource > Settings** and at the bottom of the page, select **Save & test**:
+ - If the message "Success" is displayed, Azure Managed Grafana can access your data source.
+ - If the following error message is displayed, Azure Managed Grafana can't access the data source: `Post "https://<Azure-Data-Explorer-URI>/v1/rest/query": dial tcp 13.90.24.175:443: i/o timeout`. Make sure that you've entered the IP addresses correctly in the data source firewall allowlist.
+
+### [Azure CLI](#tab/azure-cli)
+
+Run the [az grafana data-source query](/cli/azure/grafana/data-source#az-grafana-data-source-query) command to query the data source. Replace `<azure-managed-grafana-name>` and `<data-source-name>` with the name of your Azure Managed Grafana instance and the name of your data source.
+
+```azurecli-interactive
+az grafana data-source query --name <azure-managed-grafana-name> --data-source <data-source-name> --output table
+```
+
+If the following error message is displayed, Azure Managed Grafana can't access the data source: `"error": "Post \\"https://<Azure-Data-Explorer-URI>/v1/rest/query\\": dial tcp 13.90.24.175:443: i/o timeout"`. Make sure that you've entered the IP addresses correctly in the data source firewall allowlist.
+
+> [!TIP]
+> You can get the names of your data sources by running `az grafana data-source list --name <azure-managed-grafana-instance-name> --output table`
+++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Call Grafana APIs](how-to-api-calls.md)
marketplace Update Existing Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/update-existing-offer.md
Previously updated : 06/01/2022 Last updated : 08/29/2022
Remember to republish your offer after making updates for the changes to take ef
## Stop distribution of an offer or plan
-You can remove offer listings and plans from the Microsoft commercial marketplace, which will prevent new customers from finding and purchasing them. Any customers who previously acquired the offer or plan can still use it, and they can download it again if needed. However, they won't get updates if you decide to republish the offer or plan at a later time.
+You can remove offer listings and plans from the Microsoft commercial marketplace, which will prevent new customers from finding and purchasing them. Any customers who previously acquired the offer or plan can still use it, but they can't re-download or redeploy it. Also, they won't get updates if you decide to republish the offer or plan at a later time.
- To stop distribution of an offer after you've published it, select **Stop distribution** from the **Offer overview** page. Within a few hours of your confirmation, the offer will no longer be visible in the commercial marketplace.
migrate How To Create Azure Vmware Solution Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-azure-vmware-solution-assessment.md
ms. Previously updated : 06/26/2020 Last updated : 04/06/2022

# Create an Azure VMware Solution assessment
-This article describes how to create an Azure VMware Solution assessment for on-premises servers in VMware environment with Azure Migrate: Discovery and assessment.
+This article describes how to create an Azure VMware Solution assessment for on-premises VMs in a VMware vSphere environment with Azure Migrate: Discovery and assessment.
[Azure Migrate](migrate-services-overview.md) helps you to migrate to Azure. Azure Migrate provides a centralized hub to track discovery, assessment, and migration of on-premises infrastructure, applications, and data to Azure. The hub provides Azure tools for assessment and migration, as well as third-party independent software vendor (ISV) offerings.
This article describes how to create an Azure VMware Solution assessment for on-
- Make sure you've [created](./create-manage-projects.md) an Azure Migrate project. - If you've already created a project, make sure you've [added](how-to-assess.md) the Azure Migrate: Discovery and assessment tool.-- To create an assessment, you need to set up an Azure Migrate appliance for [VMware](how-to-set-up-appliance-vmware.md), which discovers the on-premises servers, and sends metadata and performance data to Azure Migrate: Discovery and assessment. [Learn more](migrate-appliance.md).
+- To create an assessment, you need to set up an Azure Migrate appliance for [VMware vSphere](how-to-set-up-appliance-vmware.md), which discovers the on-premises servers, and sends metadata and performance data to Azure Migrate: Discovery and assessment. [Learn more](migrate-appliance.md).
- You could also [import the server metadata](./tutorial-discover-import.md) in comma-separated values (CSV) format.
There are three types of assessments you can create using Azure Migrate: Discove
**Assessment Type** | **Details** |
-**Azure VM** | Assessments to migrate your on-premises servers to Azure virtual machines. You can assess your on-premises servers in [VMware](how-to-set-up-appliance-vmware.md) and [Hyper-V](how-to-set-up-appliance-hyper-v.md) environment, and [physical servers](how-to-set-up-appliance-physical.md) for migration to Azure VMs using this assessment type.
+**Azure VM** | Assessments to migrate your on-premises servers to Azure virtual machines. You can assess your on-premises VMs in [VMware vSphere](how-to-set-up-appliance-vmware.md) and [Hyper-V](how-to-set-up-appliance-hyper-v.md) environment, and [physical servers](how-to-set-up-appliance-physical.md) for migration to Azure VMs using this assessment type.
**Azure SQL** | Assessments to migrate your on-premises SQL servers from your VMware environment to Azure SQL Database or Azure SQL Managed Instance.
-**Azure App Service** | Assessments to migrate your on-premises ASP.NET web apps, running on IIS web servers, from your VMware environment to Azure App Service.
-**Azure VMware Solution (AVS)** | Assessments to migrate your on-premises servers to [Azure VMware Solution (AVS)](../azure-vmware/introduction.md). You can assess your on-premises servers in [VMware environment](how-to-set-up-appliance-vmware.md) for migration to Azure VMware Solution (AVS) using this assessment type. [Learn more](concepts-azure-vmware-solution-assessment-calculation.md)
+**Azure App Service** | Assessments to migrate your on-premises ASP.NET web apps, running on IIS web servers, from your VMware vSphere environment to Azure App Service.
+**Azure VMware Solution (AVS)** | Assessments to migrate your on-premises servers to [Azure VMware Solution (AVS)](../azure-vmware/introduction.md). You can assess your on-premises VMs in [VMware vSphere environment](how-to-set-up-appliance-vmware.md) for migration to Azure VMware Solution (AVS) using this assessment type. [Learn more](concepts-azure-vmware-solution-assessment-calculation.md)
> [!NOTE]
-> Azure VMware Solution (AVS) assessment can be created for servers in VMware environment only.
+> Azure VMware Solution (AVS) assessment can be created for virtual machines in VMware vSphere environment only.
There are two types of sizing criteria that you can use to create Azure VMware Solution (AVS) assessments:
There are two types of sizing criteria that you can use to create Azure VMware S
- In **Target location**, specify the Azure region to which you want to migrate. - Size and cost recommendations are based on the location that you specify.
- - The **Storage type** is defaulted to **vSAN**. This is the default storage type for an AVS private cloud.
+ - The **Storage type** is defaulted to **vSAN**. This is the default storage type for an Azure VMware Solution private cloud.
 - In **Reserved Instances**, specify whether you want to use reserved instances for Azure VMware Solution nodes when you migrate your VMs.
   - If you select to use a reserved instance, you can't specify **Discount (%)**. [Learn more](../azure-vmware/reserved-instance.md)
1. In **VM Size**:
- - The **Node type** is defaulted to **AV36**. Azure Migrate recommends the node of nodes needed to migrate the servers to AVS.
+ - The **Node type** is defaulted to **AV36**. Azure Migrate recommends the number of nodes needed to migrate the servers to Azure VMware Solution.
 - In **FTT setting, RAID level**, select the Failures to Tolerate and RAID combination. The selected FTT option, combined with the on-premises server disk requirement, determines the total vSAN storage required in AVS.
 - In **CPU Oversubscription**, specify the ratio of virtual cores associated with one physical core in the AVS node. Oversubscription of greater than 4:1 might cause performance degradation, but can be used for web server type workloads.
 - In **Memory overcommit factor**, specify the ratio of memory overcommit on the cluster. A value of 1 represents 100% memory use, 0.5 for example is 50%, and 2 would be using 200% of available memory. You can only add values from 0.5 to 10 up to one decimal place.
- - In **Dedupe and compression factor**, specify the anticipated dedupe and compression factor for your workloads. Actual value can be obtained from on-premises vSAN or storage config and this may vary by workload. A value of 3 would mean 3x so for 300GB disk only 100GB storage would be used. A value of 1 would mean no dedupe or compression. You can only add values from 1 to 10 up to one decimal place.
+ - In **Dedupe and compression factor**, specify the anticipated deduplication and compression factor for your workloads. Actual value can be obtained from on-premises vSAN or storage config and this may vary by workload. A value of 3 would mean 3x so for 300GB disk only 100GB storage would be used. A value of 1 would mean no dedupe or compression. You can only add values from 1 to 10 up to one decimal place. (A back-of-the-envelope storage sketch follows this list.)
1. In **Node Size**:
   - In **Sizing criterion**, select if you want to base the assessment on static metadata, or on performance-based data. If you use performance data:
     - In **Performance history**, indicate the data duration on which you want to base the assessment
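To make the interplay of the FTT/RAID and dedupe settings concrete, here's a hypothetical back-of-the-envelope estimate; the real assessment also folds in cluster management overheads and buffers, so treat this only as an illustration:

```python
# Hypothetical vSAN storage estimate; not the exact Azure Migrate formula.
def estimated_vsan_gb(disk_gb: float, dedupe_factor: float, ftt_raid_multiplier: float) -> float:
    # dedupe/compression shrinks the footprint; FTT/RAID replication grows it
    return disk_gb / dedupe_factor * ftt_raid_multiplier

# 300 GB provisioned disk, 3x dedupe/compression, FTT=1 with RAID-1 (2x writes)
print(estimated_vsan_gb(300, 3, 2))  # -> 200.0 GB of raw vSAN capacity
```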
There are two types of sizing criteria that you can use to create Azure VMware S
An Azure VMware Solution (AVS) assessment describes:

-- **Azure VMware Solution (AVS) readiness**: Whether the on-premises servers are suitable for migration to Azure VMware Solution (AVS).
+- **Azure VMware Solution (AVS) readiness**: Whether the on-premises VMs are suitable for migration to Azure VMware Solution (AVS).
- **Number of Azure VMware Solution nodes**: Estimated number of Azure VMware Solution nodes required to run the servers.
- **Utilization across AVS nodes**: Projected CPU, memory, and storage utilization across all nodes.
  - Utilization includes up front factoring in the following cluster management overheads such as the vCenter Server, NSX Manager (large),
You can click on **Sizing assumptions** to understand the assumptions that went
2. Review the server status:
   - **Ready for AVS**: The server can be migrated as-is to Azure (AVS) without any changes. It will start in AVS with full AVS support.
   - **Ready with conditions**: There might be some compatibility issues, for example an internet protocol or a deprecated OS in VMware, that need to be remediated before migrating to Azure VMware Solution. To fix any readiness problems, follow the remediation guidance the assessment suggests.
- - **Not ready for AVS**: The VM will not start in AVS. For example, if the on-premises VMware VM has an external device attached such as a cd-rom the VMware VMotion operation will fail (if using VMware VMotion).
+ - **Not ready for AVS**: The VM will not start in AVS. For example, if the on-premises VMware VM has an external device attached such as a cd-rom the VMware vMotion operation will fail (if using VMware vMotion).
   - **Readiness unknown**: Azure Migrate couldn't determine the readiness of the server because of insufficient metadata collected from the on-premises environment.
3. Review the Suggested tool:
- - **VMware HCX or Enterprise**: For VMware servers, VMware Hybrid Cloud Extension (HCX) solution is the suggested migration tool to migrate your on-premises workload to your Azure VMware Solution (AVS) private cloud. [Learn More](../azure-vmware/configure-vmware-hcx.md).
- - **Unknown**: For servers imported via a CSV file, the default migration tool is unknown. Though for VMware servers, it is suggested to use the VMware Hybrid Cloud Extension (HCX) solution.
+ - **VMware HCX Advanced or Enterprise**: For VMware vSphere VMs, VMware Hybrid Cloud Extension (HCX) solution is the suggested migration tool to migrate your on-premises workload to your Azure VMware Solution (AVS) private cloud. [Learn More](../azure-vmware/configure-vmware-hcx.md).
+ - **Unknown**: For servers imported via a CSV file, the default migration tool is unknown. Though for VMware vSphere VMs, it is suggested to use the VMware Hybrid Cloud Extension (HCX) solution.
4. Click on an **AVS readiness** status. You can view VM readiness details, and drill down to see VM details, including compute, storage, and network settings.
Confidence ratings for an assessment are as follows.
## Next steps

- Learn how to use [dependency mapping](how-to-create-group-machine-dependencies.md) to create high confidence groups.
-- [Learn more](concepts-azure-vmware-solution-assessment-calculation.md) about how AVS assessments are calculated.
+- [Learn more](concepts-azure-vmware-solution-assessment-calculation.md) about how Azure VMware Solution assessments are calculated.
mysql Concepts Service Tiers Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-service-tiers-storage.md
You can create an Azure Database for MySQL Flexible Server in one of three diffe
| Resource / Tier | **Burstable** | **General Purpose** | **Business Critical** |
|:|:-|:--|:|
-| VM series| [B-series](https://docs.microsoft.com/azure/virtual-machines/sizes-b-series-burstable) | [Ddsv4-series](https://docs.microsoft.com/azure/virtual-machines/ddv4-ddsv4-series#ddsv4-series) | [Edsv4](https://docs.microsoft.com/azure/virtual-machines/edv4-edsv4-series#edsv4-series)/[Edsv5-series](https://docs.microsoft.com/azure/virtual-machines/edv5-edsv5-series#edsv5-series)*|
+| VM series| [B-series](/azure/virtual-machines/sizes-b-series-burstable) | [Ddsv4-series](/azure/virtual-machines/ddv4-ddsv4-series#ddsv4-series) | [Edsv4](/azure/virtual-machines/edv4-edsv4-series#edsv4-series)/[Edsv5-series](/azure/virtual-machines/edv5-edsv5-series#edsv5-series)*|
| vCores | 1, 2, 4, 8, 12, 16, 20 | 2, 4, 8, 16, 32, 48, 64 | 2, 4, 8, 16, 32, 48, 64, 80, 96 |
| Memory per vCore | Variable | 4 GiB | 8 GiB * |
| Storage size | 20 GiB to 16 TiB | 20 GiB to 16 TiB | 20 GiB to 16 TiB |
network-watcher Network Watcher Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-monitoring-overview.md
For information about analyzing traffic from a network security group, see [Netw
Endpoints can be another virtual machine (VM), a fully qualified domain name (FQDN), a uniform resource identifier (URI), or IPv4 address. The *connection monitor* capability monitors communication at a regular interval and informs you of reachability, latency, and network topology changes between the VM and the endpoint. For example, you might have a web server VM that communicates with a database server VM. Someone in your organization may, unknown to you, apply a custom route or network security rule to the web server or database server VM or subnet.
-If an endpoint becomes unreachable, connection troubleshoot informs you of the reason. Potential reasons are a DNS name resolution problem, the CPU, memory, or firewall within the operating system of a VM, or the hop type of a custom route, or security rule for the VM or subnet of the outbound connection. Learn more about [security rules](../virtual-network/network-security-groups-overview.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json#security-rules) and [route hop types](../virtual-network/virtual-networks-udr-overview.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json) in Azure.
+If an endpoint becomes unreachable, connection troubleshoot informs you of the reason. Potential reasons are a DNS name resolution problem, the CPU, memory, or firewall within the operating system of a VM, or the hop type of a custom route, or security rule for the VM or subnet of the outbound connection. Learn more about [security rules](../virtual-network/network-security-groups-overview.md?toc=/azure/network-watcher/toc.json#security-rules) and [route hop types](../virtual-network/virtual-networks-udr-overview.md?toc=/azure/network-watcher/toc.json) in Azure.
Connection monitor also provides the minimum, average, and maximum latency observed over time. After learning the latency for a connection, you may find that you're able to decrease the latency by moving your Azure resources to different Azure regions. Learn more about determining [relative latencies between Azure regions and internet service providers](#determine-relative-latencies-between-azure-regions-and-internet-service-providers) and how to monitor communication between a VM and an endpoint with [connection monitor](connection-monitor.md). If you'd rather test a connection at a point in time, rather than monitor the connection over time, like you do with connection monitor, use the [connection troubleshoot](#connection-troubleshoot) capability.
-Network performance monitor is a cloud-based hybrid network monitoring solution that helps you monitor network performance between various points in your network infrastructure. It also helps you monitor network connectivity to service and application endpoints and monitor the performance of Azure ExpressRoute. Network performance monitor detects network issues like traffic blackholing, routing errors, and issues that conventional network monitoring methods aren't able to detect. The solution generates alerts and notifies you when a threshold is breached for a network link. It also ensures timely detection of network performance issues and localizes the source of the problem to a particular network segment or device. Learn more about [network performance monitor](../azure-monitor/insights/network-performance-monitor.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json).
+Network performance monitor is a cloud-based hybrid network monitoring solution that helps you monitor network performance between various points in your network infrastructure. It also helps you monitor network connectivity to service and application endpoints and monitor the performance of Azure ExpressRoute. Network performance monitor detects network issues like traffic blackholing, routing errors, and issues that conventional network monitoring methods aren't able to detect. The solution generates alerts and notifies you when a threshold is breached for a network link. It also ensures timely detection of network performance issues and localizes the source of the problem to a particular network segment or device. Learn more about [network performance monitor](../azure-monitor/insights/network-performance-monitor.md?toc=/azure/network-watcher/toc.json).
### View resources in a virtual network and their relationships
The effective security rules for a network interface are a combination of all se
## Metrics
-There are [limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json#azure-resource-manager-virtual-networking-limits) to the number of network resources that you can create within an Azure subscription and region. If you meet the limits, you're unable to create more resources within the subscription or region. The *network subscription limit* capability provides a summary of how many of each network resource you have deployed in a subscription and region, and what the limit is for the resource. The following picture shows the partial output for network resources deployed in the East US region for an example subscription:
+There are [limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=/azure/network-watcher/toc.json#azure-resource-manager-virtual-networking-limits) to the number of network resources that you can create within an Azure subscription and region. If you meet the limits, you're unable to create more resources within the subscription or region. The *network subscription limit* capability provides a summary of how many of each network resource you have deployed in a subscription and region, and what the limit is for the resource. The following picture shows the partial output for network resources deployed in the East US region for an example subscription:
![Subscription limits](./media/network-watcher-monitoring-overview/subscription-limit.png)
Learn more about NSG flow logs by completing the [Log network traffic to and fro
### View diagnostic logs for network resources
-You can enable diagnostic logging for Azure networking resources such as network security groups, public IP addresses, load balancers, virtual network gateways, and application gateways. The *Diagnostic logs* capability provides a single interface to enable and disable network resource diagnostic logs for any existing network resource that generates a diagnostic log. You can view diagnostic logs using tools such as Microsoft Power BI and Azure Monitor logs. To learn more about analyzing Azure network diagnostic logs, see [Azure network solutions in Azure Monitor logs](../azure-monitor/insights/azure-networking-analytics.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json).
+You can enable diagnostic logging for Azure networking resources such as network security groups, public IP addresses, load balancers, virtual network gateways, and application gateways. The *Diagnostic logs* capability provides a single interface to enable and disable network resource diagnostic logs for any existing network resource that generates a diagnostic log. You can view diagnostic logs using tools such as Microsoft Power BI and Azure Monitor logs. To learn more about analyzing Azure network diagnostic logs, see [Azure network solutions in Azure Monitor logs](../azure-monitor/insights/azure-networking-analytics.md?toc=/azure/network-watcher/toc.json).
## Network Watcher automatic enablement

When you create or update a virtual network in your subscription, Network Watcher will be enabled automatically in your Virtual Network's region. There is no impact to your resources or associated charge for automatically enabling Network Watcher. For more information, see [Network Watcher create](network-watcher-create.md).
networking Networking Partners Msp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/networking-partners-msp.md
Use the links in this section for more information about managed cloud networkin
|[Netfosys](https://www.netfosys.com/services/azure-networking-services/)|||[Netfosys Managed Services for Azure vWAN](https://azuremarketplace.microsoft.com/en-ca/marketplace/apps/netfosys1637934664103.azure-vwan?tab=Overview)||| |[Nokia](https://www.nokia.com/networks/services/managed-services/)|||[NBConsult Nokia Nuage SDWAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nbconsult1588859334197.nbconsult-nokia-nuage?tab=Overview); [Nuage SD-WAN 2.0 Azure Virtual WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nokiaofamericacorporation1591716055441.nuage_sd-wan_2-0_azure_virtual_wan?tab=Overview)|[Nokia 4G & 5G Private Wireless (NDAC)](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nokiaofamericacorporation1591716055441.ndac_5g-ready_private_wireless?tab=Overview)| |[NTT Ltd](https://www.nttglobal.net/)|[Azure Cloud Discovery: 2-Week Workshop](https://azuremarketplace.microsoft.com/marketplace/apps/capside.replica-azure-cloud-governance-capside?tab=Overview)|NTT Managed ExpressRoute Service;NTT Managed IP VPN Service|NTT Managed SD-WAN Service|||
-|[NTT Data](https://us.nttdata.com/en/digital/cloud-transformation)|[Managed
+|[NTT Data](https://www.nttdata.com/global/en/services/cloud)|[Managed
|[Oncore Cloud Services]( https://www.oncore.cloud/services/ue-for-expressroute/)|[Enterprise Cloud Foundations: Workshop (~10 days)](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/oncore_cloud_services-4944214.oncore_cloud_onboard_201810)|[UniversalEdge for Azure ExpressRoute](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/oncore_cloud_services-4944214.universaledge_for_expressroute?tab=Overview)|||| |[OpenSystems](https://open-systems.com/solutions/microsoft-azure-virtual-wan)|||[Managed secure SD-WAN using Microsoft Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/open_systems_ag.sdwan_0820?tab=Overview)|| |[Orange Business Services](https://www.orange-business.com/en/partners/orange-business-services-become-microsoft-azure-networking-managed-services-provider)||[ExpressRoute Network Study : 3-week implementation](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/orangebusinessservicessa1603182943272.expressroute_study_obs_connectivity)|||
notification-hubs Notification Hubs Push Notification Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-push-notification-overview.md
Get started with creating and using a notification hub by following the [Tutoria
[1]: ./media/notification-hubs-overview/notification-hub-diagram.png
[How customers are using Notification Hubs]: https://azure.microsoft.com/services/notification-hubs
-[Notification Hubs tutorials and guides]: https://azure.microsoft.com/documentation/services/notification-hubs
+[Notification Hubs tutorials and guides]: /azure/notification-hubs
[iOS]: ./notification-hubs-push-notification-fixer.md
[Android]: ./notification-hubs-android-push-notification-google-gcm-get-started.md
[Windows Universal]: ./notification-hubs-windows-store-dotnet-get-started-wns-push-notification.md
openshift Howto Enable Nsg Flowlogs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-enable-nsg-flowlogs.md
+
+ Title: Enabling Network Security Group flow logs for Azure Red Hat OpenShift
+description: In this article, learn how to enable flow logs to analyze traffic for Network Security Groups.
++++ Last updated : 08/30/2022
+topic: how-to
+recommendations: true
+keywords: azure, openshift, aro, red hat, azure CLI
+#Customer intent: I need to create and use an Azure service principal to restrict permissions to my Azure Red Hat OpenShift cluster.
++
+# Enable Network Security Group flow logs
+
+Flow logs allow you to analyze traffic for Network Security Groups in specific regions that have Azure Network Watcher configured.
+
+## Prerequisites
+
+You must have an existing Azure Red Hat OpenShift cluster. Follow [this guide](tutorial-create-cluster.md) to create a private Azure Red Hat OpenShift cluster.
+
+## Configure Azure Network Watcher
+
+Make sure an Azure Network Watcher exists in the applicable region, or use the one that exists by convention. For example, for the eastus region:
+```
+"subscriptions/{subscriptionID}/resourceGroups/NetworkWatcherRG/providers/Microsoft.Network/networkWatchers/NetworkWatcher_eastus"
+```
+See [Enable Azure Network Watcher](../network-watcher/enable-network-watcher-flow-log-settings.md) for more information.
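+If you need to check for or enable a Network Watcher yourself, a minimal Azure CLI sketch follows; the `NetworkWatcherRG` resource group and `eastus` region are illustrative assumptions, not requirements:
+```
+# List the existing Network Watchers in the subscription
+az network watcher list --output table
+
+# Enable Network Watcher for a region (no-op if it's already enabled)
+az network watcher configure --resource-group NetworkWatcherRG --locations eastus --enabled true
+```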
+
+## Create storage account
+
+[Create a storage account](../storage/common/storage-account-create.md) (or use an existing storage account) for storing the actual flow logs. It must be in the same region where the flow logs are going to be created, and it can't be in the same resource group as the cluster's resources.
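+For example, a minimal CLI sketch for creating a suitable storage account; the name, resource group, and region are placeholders:
+```
+az storage account create --name {storageAccountName} --resource-group {resourceGroupName} --location eastus --sku Standard_LRS
+```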
+
+## Configure service principal
+
+The service principal used by the cluster needs the [proper permissions](../network-watcher/required-rbac-permissions.md) in order to create the necessary resources for the flow logs, and to access the storage account. The easiest way to achieve that is by assigning it the Network Contributor and Storage Account Contributor roles at the subscription level. Alternatively, you can create a custom role containing the required actions from the page linked above and assign it to the service principal.
+
+To get the service principal ID, run the following command:
+```
+az aro show -g {ResourceGroupName} -n {ClusterName} --query servicePrincipalProfile.clientId -o tsv
+```
+Use the output of the above command to get the object ID:
+```
+az ad sp show --id XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX --query id --out tsv
+```
+To assign the Network Contributor role, run the following command:
+```
+az role assignment create --assignee "{servicePrincipalObjectID}" --role "4d97b98b-1d4f-4787-a291-c67834d212e7" --subscription "{subscriptionID}" --resource-group "{networkWatcherResourceGroup}"
+```
+To assign the Storage Account Contributor role, run the following command:
+```
+az role assignment create --role "17d1049b-9a84-46fb-8f53-869881c3d3ab" --assignee-object-id "{servicePrincipalObjectID}"
+```
+See [Azure built-in roles](../role-based-access-control/built-in-roles.md) for IDs of built-in roles.
+
+Create a manifest as in the following example, or update the existing object to contain `spec.nsgFlowLogs` if you're already using another preview feature:
+```
+apiVersion: "preview.aro.openshift.io/v1alpha1"
+kind: PreviewFeature
+metadata:
+ name: cluster
+spec:
+ azEnvironment: "AzurePublicCloud"
+ resourceId: "subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.RedHatOpenShift/openShiftClusters/{clusterID}"
+ nsgFlowLogs:
+ enabled: true
+ networkWatcherID: "subscriptions/{subscriptionID}/resourceGroups/{networkWatcherRG}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}"
+ flowLogName: "{flowlogName}"
+ retentionDays: {retentionDays}
+ storageAccountResourceId: "subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{storageAccountName}"
+ version: {version}
+```
+See [Tutorial: Log network traffic to and from a virtual machine using the Azure portal](../network-watcher/network-watcher-nsg-flow-logging-portal.md) for possible values for `version` and `retentionDays`.
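+The manifest then needs to be applied to the cluster. Assuming you saved it as `nsg-flowlogs.yaml` and are logged in with sufficient rights, applying it with the OpenShift CLI might look like this:
+```
+# Apply the PreviewFeature custom resource
+oc apply -f nsg-flowlogs.yaml
+
+# Confirm the object exists
+oc get previewfeature cluster -o yaml
+```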
+
+The cluster will create flow logs for each Network Security Group in the cluster resource group.
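+To verify the result, you can list the flow logs in the Network Watcher's region, assuming your Azure CLI version includes the `az network watcher flow-log` commands:
+```
+az network watcher flow-log list --location eastus --output table
+```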
openshift Howto Secure Openshift With Front Door https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-secure-openshift-with-front-door.md
Previously updated : 12/07/2021 Last updated : 12/07/2021 keywords: azure, openshift, red hat, front, door #Customer intent: I need to understand how to secure access to Azure Red Hat OpenShift applications with Azure Front Door.
This article explains how to use Azure Front Door Premium to secure access to Az
The following prerequisites are required: -- You have an existing Azure Red Hat OpenShift cluster. Follow this guide to to [create a private Azure Red Hat OpenShift cluster](howto-create-private-cluster-4x.md).
+- You have an existing Azure Red Hat OpenShift cluster. Follow this guide to [create a private Azure Red Hat OpenShift cluster](howto-create-private-cluster-4x.md).
- The cluster is configured with private ingress visibility.
Because Azure Front Door is a global service, the application can take up to 30
## Next steps
-Create a Azure Web Application Firewall on Azure Front Door using the Azure portal:
+Create an Azure Web Application Firewall on Azure Front Door using the Azure portal:
> [!div class="nextstepaction"] > [Tutorial: Create a Web Application Firewall policy on Azure Front Door using the Azure portal](../web-application-firewall/afds/waf-front-door-create-portal.md)
postgresql Concepts Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-upgrade.md
Previously updated : 08/02/2022 Last updated : 08/29/2022 # Hyperscale (Citus) server group upgrades
higher.
## Upgrade precautions
-Upgrading a major version of Citus can introduce changes in behavior.
-It's best to familiarize yourself with new product features and changes
-to avoid surprises.
+Upgrades require some downtime in the database cluster. The exact time depends
+on the source and destination versions of the upgrade. To prepare for the
+production cluster upgrade, we recommend [testing the
+upgrade](howto-upgrade.md#test-the-upgrade-first) and measuring the downtime
+during the test.
+
+Also, upgrading a major version of Citus can introduce changes in behavior.
+It's best to familiarize yourself with new product features and changes to
+avoid surprises.
Noteworthy Citus 11 changes:
postgresql Howto Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-upgrade.md
Previously updated : 08/02/2022 Last updated : 08/29/2022 # Upgrade Hyperscale (Citus) server group
on all server group nodes.
Upgrading PostgreSQL causes more changes than you might imagine, because Hyperscale (Citus) will also upgrade the [database
-extensions](reference-extensions.md), including the Citus extension.
+extensions](reference-extensions.md), including the Citus extension. Upgrades
+also require downtime in the database cluster.
We strongly recommend that you test your application with the new PostgreSQL and
-Citus version before you upgrade your production environment. Also, please see
+Citus version before you upgrade your production environment. Also, see
our list of [upgrade precautions](concepts-upgrade.md). A convenient way to test is to make a copy of your server group using
purview How To Receive Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-receive-share.md
For an overview of how data sharing works, watch this short [demo](https://aka.m
> - Redundancy options: LRS, GRS, RA-GRS * You'll need the **Contributor** or **Owner** or **Storage Blob Data Owner** or **Storage Blob Data Contributor** role on the target storage account. You can find more details on the [ADLS Gen2](register-scan-adls-gen2.md#data-sharing) or [Blob storage](register-scan-azure-blob-storage-source.md#data-sharing) data source pages.
-* If the target storage account is in a different Azure subscription than the one for Microsoft Purview account, [register the Microsoft.Purview resource provider](../azure-resource-manager/management/resource-providers-and-types.md) in the Azure subscription where the Azure data store is located.
+* If the target storage account is in a different Azure subscription than the one for the Microsoft Purview account, the Microsoft.Purview resource provider needs to be registered in the Azure subscription where the storage account is located. It's registered automatically when the share consumer maps the asset, provided the user has permission to perform the `/register/action` operation, which the Contributor and Owner roles on that subscription include. (A CLI sketch for registering the provider manually follows this list.)
+This registration is only needed the first time you share or receive data into a storage account in that Azure subscription.
* A storage account needs to be registered in the collection where you'll receive the share. For instructions to register, see the [ADLS Gen2](register-scan-adls-gen2.md) or [Blob storage](register-scan-azure-blob-storage-source.md) data source pages. * Latest version of the storage SDK, PowerShell, CLI and Azure Storage Explorer. Storage REST API version must be February 2020 or later.
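If you'd rather register the resource provider yourself ahead of time, a minimal Azure CLI sketch follows; the subscription ID is a placeholder:
```
az provider register --namespace Microsoft.Purview --subscription {subscriptionID}

# Check the registration state
az provider show --namespace Microsoft.Purview --subscription {subscriptionID} --query registrationState
```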
purview How To Share Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-share-data.md
For an overview of how data sharing works, watch this short [demo](https://aka.m
> - Redundancy options: LRS, GRS, RA-GRS * You need the **Owner** or **Storage Blob Data Owner** role on the source storage account to be able to share data. You can find more details on the [ADLS Gen2](register-scan-adls-gen2.md#data-sharing) or [Blob storage](register-scan-azure-blob-storage-source.md#data-sharing) data source page.
-* If the source storage account is in a different Azure subscription than the one for Microsoft Purview account, [register the Microsoft.Purview resource provider](../azure-resource-manager/management/resource-providers-and-types.md) in the Azure subscription where the Azure data store is located.
+* If the source storage account is in a different Azure subscription than the one for the Microsoft Purview account, the Microsoft.Purview resource provider needs to be registered in the Azure subscription where the storage account is located. It's registered automatically when the share provider adds an asset, provided the user has permission to perform the `/register/action` operation, which the Contributor and Owner roles on that subscription include.
+This registration is only needed the first time you share or receive data into a storage account in that Azure subscription.
## Create a share
search Cognitive Search Concept Image Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-concept-image-scenarios.md
Previously updated : 06/24/2022 Last updated : 08/29/2022
Metadata adjustments are captured in a complex type created for each image. You
+ `"generateNormalizedImages"` to generate an array of normalized images as part of document cracking.
- + `"generateNormalizedImagePerPage"` (applies to PDF only) to generate an array of normalized images where each page in the PDF is rendered to one output image. For non-PDF files, the behavior of this parameter is same as if you had set "generateNormalizedImages".
+ + `"generateNormalizedImagePerPage"` (applies to PDF only) to generate an array of normalized images where each page in the PDF is rendered to one output image. For non-PDF files, the behavior of this parameter is similar as if you had set "generateNormalizedImages". However, note that setting "generateNormalizedImagePerPage" can make indexing operation less performant by design (especially for big documents) since several images would have to be generated.
1. Optionally, adjust the width or height of the generated normalized images:
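   For illustration, here's a hedged sketch of the relevant fragment of an indexer definition with these settings; the `generateNormalizedImagePerPage` value and the 2000-pixel limits are example choices, not the article's own sample:

   ```json
   {
     "parameters": {
       "configuration": {
         "dataToExtract": "contentAndMetadata",
         "imageAction": "generateNormalizedImagePerPage",
         "normalizedImageMaxWidth": 2000,
         "normalizedImageMaxHeight": 2000
       }
     }
   }
   ```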
search Search Get Started Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-dotnet.md
# Quickstart: Create a search index using the Azure.Search.Documents client library
-Use the [Azure.Search.Documents (version 11) client library](/dotnet/api/overview/azure/search.documents-readme) to create a .NET Core console application in C# that creates, loads, and queries a search index.
+Learn how to use the [Azure.Search.Documents (version 11) client library](/dotnet/api/overview/azure/search.documents-readme) to create a .NET Core console application in C# that creates, loads, and queries a search index.
You can [download the source code](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/quickstart/v11) to start with a finished project or follow the steps in this article to create your own.
Before you begin, have the following tools and
When setting up your project, you'll download the [Azure.Search.Documents NuGet package](https://www.nuget.org/packages/Azure.Search.Documents/).
-Azure SDK for .NET conforms to [.NET Standard 2.0](/dotnet/standard/net-standard#net-implementation-support), which means .NET Framework 4.6.1 and .NET Core 2.0 as minimum requirements.
-
+Azure SDK for .NET conforms to [.NET Standard 2.0](/dotnet/standard/net-standard#net-implementation-support), which means .NET Framework 4.6.1 and .NET Core 2.1 are the minimum requirements.
## Set up your project
-Assemble service connection information, and then start Visual Studio to create a new Console App project that can run on .NET Core.
+Assemble service connection information, and then start Visual Studio to create a new Console App project that can run on .NET Core. Select .NET Core 3.1 for the runtime.
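If you prefer the command line to Visual Studio, a roughly equivalent setup sketch (assuming the .NET Core 3.1 SDK is installed; the project name is arbitrary):

```
dotnet new console --framework netcoreapp3.1 --name AzureSearchQuickstart
cd AzureSearchQuickstart
dotnet add package Azure.Search.Documents
```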
<a name="get-service-info"></a>
After the project is created, add the client library. The [Azure.Search.Document
1. Create two clients: [SearchIndexClient](/dotnet/api/azure.search.documents.indexes.searchindexclient) creates the index, and [SearchClient](/dotnet/api/azure.search.documents.searchclient) loads and queries an existing index. Both need the service endpoint and an admin API key for authentication with create/delete rights.

   ```csharp
- static void Main(string[] args)
- {
- string serviceName = "<YOUR-SERVICE-NAME>";
- string indexName = "hotels-quickstart";
- string apiKey = "<YOUR-ADMIN-API-KEY>";
+ static void Main(string[] args)
+ {
+ string serviceName = "<your-search-service-name>";
+ string apiKey = "<your-search-service-admin-api-key>";
+ string indexName = "hotels-quickstart";
        // Create a SearchIndexClient to send create/delete index commands
        Uri serviceEndpoint = new Uri($"https://{serviceName}.search.windows.net/");
After the project is created, add the client library. The [Azure.Search.Document
        // Create a SearchClient to load and query documents
        SearchClient srchclient = new SearchClient(serviceEndpoint, indexName, credential);
+ . . .
+ }
   ```

## 1 - Create an index
The [SearchResults](/dotnet/api/azure.search.documents.models.searchresults-1) c
        Console.WriteLine();
    }
+
+ private static void WriteDocuments(AutocompleteResults autoResults)
+ {
+ foreach (AutocompleteItem result in autoResults.Results)
+ {
+ Console.WriteLine(result.Text);
+ }
+
+ Console.WriteLine();
+ }
   ```

1. Create a **RunQueries** method to execute queries and return results. Results are Hotel objects. This sample shows the method signature and the first query. This query demonstrates the Select parameter that lets you compose the result using selected fields from the document.
The [SearchResults](/dotnet/api/azure.search.documents.models.searchresults-1) c
{ SearchOptions options; SearchResults<Hotel> response;-
+
+ // Query 1
Console.WriteLine("Query #1: Search on empty term '*' to return all documents, showing a subset of fields...\n"); options = new SearchOptions()
The [SearchResults](/dotnet/api/azure.search.documents.models.searchresults-1) c
1. In the second query, search on a term, add a filter that selects documents where Rating is greater than 4, and then sort by Rating in descending order. Filter is a boolean expression that is evaluated over [IsFilterable](/dotnet/api/azure.search.documents.indexes.models.searchfield.isfilterable) fields in an index. Filter queries either include or exclude values. As such, there's no relevance score associated with a filter query.

   ```csharp
+        // Query 2
        Console.WriteLine("Query #2: Search on 'hotels', filter on 'Rating gt 4', sort by Rating in descending order...\n");
        options = new SearchOptions()
The [SearchResults](/dotnet/api/azure.search.documents.models.searchresults-1) c
1. The third query demonstrates searchFields, used to scope a full text search operation to specific fields.

   ```csharp
+        // Query 3
        Console.WriteLine("Query #3: Limit search to specific fields (pool in Tags field)...\n");
        options = new SearchOptions()
The [SearchResults](/dotnet/api/azure.search.documents.models.searchresults-1) c
1. The fourth query demonstrates facets, which can be used to structure a faceted navigation structure.

   ```csharp
+        // Query 4
        Console.WriteLine("Query #4: Facet on 'Category'...\n");
        options = new SearchOptions()
The [SearchResults](/dotnet/api/azure.search.documents.models.searchresults-1) c
1. In the fifth query, return a specific document. A document lookup is a typical response to the OnClick event in a result set.

   ```csharp
+        // Query 5
        Console.WriteLine("Query #5: Look up a specific document...\n");
        Response<Hotel> lookupResponse;
The [SearchResults](/dotnet/api/azure.search.documents.models.searchresults-1) c
1. The last query shows the syntax for autocomplete, simulating a partial user input of "sa" that resolves to two possible matches in the sourceFields associated with the suggester you defined in the index.

   ```csharp
+        // Query 6
        Console.WriteLine("Query #6: Call Autocomplete on HotelName that starts with 'sa'...\n");
        var autoresponse = srchclient.Autocomplete("sa", "sg");
        WriteDocuments(autoresponse);
   ```

1. Add **RunQueries** to Main().

   ```csharp
search Search Get Started Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-java.md
Title: 'Quickstart: Create a search index in Javas'
+ Title: 'Quickstart: Create a search index in Java'
description: In this Java quickstart, learn how to create an index, load data, and run queries using the Azure Cognitive Search client library for Java.
search Search Get Started Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-python.md
> * [Portal](search-get-started-portal.md) >
-Build a notebook that creates, loads, and queries an Azure Cognitive Search index using Python and the [azure-search-documents library](/python/api/overview/azure/search-documents-readme) in the Azure SDK for Python. This article explains how to build a notebook step by step. Alternatively, you can [download and run a finished Jupyter Python notebook](https://github.com/Azure-Samples/azure-search-python-samples).
+Build a Jupyter Notebook that creates, loads, and queries an Azure Cognitive Search index using Python and the [azure-search-documents library](/python/api/overview/azure/search-documents-readme) in the Azure SDK for Python. This article explains how to build a notebook step by step. Alternatively, you can [download and run a finished Jupyter Python notebook](https://github.com/Azure-Samples/azure-search-python-samples).
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
In this task, start Jupyter Notebook and verify that you can connect to Azure Co
1. In the first cell, load the libraries from the Azure SDK for Python, including [azure-search-documents](/python/api/azure-search-documents). ```python
- !pip install azure-search-documents --pre
- !pip show azure-search-documents
+ %pip install azure-search-documents --pre
+ %pip show azure-search-documents
import os
from azure.core.credentials import AzureKeyCredential
This step shows you how to query an index using the **search** method of the [se
print(" {}".format(facet)) ```
-1. In this example, look up a specific document based on its key. You would typically want to return a document when a user select on a document in a search result.
+1. In this example, look up a specific document based on its key. You would typically want to return a document when a user selects a document in a search result.
   ```python
   result = search_client.get_document(key="3")
This step shows you how to query an index using the **search** method of the [se
1. In this example, we'll use the autocomplete function. Autocomplete is typically used in a search box to provide potential matches as the user types into the search box.
- When the index was created, a suggester named "sg" was also created as part of the request. A suggester definition specifies which fields can be used to find potential matches to suggester requests. In this example, those fields are 'Tags', 'Address/City', 'Address/Country'. To simulate auto-complete, pass in the letters "sa" as a partial string. The autocomplete method of [SearchClient](/python/api/azure-search-documents/azure.search.documents.searchclient) sends back potential term matches.
+ When the index was created, a suggester named `sg` was also created as part of the request. A suggester definition specifies which fields can be used to find potential matches to suggester requests. In this example, those fields are 'Tags', 'Address/City', 'Address/Country'. To simulate auto-complete, pass in the letters "sa" as a partial string. The autocomplete method of [SearchClient](/python/api/azure-search-documents/azure.search.documents.searchclient) sends back potential term matches.
   ```python
   search_suggestion = 'sa'
search Search Normalizers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-normalizers.md
Title: Text normalization for filters, facets, sort
description: Specify normalizers to text fields in an index to customize the strict keyword matching behavior in filtering, faceting and sorting. -+ -+ Last updated 07/14/2022
search Tutorial Multiple Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-multiple-data-sources.md
This sample uses two small sets of data that describe seven fictional hotels. On
1. Select **Data Explorer** and then select **New Database**.
- :::image type="content" source="media/tutorial-multiple-data-sources/cosmos-newdb.png" alt-text="Create a new database" border="false":::
+ :::image type="content" source="media/tutorial-multiple-data-sources/cosmos-newdb.png" alt-text="Create a new database" border="true":::
1. Enter the name **hotel-rooms-db**. Accept default values for the remaining settings.
- :::image type="content" source="media/tutorial-multiple-data-sources/cosmos-dbname.png" alt-text="Configure database" border="false":::
+ :::image type="content" source="media/tutorial-multiple-data-sources/cosmos-dbname.png" alt-text="Configure database" border="true":::
1. Create a new container. Use the existing database you just created. Enter **hotels** for the container name, and use **/HotelId** for the Partition key.
- :::image type="content" source="media/tutorial-multiple-data-sources/cosmos-add-container.png" alt-text="Add container" border="false":::
+ :::image type="content" source="media/tutorial-multiple-data-sources/cosmos-add-container.png" alt-text="Add container" border="true":::
1. Select **Items** under **hotels**, and then select **Upload Item** on the command bar. Navigate to and then select the file **cosmosdb/HotelsDataSubset_CosmosDb.json** in the project folder.
- :::image type="content" source="media/tutorial-multiple-data-sources/cosmos-upload.png" alt-text="Upload to Azure Cosmos DB collection" border="false":::
+ :::image type="content" source="media/tutorial-multiple-data-sources/cosmos-upload.png" alt-text="Upload to Azure Cosmos DB collection" border="true":::
1. Use the Refresh button to refresh your view of the items in the hotels collection. You should see seven new database documents listed.
This sample uses two small sets of data that describe seven fictional hotels. On
1. [Create a blob container](../storage/blobs/storage-quickstart-blobs-portal.md) named **hotel-rooms** to store the sample hotel room JSON files. You can set the Public Access Level to any of its valid values.
- :::image type="content" source="media/tutorial-multiple-data-sources/blob-add-container.png" alt-text="Create a blob container" border="false":::
+ :::image type="content" source="media/tutorial-multiple-data-sources/blob-add-container.png" alt-text="Create a blob container" border="true":::
1. After the container is created, open it and select **Upload** on the command bar. Navigate to the folder containing the sample files. Select all of them and then select **Upload**.
security Secure Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/secure-design.md
GitHub, Docker Hub, and other sources.
Azure offers other services that you can use to host websites and web applications. For most scenarios, Web Apps is the best choice. For a micro service architecture, consider [Azure Service
-Fabric](https://azure.microsoft.com/documentation/services/service-fabric).
+Fabric](/azure/service-fabric).
If you need more control over the VMs that your code runs on, consider [Azure Virtual
-Machines](https://azure.microsoft.com/documentation/services/virtual-machines/).
+Machines](/azure/virtual-machines/).
For more information about how to choose between these Azure services, see a [comparison of Azure App Service, Virtual Machines, Service Fabric, and Cloud
security Threat Modeling Tool Releases 71509112 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-71509112.md
Yes, you can! The [Azure stencil set is available on GitHub](https://github.com/
## Documentation and feedback -- Documentation for the Threat Modeling Tool is located on [docs.microsoft.com](threat-modeling-tool.md), and includes information [about using the tool](threat-modeling-tool-getting-started.md).
+- Documentation for the Threat Modeling Tool is [available](threat-modeling-tool.md), and includes information [about using the tool](threat-modeling-tool-getting-started.md).
## Next steps
security Threat Modeling Tool Releases 71510231 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-71510231.md
As originally noted in the [GA release notes](threat-modeling-tool-releases-7150
## Documentation and feedback -- Documentation for the Threat Modeling Tool is located on [docs.microsoft.com](threat-modeling-tool.md), and includes information [about using the tool](threat-modeling-tool-getting-started.md).
+- Documentation for the Threat Modeling Tool is [available](threat-modeling-tool.md), and includes information [about using the tool](threat-modeling-tool-getting-started.md).
## Next steps
security Threat Modeling Tool Releases 71601261 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-71601261.md
Users of Windows 10 Enterprise LTSB (version 1507) that have installed the lates
## Documentation and feedback -- Documentation for the Threat Modeling Tool is located on [docs.microsoft.com](threat-modeling-tool.md), and includes information [about using the tool](threat-modeling-tool-getting-started.md).
+- Documentation for the Threat Modeling Tool is [available](threat-modeling-tool.md), and includes information [about using the tool](threat-modeling-tool-getting-started.md).
## Next steps
security Threat Modeling Tool Releases 71604081 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-71604081.md
All support links within the tool have been updated to direct users to [tmtextsu
## Documentation and feedback -- Documentation for the Threat Modeling Tool is located on [docs.microsoft.com](threat-modeling-tool.md), and includes information [about using the tool](threat-modeling-tool-getting-started.md).
+- Documentation for the Threat Modeling Tool is [available](threat-modeling-tool.md), and includes information [about using the tool](threat-modeling-tool-getting-started.md).
## Next steps
security Threat Modeling Tool Releases 71607021 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-71607021.md
A stencil set for modeling medical devices has been contributed by the open-sour
## Documentation and feedback -- Documentation for the Threat Modeling Tool is located on [docs.microsoft.com](threat-modeling-tool.md), and includes information [about using the tool](threat-modeling-tool-getting-started.md).
+- Documentation for the Threat Modeling Tool is [available](threat-modeling-tool.md), and includes information [about using the tool](threat-modeling-tool-getting-started.md).
## Next steps
security Threat Modeling Tool Releases 71610151 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-71610151.md
This issue is under investigation
## Documentation and feedback -- Documentation for the Threat Modeling Tool is located on [docs.microsoft.com](./threat-modeling-tool.md), and includes information [about using the tool](./threat-modeling-tool-getting-started.md).
+- Documentation for the Threat Modeling Tool is [available](./threat-modeling-tool.md), and includes information [about using the tool](./threat-modeling-tool-getting-started.md).
## Next steps
-Download the latest version of the [Microsoft Threat Modeling Tool](https://aka.ms/threatmodelingtool).
+Download the latest version of the [Microsoft Threat Modeling Tool](https://aka.ms/threatmodelingtool).
security Threat Modeling Tool Releases 73002061 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-73002061.md
This issue has been resolved in this release.
## Documentation and feedback -- Documentation for the Threat Modeling Tool is located on [docs.microsoft.com](./threat-modeling-tool.md), and includes information [about using the tool](./threat-modeling-tool-getting-started.md).
+- Documentation for the Threat Modeling Tool is [available](./threat-modeling-tool.md), and includes information [about using the tool](./threat-modeling-tool-getting-started.md).
## Next steps
-Download the latest version of the [Microsoft Threat Modeling Tool](https://aka.ms/threatmodelingtool).
+Download the latest version of the [Microsoft Threat Modeling Tool](https://aka.ms/threatmodelingtool).
security Threat Modeling Tool Releases 73003161 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-73003161.md
A new DiagramReader feature has been added in the main menu while a model is ope
## Documentation and feedback -- Documentation for the Threat Modeling Tool is located on [docs.microsoft.com](./threat-modeling-tool.md), and includes information [about using the tool](./threat-modeling-tool-getting-started.md).
+- Documentation for the Threat Modeling Tool is [available](./threat-modeling-tool.md), and includes information [about using the tool](./threat-modeling-tool-getting-started.md).
## Next steps
-Download the latest version of the [Microsoft Threat Modeling Tool](https://aka.ms/threatmodelingtool).
+Download the latest version of the [Microsoft Threat Modeling Tool](https://aka.ms/threatmodelingtool).
security Threat Modeling Tool Releases 73007142 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-73007142.md
Version 7.3.00714.2 of the Microsoft Threat Modeling Tool (TMT) was released on
## Documentation and feedback -- Documentation for the Threat Modeling Tool is located on [docs.microsoft.com](./threat-modeling-tool.md), and includes information [about using the tool](./threat-modeling-tool-getting-started.md).
+- Documentation for the Threat Modeling Tool is [available](./threat-modeling-tool.md), and includes information [about using the tool](./threat-modeling-tool-getting-started.md).
## Next steps
-Download the latest version of the [Microsoft Threat Modeling Tool](https://aka.ms/threatmodelingtool).
+Download the latest version of the [Microsoft Threat Modeling Tool](https://aka.ms/threatmodelingtool).
security Threat Modeling Tool Releases 73007291 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-73007291.md
This error will continue to appear if the Threat Modeling Tool is launched by do
## Documentation and feedback -- Documentation for the Threat Modeling Tool is located on [docs.microsoft.com](./threat-modeling-tool.md), and includes information [about using the tool](./threat-modeling-tool-getting-started.md).
+- Documentation for the Threat Modeling Tool is [available](./threat-modeling-tool.md), and includes information [about using the tool](./threat-modeling-tool-getting-started.md).
## Next steps
-Download the latest version of the [Microsoft Threat Modeling Tool](https://aka.ms/threatmodelingtool).
+Download the latest version of the [Microsoft Threat Modeling Tool](https://aka.ms/threatmodelingtool).
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/feature-availability.md
The following tables display the current Microsoft Sentinel feature availability
| - [Microsoft Defender for Cloud](../../sentinel/connect-azure-security-center.md) | GA | GA | | - [Microsoft Defender for IoT](../../sentinel/data-connectors-reference.md#microsoft-defender-for-iot) | GA | GA | | - [Microsoft Insider Risk Management](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Not Available |
-| - [Azure Firewall ](../../sentinel/data-connectors-reference.md#azure-firewall) | GA | GA |
+| - [Azure Firewall](../../sentinel/data-connectors-reference.md#azure-firewall) | GA | GA |
| - [Azure Information Protection](../../sentinel/data-connectors-reference.md#azure-information-protection-preview) | Public Preview | Not Available |
-| - [Azure Key Vault ](../../sentinel/data-connectors-reference.md#azure-key-vault) | Public Preview | Not Available |
+| - [Azure Key Vault](../../sentinel/data-connectors-reference.md#azure-key-vault) | Public Preview | Not Available |
| - [Azure Kubernetes Services (AKS)](../../sentinel/data-connectors-reference.md#azure-kubernetes-service-aks) | Public Preview | Not Available | | - [Azure SQL Databases](../../sentinel/data-connectors-reference.md#azure-sql-databases) | GA | GA | | - [Azure WAF](../../sentinel/data-connectors-reference.md#azure-web-application-firewall-waf) | GA | GA |
The following tables display the current Microsoft Sentinel feature availability
| - [ESET Enterprise Inspector](../../sentinel/connect-data-sources.md) | Public Preview | Not Available | | - [Eset Security Management Center](../../sentinel/connect-data-sources.md) | Public Preview | Not Available | | - [ExtraHop Reveal(x)](../../sentinel/data-connectors-reference.md#extrahop-revealx) | GA | GA |
-| - [F5 BIG-IP ](../../sentinel/data-connectors-reference.md#f5-big-ip) | GA | GA |
+| - [F5 BIG-IP](../../sentinel/data-connectors-reference.md#f5-big-ip) | GA | GA |
| - [F5 Networks](../../sentinel/data-connectors-reference.md#f5-networks-asm) | GA | GA | | - [FireEye NX (Network Security)](../../sentinel/sentinel-solutions-catalog.md#fireeye-nx-network-security) | Public Preview | Not Available | | - [Flare Systems Firework](../../sentinel/sentinel-solutions-catalog.md) | Public Preview | Not Available | | - [Forcepoint NGFW](../../sentinel/data-connectors-reference.md#forcepoint-cloud-access-security-broker-casb-preview) | Public Preview | Public Preview | | - [Forcepoint CASB](../../sentinel/data-connectors-reference.md#forcepoint-cloud-access-security-broker-casb-preview) | Public Preview | Public Preview |
-| - [Forcepoint DLP ](../../sentinel/data-connectors-reference.md#forcepoint-data-loss-prevention-dlp-preview) | Public Preview | Not Available |
+| - [Forcepoint DLP](../../sentinel/data-connectors-reference.md#forcepoint-data-loss-prevention-dlp-preview) | Public Preview | Not Available |
| - [Forescout](../../sentinel/sentinel-solutions-catalog.md#forescout) | Public Preview | Not Available | | - [ForgeRock Common Audit for CEF](../../sentinel/connect-data-sources.md) | Public Preview | Public Preview | | - [Fortinet](../../sentinel/data-connectors-reference.md#fortinet) | GA | GA | | - [Google Cloud Platform DNS](../../sentinel/sentinel-solutions-catalog.md#google) | Public Preview | Not Available | | - [Google Cloud Platform](../../sentinel/sentinel-solutions-catalog.md#google) | Public Preview | Not Available |
-| - [Google Workspace (G Suite) ](../../sentinel/data-connectors-reference.md#google-workspace-g-suite-preview) | Public Preview | Not Available |
+| - [Google Workspace (G Suite)](../../sentinel/data-connectors-reference.md#google-workspace-g-suite-preview) | Public Preview | Not Available |
| - [Illusive Attack Management System](../../sentinel/data-connectors-reference.md#illusive-attack-management-system-ams-preview) | Public Preview | Public Preview | | - [Imperva WAF Gateway](../../sentinel/data-connectors-reference.md#imperva-waf-gateway-preview) | Public Preview | Public Preview | | - [InfoBlox Cloud](../../sentinel/sentinel-solutions-catalog.md#infoblox) | Public Preview | Not Available |
The following tables display the current Microsoft Sentinel feature availability
| - [Semperis](../../sentinel/sentinel-solutions-catalog.md#semperis) | Public Preview | Not Available | | - [Senserva Pro](../../sentinel/sentinel-solutions-catalog.md#senserva-pro) | Public Preview | Not Available | | - [Slack Audit](../../sentinel/sentinel-solutions-catalog.md#slack) | Public Preview | Not Available |
-| - [SonicWall Firewall ](../../sentinel/data-connectors-reference.md#sophos-cloud-optix-preview) | Public Preview | Public Preview |
+| - [SonicWall Firewall](../../sentinel/data-connectors-reference.md#sophos-cloud-optix-preview) | Public Preview | Public Preview |
| - [Sonrai Security](../../sentinel/sentinel-solutions-catalog.md#sonrai-security) | Public Preview | Not Available | | - [Sophos Cloud Optix](../../sentinel/data-connectors-reference.md#sophos-cloud-optix-preview) | Public Preview | Not Available | | - [Sophos XG Firewall](../../sentinel/data-connectors-reference.md#sophos-xg-firewall-preview) | Public Preview | Public Preview |
security Identity Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/identity-management-overview.md
Azure AD Multi-Factor Authentication is a method of authentication that requires
Learn more:
-* [Multi-Factor Authentication](https://azure.microsoft.com/documentation/services/multi-factor-authentication/)
+* [Multi-Factor Authentication](/azure/multi-factor-authentication/)
* [What is Azure AD Multi-Factor Authentication?](../../active-directory/authentication/concept-mfa-howitworks.md) * [How Azure AD Multi-Factor Authentication works](../../active-directory/authentication/concept-mfa-howitworks.md)
MicrosoftΓÇÖs identity solutions span on-premises and cloud-based capabilities,
Learn more: * [Hybrid identity white paper](https://download.microsoft.com/download/D/B/A/DBA9E313-B833-48EE-998A-240AA799A8AB/Hybrid_Identity_White_Paper.pdf)
-* [Azure Active Directory](https://azure.microsoft.com/documentation/services/active-directory/)
+* [Azure Active Directory](/azure/active-directory/)
* [Azure AD team blog](https://blogs.technet.microsoft.com/ad/) ## Azure AD access reviews
Azure Active Directory (Azure AD) access reviews enable organizations to efficie
Learn more: * [Azure AD access reviews](../../active-directory/governance/access-reviews-overview.md)
-* [Manage user access with Azure AD access reviews](../../active-directory/governance/access-reviews-overview.md)
+* [Manage user access with Azure AD access reviews](../../active-directory/governance/access-reviews-overview.md)
security Management Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/management-monitoring-overview.md
Multi-Factor Authentication helps safeguard access to data and applications whil
Learn more:
-* [Multi-Factor Authentication](https://azure.microsoft.com/documentation/services/multi-factor-authentication/)
+* [Multi-Factor Authentication](/azure/multi-factor-authentication/)
* [What is Azure AD Multi-Factor Authentication?](../../active-directory/authentication/concept-mfa-howitworks.md) * [How Azure AD Multi-Factor Authentication works](../../active-directory/authentication/concept-mfa-howitworks.md)
security Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/management.md
A Remote Desktop Gateway is a policy-based RDP proxy service that enforces secur
* Provision an [Azure management certificate](/previous-versions/azure/gg551722(v=azure.100)) on the RD Gateway so that it is the only host allowed to access the Azure portal. * Join the RD Gateway to the same [management domain](/previous-versions/windows/it-pro/windows-2000-server/bb727085(v=technet.10)) as the administrator workstations. This is necessary when you are using a site-to-site IPsec VPN or ExpressRoute within a domain that has a one-way trust to Azure AD, or if you are federating credentials between your on-premises AD DS instance and Azure AD. * Configure a [client connection authorization policy](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc753324(v=ws.11)) to let the RD Gateway verify that the client machine name is valid (domain joined) and allowed to access the Azure portal.
-* Use IPsec for [Azure VPN](https://azure.microsoft.com/documentation/services/vpn-gateway/) to further protect management traffic from eavesdropping and token theft, or consider an isolated Internet link via [Azure ExpressRoute](https://azure.microsoft.com/documentation/services/expressroute/).
+* Use IPsec for [Azure VPN](/azure/vpn-gateway/) to further protect management traffic from eavesdropping and token theft, or consider an isolated Internet link via [Azure ExpressRoute](/azure/expressroute/).
* Enable multi-factor authentication (via [Azure AD Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md)) or smart-card authentication for administrators who log on through RD Gateway. * Configure source [IP address restrictions](https://azure.microsoft.com/blog/2013/08/27/confirming-dynamic-ip-address-restrictions-in-windows-azure-web-sites/) or [Network Security Groups](../../virtual-network/network-security-groups-overview.md) in Azure to minimize the number of permitted management endpoints.
The following resources are available to provide more general information about
* [Securing Privileged Access](/windows-server/identity/securing-privileged-access/securing-privileged-access) ΓÇô get the technical details for designing and building a secure administrative workstation for Azure management * [Microsoft Trust Center](https://microsoft.com/en-us/trustcenter/cloudservices/azure) - learn about Azure platform capabilities that protect the Azure fabric and the workloads that run on Azure
-* [Microsoft Security Response Center](https://www.microsoft.com/msrc) -- where Microsoft security vulnerabilities, including issues with Azure, can be reported or via email to [secure@microsoft.com](mailto:secure@microsoft.com)
+* [Microsoft Security Response Center](https://www.microsoft.com/msrc) -- where Microsoft security vulnerabilities, including issues with Azure, can be reported or via email to [secure@microsoft.com](mailto:secure@microsoft.com)
security Network Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/network-best-practices.md
Point-to-site VPN is more secure than direct RDP or SSH connections because the
**Option**: A [site-to-site VPN](../../vpn-gateway/vpn-gateway-howto-site-to-site-classic-portal.md) connects an entire network to another network over the internet. You can use a site-to-site VPN to connect your on-premises network to an Azure virtual network. Users on your on-premises network connect by using the RDP or SSH protocol over the site-to-site VPN connection. You don't have to allow direct RDP or SSH access over the internet. **Scenario**: Use a dedicated WAN link to provide functionality similar to the site-to-site VPN.
-**Option**: Use [ExpressRoute](https://azure.microsoft.com/documentation/services/expressroute/). It provides functionality similar to the site-to-site VPN. The main differences are:
+**Option**: Use [ExpressRoute](/azure/expressroute/). It provides functionality similar to the site-to-site VPN. The main differences are:
- The dedicated WAN link doesn't traverse the internet. - Dedicated WAN links are typically more stable and perform better.
security Operational Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/operational-overview.md
For more information, see the [Azure Backup components table](../../backup/backu
### Site Recovery
-[Azure Site Recovery](https://azure.microsoft.com/documentation/services/site-recovery) provides business continuity by orchestrating the replication of on-premises virtual and physical machines to Azure, or to a secondary site. If your primary site is unavailable, you fail over to the secondary location so that users can keep working. You fail back when systems return to working order. Use Microsoft Defender for Cloud to perform more intelligent and effective threat detection.
+[Azure Site Recovery](/azure/site-recovery) provides business continuity by orchestrating the replication of on-premises virtual and physical machines to Azure, or to a secondary site. If your primary site is unavailable, you fail over to the secondary location so that users can keep working. You fail back when systems return to working order. Use Microsoft Defender for Cloud to perform more intelligent and effective threat detection.
## Azure Active Directory
security Operational Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/operational-security.md
The core functionality of Azure Monitor logs is provided by a set of services th
### Azure Monitor logs
-[Azure Monitor logs](https://azure.microsoft.com/documentation/services/log-analytics) provides monitoring services by collecting data from managed resources into a central repository. This data could include events, performance data, or custom data provided through the API. Once collected, the data is available for alerting, analysis, and export.
+[Azure Monitor logs](/azure/log-analytics) provides monitoring services by collecting data from managed resources into a central repository. This data could include events, performance data, or custom data provided through the API. Once collected, the data is available for alerting, analysis, and export.
This method allows you to consolidate data from various sources, so you can combine data from your Azure services with your existing on-premises environment. It also clearly separates the collection of the data from the action taken on that data so that all actions are available to all kinds of data.
The Azure Monitor service manages your cloud-based data securely by using the fo
### Azure Backup
-[Azure Backup](https://azure.microsoft.com/documentation/services/backup) provides data backup and restore services and is part of the Azure Monitor suite of products and services.
+[Azure Backup](/azure/backup) provides data backup and restore services and is part of the Azure Monitor suite of products and services.
It protects your application data and retains it for years without any capital investment and with minimal operating costs. It can back up data from physical and virtual Windows servers in addition to application workloads such as SQL Server and SharePoint. It can also be used by [System Center Data Protection Manager (DPM)](https://en.wikipedia.org/wiki/System_Center_Data_Protection_Manager) to replicate protected data to Azure for redundancy and long-term storage.
security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/overview.md
The section provides additional information regarding key features in this area
With Azure IaaS, you can use antimalware software from security vendors such as Microsoft, Symantec, Trend Micro, McAfee, and Kaspersky to protect your virtual machines from malicious files, adware, and other threats. [Microsoft Antimalware](antimalware.md) for Azure Cloud Services and Virtual Machines is a protection capability that helps identify and remove viruses, spyware, and other malicious software. Microsoft Antimalware provides configurable alerts when known malicious or unwanted software attempts to install itself or run on your Azure systems. Microsoft Antimalware can also be deployed using Microsoft Defender for Cloud ### Hardware Security Module
-Encryption and authentication do not improve security unless the keys themselves are protected. You can simplify the management and security of your critical secrets and keys by storing them in [Azure Key Vault](../../key-vault/general/overview.md). Key Vault provides the option to store your keys in hardware Security modules (HSMs) certified to FIPS 140-2 Level 2 standards. Your SQL Server encryption keys for backup or [transparent data encryption](/sql/relational-databases/security/encryption/transparent-data-encryption) can all be stored in Key Vault with any keys or secrets from your applications. Permissions and access to these protected items are managed through [Azure Active Directory](https://azure.microsoft.com/documentation/services/active-directory/).
+Encryption and authentication do not improve security unless the keys themselves are protected. You can simplify the management and security of your critical secrets and keys by storing them in [Azure Key Vault](../../key-vault/general/overview.md). Key Vault provides the option to store your keys in hardware Security modules (HSMs) certified to FIPS 140-2 Level 2 standards. Your SQL Server encryption keys for backup or [transparent data encryption](/sql/relational-databases/security/encryption/transparent-data-encryption) can all be stored in Key Vault with any keys or secrets from your applications. Permissions and access to these protected items are managed through [Azure Active Directory](/azure/active-directory/).
### Virtual machine backup [Azure Backup](../../backup/backup-overview.md) is a solution that protects your application data with zero capital investment and minimal operating costs. Application errors can corrupt your data, and human errors can introduce bugs into your applications that can lead to security issues. With Azure Backup, your virtual machines running Windows and Linux are protected.
security Paas Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/paas-deployments.md
Web applications are increasingly targets of malicious attacks that exploit comm
## Monitor the performance of your applications Monitoring is the act of collecting and analyzing data to determine the performance, health, and availability of your application. An effective monitoring strategy helps you understand the detailed operation of the components of your application. It helps you increase your uptime by notifying you of critical issues so that you can resolve them before they become problems. It also helps you detect anomalies that might be security related.
-Use [Azure Application Insights](https://azure.microsoft.com/documentation/services/application-insights) to monitor availability, performance, and usage of your application, whether it's hosted in the cloud or on-premises. By using Application Insights, you can quickly identify and diagnose errors in your application without waiting for a user to report them. With the information that you collect, you can make informed choices on your application's maintenance and improvements.
+Use [Azure Application Insights](/azure/application-insights) to monitor availability, performance, and usage of your application, whether it's hosted in the cloud or on-premises. By using Application Insights, you can quickly identify and diagnose errors in your application without waiting for a user to report them. With the information that you collect, you can make informed choices on your application's maintenance and improvements.
Application Insights has extensive tools for interacting with the data that it collects. Application Insights stores its data in a common repository. It can take advantage of shared functionality such as alerts, dashboards, and deep analysis with the Kusto query language.
security Technical Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/technical-capabilities.md
With Azure Monitor, you can manage any instance in any cloud, including on-premi
### Azure Monitor logs
-[Azure Monitor logs](https://azure.microsoft.com/documentation/services/log-analytics) provides monitoring services by collecting data from managed resources into a central repository. This data could include events, performance data, or custom data provided through the API. Once collected, the data is available for alerting, analysis, and export.
+[Azure Monitor logs](/azure/log-analytics) provides monitoring services by collecting data from managed resources into a central repository. This data could include events, performance data, or custom data provided through the API. Once collected, the data is available for alerting, analysis, and export.
![Azure Monitor logs](./media/technical-capabilities/azure-security-technical-capabilities-fig9.png)
security Virtual Machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/virtual-machines-overview.md
Learn more:
Improving key security can enhance encryption and authentication protections. You can simplify the management and security of your critical secrets and keys by storing them in Azure Key Vault.
-Key Vault provides the option to store your keys in hardware security modules (HSMs) certified to FIPS 140-2 Level 2 standards. Your SQL Server encryption keys for backup or [transparent data encryption](/sql/relational-databases/security/encryption/transparent-data-encryption) can all be stored in Key Vault with any keys or secrets from your applications. Permissions and access to these protected items are managed through [Azure Active Directory](https://azure.microsoft.com/documentation/services/active-directory/).
+Key Vault provides the option to store your keys in hardware security modules (HSMs) certified to FIPS 140-2 Level 2 standards. Your SQL Server encryption keys for backup or [transparent data encryption](/sql/relational-databases/security/encryption/transparent-data-encryption) can all be stored in Key Vault with any keys or secrets from your applications. Permissions and access to these protected items are managed through [Azure Active Directory](/azure/active-directory/).
Learn more:
sentinel Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/billing.md
The following data sources are free with Microsoft Sentinel:
Although alerts are free, the raw logs for some Microsoft 365 Defender, Defender for Cloud Apps, Azure Active Directory (Azure AD), and Azure Information Protection (AIP) data types are paid.
-The following table lists the free data sources you can enable in Microsoft Sentinel. Some of the data connectors, such as Microsoft 365 Defender and Defender for Cloud Apps, include both free and paid data types.
-
-| Microsoft Sentinel Data Connector | Data type | Free or paid |
-|-|--||
-| **Azure Activity Logs** | AzureActivity | Free |
-| **Azure AD Identity Protection** | SecurityAlert (IPC) | Free |
-| **Office 365** | OfficeActivity (SharePoint) | Free|
-|| OfficeActivity (Exchange)|Free|
-|| OfficeActivity (Teams) | Free|
-| **Microsoft Defender for Cloud** | SecurityAlert (Defender for Cloud) | Free |
-| **Microsoft Defender for IoT** | SecurityAlert (Defender for IoT) | Free |
-| **Microsoft 365 Defender** | SecurityIncident | Free|
-||SecurityAlert| Free|
-||DeviceEvents | Paid|
-||DeviceFileEvents | Paid|
-||DeviceImageLoadEvents | Paid|
-||DeviceInfo | Paid|
-||DeviceLogonEvents | Paid|
-||DeviceNetworkEvents | Paid|
-||DeviceNetworkInfo | Paid|
-||DeviceProcessEvents | Paid|
-||DeviceRegistryEvents | Paid|
-||DeviceFileCertificateInfo | Paid|
-| **Microsoft Defender for Endpoint** | SecurityAlert (MDATP) | Free |
-| **Microsoft Defender for Identity** | SecurityAlert (AATP) | Free |
-| **Microsoft Defender for Cloud Apps** | SecurityAlert (Defender for Cloud Apps) | Free |
-||MCASShadowITReporting | Paid|
+The following table lists the free data sources you can enable in Microsoft Sentinel.
+
+| Microsoft Sentinel data connector | Free data type |
+|-|--|
+| **Azure Activity Logs** | AzureActivity |
+| **Azure AD Identity Protection** | SecurityAlert (IPC) |
+| **Office 365** | OfficeActivity (SharePoint) |
+|| OfficeActivity (Exchange)|
+|| OfficeActivity (Teams) |
+| **Microsoft Defender for Cloud** | SecurityAlert (Defender for Cloud) |
+| **Microsoft Defender for IoT** | SecurityAlert (Defender for IoT) |
+| **Microsoft 365 Defender** | SecurityIncident |
+||SecurityAlert|
+| **Microsoft Defender for Endpoint** | SecurityAlert (MDATP) |
+| **Microsoft Defender for Identity** | SecurityAlert (AATP) |
+| **Microsoft Defender for Cloud Apps** | SecurityAlert (Defender for Cloud Apps) |
+ For data connectors that include both free and paid data types, you can select which data types you want to enable. Learn more about how to [connect data sources](connect-data-sources.md), including free and paid data sources.
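To check how much of your current ingestion is billable versus free, you can query the workspace's `Usage` table. A sketch using the Azure CLI (the workspace GUID is a placeholder; the same query also runs as-is in the Logs blade):

```azurecli
# List ingested volume per table for the last 30 days, split by billable
# status. Quantity is reported in MB, so divide by 1024 for GB.
az monitor log-analytics query \
  --workspace "00000000-0000-0000-0000-000000000000" \
  --analytics-query "Usage
    | where TimeGenerated > ago(30d)
    | summarize IngestedGB = sum(Quantity) / 1024 by DataType, IsBillable
    | order by IngestedGB desc" \
  --output table
```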
sentinel Connect Threat Intelligence Tip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-threat-intelligence-tip.md
You now have all three pieces of information you need to configure your TIP or c
1. Enter these values in the configuration of your integrated TIP or custom solution where required.
-1. For the target product, specify **Microsoft Sentinel**.
+1. For the target product, specify **Azure Sentinel**. (Specifying "Microsoft Sentinel" will result in an error.)
1. For the action, specify **alert**.
sentinel Data Source Schema Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-source-schema-reference.md
This article lists supported Azure and third-party data source schemas, with lin
| **Azure** | Azure Active Directory | SigninEvents | [Azure AD activity reports sign-in properties](/graph/api/resources/signin#properties) |
| **Azure** | Azure Active Directory | AuditLogs | [Azure Monitor AuditLogs reference](/azure/azure-monitor/reference/tables/auditlogs) |
| **Azure** | Azure Active Directory | AzureActivity | [Azure Monitor AzureActivity reference](/azure/azure-monitor/reference/tables/azureactivity) |
-| **Azure** | Office | OfficeActivity | Office 365 Management Activity API schemas: <br>- [Common schema ](/office/office-365-management-api/office-365-management-activity-api-schema#common-schema) <br>- [Exchange Admin schema ](/office/office-365-management-api/office-365-management-activity-api-schema#exchange-admin-schema) <br>- [Exchange Mailbox schema](/office/office-365-management-api/office-365-management-activity-api-schema#exchange-mailbox-schema) <br>- [SharePoint Base schema](/office/office-365-management-api/office-365-management-activity-api-schema#sharepoint-base-schema) <br>- [SharePoint file operations](/office/office-365-management-api/office-365-management-activity-api-schema#sharepoint-file-operations) |
+| **Azure** | Office | OfficeActivity | Office 365 Management Activity API schemas: <br>- [Common schema](/office/office-365-management-api/office-365-management-activity-api-schema#common-schema) <br>- [Exchange Admin schema](/office/office-365-management-api/office-365-management-activity-api-schema#exchange-admin-schema) <br>- [Exchange Mailbox schema](/office/office-365-management-api/office-365-management-activity-api-schema#exchange-mailbox-schema) <br>- [SharePoint Base schema](/office/office-365-management-api/office-365-management-activity-api-schema#sharepoint-base-schema) <br>- [SharePoint file operations](/office/office-365-management-api/office-365-management-activity-api-schema#sharepoint-file-operations) |
| **Azure** | Azure Key Vault | AzureDiagnostics | [Azure Monitor AzureDiagnostics reference](/azure/azure-monitor/reference/tables/azurediagnostics) |
| **Host** | Linux | Syslog | [Azure Monitor Syslog reference](/azure/azure-monitor/reference/tables/syslog) |
| **Network** | IIS Logs | W3CIISLog | [Azure Monitor W3CIISLog reference](/azure/azure-monitor/reference/tables/w3ciislog) |
sentinel Forward Syslog Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/forward-syslog-monitor-agent.md
+
+ Title: Forward syslog data to Microsoft Sentinel and Azure Monitor by using the Azure Monitor agent
+description: Monitor Linux-based devices by forwarding syslog data to a Log Analytics workspace.
++++ Last updated : 08/18/2022+
+#Customer intent: As a security engineer, I want to get syslog data into Microsoft Sentinel so that I can use the data with other data to do attack detection, threat visibility, proactive hunting, and threat response. As an IT administrator, I want to get syslog data into my Log Analytics workspace to monitor my Linux-based devices.
++
+# Tutorial: Forward syslog data to a Log Analytics workspace by using the Azure Monitor agent
+
+In this tutorial, you'll configure a Linux virtual machine (VM) to forward syslog data to your workspace by using the Azure Monitor agent. These steps allow you to collect and monitor data from Linux-based devices where you can't install an agent, such as a firewall network device.
+
+Configure your Linux-based device to send data to a Linux VM. The Azure Monitor agent on the VM forwards the syslog data to the Log Analytics workspace. Then use Microsoft Sentinel or Azure Monitor to monitor the device from the data stored in the Log Analytics workspace.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create a data collection rule
+> * Verify the Azure Monitor agent is running
+> * Enable log reception on port 514
+> * Verify syslog data is forwarded to your Log Analytics workspace
+
+## Prerequisites
+
+To complete the steps in this tutorial, you must have the following resources and roles.
+
+- Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Azure account with the following roles to deploy the agent and create the data collection rules:
+
+ |Built-in Role |Scope |Reason |
+ ||||
+ |- [Virtual Machine Contributor](/azure/role-based-access-control/built-in-roles)</br>- [Azure Connected Machine Resource Administrator](/azure/role-based-access-control/built-in-roles) | - Virtual machines</br>- Scale sets</br>- Arc-enabled servers | To deploy the agent |
+ |Any role that includes the action Microsoft.Resources/deployments/* | - Subscription and/or</br>- Resource group and/or</br>- An existing data collection rule | To deploy ARM templates |
  |[Monitoring Contributor](/azure/role-based-access-control/built-in-roles) |- Subscription and/or </br>- Resource group and/or</br>- An existing data collection rule | To create or edit data collection rules |
+- Log Analytics workspace.
+- Linux server that's running an operating system that supports Azure Monitor agent.
+
+ - [Supported Linux operating systems for Azure Monitor agent](/azure/azure-monitor/agents/agents-overview#linux)
+ - [Create a Linux virtual machine in the Azure portal](/azure/virtual-machines/linux/quick-create-portal) or
+ - Onboard an on-premises Linux server to Azure Arc. See [Quickstart: Connect hybrid machines with Azure Arc-enabled servers](/azure/azure-arc/servers/learn/quick-enable-hybrid-vm)
+
+- Linux-based device that generates event log data like a firewall network device.
+
+## Create a data collection rule
+
+Create a *data collection rule* in the same region as your Microsoft Sentinel workspace.
+A data collection rule is an Azure resource that allows you to define the way data should be handled as it's ingested into Microsoft Sentinel.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Search for and open **Monitor**.
+1. Under **Settings**, select **Data Collection Rules**.
+1. Select **Create**.
+
+ :::image type="content" source="media/forward-syslog-monitor-agent/create-data-collection-rule.png" alt-text="Screenshot of the data collections rules pane with the create option selected.":::
+
+### Enter basic information
+
+1. On the **Basics** pane, enter the following information:
+
+ |Field |Value |
+ |||
+ |Rule Name | Enter a name like dcr-syslog |
+ |Subscription | Select the appropriate subscription |
+ |Resource group | Select the appropriate resource group |
    |Region | Select the same region where your Microsoft Sentinel workspace is located |
+ |Platform Type | Linux |
+1. Select **Next: Resources**.
+
+### Add resources
+1. Select **Add resources**.
+1. Use the filters to find the virtual machine that you'll use to collect logs.
+ :::image type="content" source="media/forward-syslog-monitor-agent/create-rule-scope.png" alt-text="Screenshot of the page to select the scope for the data collection rule. ":::
+1. Select the virtual machine.
+1. Select **Apply**.
+1. Select **Next: Collect and deliver**.
+
+### Add data source
+
+1. Select **Add data source**.
+1. For **Data source type**, select **Linux syslog**.
+ :::image type="content" source="media/forward-syslog-monitor-agent/create-rule-data-source.png" alt-text="Screenshot of page to select data source type and minimum log level":::
+1. For **Minimum log level**, leave the default value, **LOG_DEBUG**.
+1. Select **Next: Destination**.
+
+### Add destination
+
+1. Select **Add destination**.
+
+ :::image type="content" source="media/forward-syslog-monitor-agent/create-rule-add-destination.png" alt-text="Screenshot of the destination tab with the add destination option selected.":::
+1. Enter the following values:
+
+ |Field |Value |
+ |||
+ |Destination type | Azure Monitor Logs |
+ |Subscription | Select the appropriate subscription |
+ |Account or namespace |Select the appropriate Log Analytics workspace|
+
+1. Select **Add data source**.
+1. Select **Next: Review + create**.
+
+### Create rule
+
+1. Select **Create**.
+1. Wait 20 minutes before moving on to the next section.
+
+If your VM doesn't have the Azure Monitor agent installed, the data collection rule deployment triggers the installation of the agent on the VM.
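If you prefer scripting the rule instead of using the portal, the `monitor-control-service` CLI extension exposes the same operation. A minimal sketch, assuming hypothetical resource names and a rule definition saved in `dcr-syslog.json` that declares a Linux syslog data source (stream `Microsoft-Syslog`) and your Log Analytics workspace as the destination:

```azurecli
# One-time: install the extension that provides data collection rule commands.
az extension add --name monitor-control-service

# Create the rule from a JSON definition file.
az monitor data-collection rule create \
  --resource-group rg-sentinel \
  --name dcr-syslog \
  --location eastus \
  --rule-file dcr-syslog.json
```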
+
+## Verify the Azure Monitor agent is running
+
+In Microsoft Sentinel or Azure Monitor, verify that the Azure Monitor agent is running on your VM.
+
+1. In the Azure portal, search for and open **Microsoft Sentinel** or **Monitor**.
+1. If you're using Microsoft Sentinel, select the appropriate workspace.
+1. Under **General**, select **Logs**.
+1. Close the **Queries** page so that the **New Query** tab is displayed.
+1. Run the following query where you replace the computer value with the name of your Linux virtual machine.
+
+ ```kusto
+ Heartbeat
+ | where Computer == "vm-ubuntu"
+ | take 10
+ ```
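As an alternative check for an Azure VM, you can confirm that the agent extension is provisioned from the CLI. The resource group name below is hypothetical:

```azurecli
# Show the Azure Monitor agent extension and its provisioning state.
az vm extension list \
  --resource-group rg-sentinel \
  --vm-name vm-ubuntu \
  --query "[?name=='AzureMonitorLinuxAgent'].{Name:name, State:provisioningState}" \
  --output table
```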
+
+## Enable log reception on port 514
+
+Verify that the VM that's collecting the log data allows reception on TCP or UDP port 514, depending on the syslog source. Then configure the built-in Linux syslog daemon on the VM to listen for syslog messages from your devices. After you complete those steps, configure your Linux-based device to send logs to your VM.
+
+The following two sections cover how to add an inbound port rule for an Azure VM and configure the built-in Linux syslog daemon.
+
+### Allow inbound syslog traffic on the VM
+
+If you're forwarding syslogs to an Azure VM, use the following steps to allow reception on port 514.
+
+1. In the Azure portal, search for and select **Virtual Machines**.
+1. Select the VM.
+1. Under **Settings**, select **Networking**.
+1. Select **Add inbound port rule**.
+1. Enter the following values.
+
+ |Field |Value |
+ |||
+ |Destination port ranges | 514 |
+ |Protocol | TCP or UDP depending on syslog source |
+ |Action | Allow |
+ |Name | AllowSyslogInbound |
+
+ Use the default values for the rest of the fields.
+
+1. Select **Add**.
+
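The same rule can also be created from the CLI. A sketch with hypothetical resource group and network security group names, using UDP (switch `--protocol` to `Tcp` if your source sends over TCP):

```azurecli
# Allow inbound syslog traffic on port 514 to the forwarder VM.
az network nsg rule create \
  --resource-group rg-sentinel \
  --nsg-name vm-ubuntu-nsg \
  --name AllowSyslogInbound \
  --priority 1000 \
  --direction Inbound \
  --access Allow \
  --protocol Udp \
  --destination-port-ranges 514
```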
+### Configure Linux syslog daemon
+
+Connect to your Linux VM and run the following command to configure the Linux syslog daemon:
+
+```bash
+sudo wget -O Forwarder_AMA_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/Syslog/Forwarder_AMA_installer.py&&sudo python3 Forwarder_AMA_installer.py
+```
+
+This script can make configuration changes for both rsyslog and syslog-ng.
+
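After the script completes, you can send a test message from a source device to confirm that the listener works. A sketch using the util-linux `logger` tool, assuming `10.0.0.4` is the forwarder VM's IP address (hypothetical) and the source sends over UDP:

```bash
# Send a test syslog message to the forwarder on UDP port 514
# (use -T instead of -d for TCP).
logger -n 10.0.0.4 -P 514 -d "Azure Monitor agent syslog forwarding test"
```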
+## Verify syslog data is forwarded to your Log Analytics workspace
+
+After you configure your Linux-based device to send logs to your VM, verify that the Azure Monitor agent is forwarding syslog data to your workspace.
+
+1. In the Azure portal, search for and open **Microsoft Sentinel** or **Azure Monitor**.
+1. If you're using Microsoft Sentinel, select the appropriate workspace.
+1. Under **General**, select **Logs**.
+1. Close the **Queries** page so that the **New Query** tab is displayed.
+1. Run the following query where you replace the computer value with the name of your Linux virtual machine.
+
+ ```kusto
+ Syslog
+ | where Computer == "vm-ubuntu"
+ | summarize by HostName
+ ```
+
+## Clean up resources
+
+Evaluate whether you still need the resources you created, such as the virtual machine. Resources you leave running can cost you money. Delete the resources you don't need individually, or delete the resource group to delete all the resources you've created.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Data collection rules in Azure Monitor](/azure/azure-monitor/essentials/data-collection-rule-overview)
sentinel Skill Up Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/skill-up-resources.md
This article walks you through a level 400 training to help you skill up on Micr
The modules listed here are split into five parts following the life cycle of a Security Operation Center (SOC):

[Part 1: Overview](#part-1-overview)
-- [Module 0: Other learning and support options ](#module-0-other-learning-and-support-options)
+- [Module 0: Other learning and support options](#module-0-other-learning-and-support-options)
- [Module 1: Get started with Microsoft Sentinel](#module-1-get-started-with-microsoft-sentinel)
- [Module 2: How is Microsoft Sentinel used?](#module-2-how-is-microsoft-sentinel-used)
service-fabric Service Fabric Sfctl Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-sfctl-events.md
Last updated 07/11/2022
# sfctl events

Retrieve events from the events store (if EventStore service is already installed).
-The EventStore system service can be added through a config upgrade to any SFRP cluster running >=6.4. Please check the following url\: https\://docs.microsoft.com/azure/service-fabric/service-fabric-diagnostics-eventstore.
+The EventStore system service can be added through a config upgrade to any SFRP cluster running >=6.4. To check, see [EventStore overview](/azure/service-fabric/service-fabric-diagnostics-eventstore).
## Commands
The response is list of ServiceEvent objects.
| --output -o | Output format. Allowed values\: json, jsonc, table, tsv. Default\: json. |
| --query | JMESPath query string. See http\://jmespath.org/ for more information and examples. |
| --verbose | Increase logging verbosity. Use --debug for full debug logs. |
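For example, assuming you've already selected a cluster with `sfctl cluster select`, the following sketch pulls all cluster events from the EventStore for a one-day window (times are UTC, in ISO 8601 format):

```bash
# List cluster events from the EventStore for a 24-hour window.
sfctl events cluster-list \
    --start-time-utc "2022-08-01T00:00:00Z" \
    --end-time-utc "2022-08-02T00:00:00Z"
```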
service-health Resource Health Checks Resource Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/resource-health-checks-resource-types.md
Below is a complete list of all the checks executed through resource health by r
## Microsoft.compute/virtualmachines

|Executed Checks|
||
-|<ul><li>Is the server hosting this virtual machine up and running?</li><li>Has the host OS booting completed?</li><li>Is the virtual machine container provisioned and powered up?</li><li>Is there network connectivity between the host and the storage account?</li><li>Has the booting of the guest OS completed?</li><li>Is there ongoing planned maintenance?</li><li>Is the host hardware degraded and predicted to fail soon?</li></ul>|
+|<ul><li>Is the server hosting this virtual machine up and running?</li><li>Is the virtual machine container provisioned and powered up?</li><li>Is there network connectivity between the host and the storage account?</li><li>Is there ongoing planned maintenance?</li><li>Are there heartbeats between the guest and host agent *(if the Guest extension is installed)*?</li></ul>|
+
+## Microsoft.compute/virtualmachinescalesets
+|Executed Checks|
+||
+|<ul><li>Is the server hosting this virtual machine up and running?</li><li>Is the virtual machine container provisioned and powered up?</li><li>Is there network connectivity between the host and the storage account?</li><li>Is there ongoing planned maintenance?</li><li>Are there heartbeats between the guest and host agent *(if the Guest extension is installed)*?</li></ul>|
+
## Microsoft.ContainerService/managedClusters

|Executed Checks|
Below is a complete list of all the checks executed through resource health by r
|<ul><li>Are any Backup operations on Backup Items configured in this vault failing due to causes beyond user control?</li><li>Are any Restore operations on Backup Items configured in this vault failing due to causes beyond user control?</li></ul> |

## Next Steps
-- See [Introduction to Azure Service Health dashboard](service-health-overview.md) and [Introduction to Azure Resource Health](resource-health-overview.md) to understand more about them.
+- See [Introduction to Azure Service Health dashboard](service-health-overview.md) and [Introduction to Azure Resource Health](resource-health-overview.md) to understand more about them.
- [Frequently asked questions about Azure Resource Health](resource-health-faq.yml) - Set up alerts so you are notified of health issues. For more information, see [Configure Alerts for service health events](./alerts-activity-log-service-notifications-portal.md).
site-recovery Azure To Azure How To Enable Zone To Zone Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md
Last updated 03/23/2022 -+

# Enable Azure VM disaster recovery between availability zones

This article describes how to replicate, failover, and failback Azure virtual machines from one Availability Zone to another, within the same Azure region.
->[!NOTE]
+> [!NOTE]
+>
+> - Support for Zone to Zone disaster recovery is currently limited to the following regions: Southeast Asia, East Asia, Japan East, Korea Central, Australia East, India Central, China North 3, UK South, West Europe, North Europe, Germany West Central, Norway East, France Central, Switzerland North, Sweden Central (Managed Access), South Africa North, Canada Central, US Gov Virginia, Central US, South Central US, East US, East US 2, West US 2, Brazil South, West US 3 and UAE North.
+>
+>
+> - Site Recovery does not move or store customer data out of the region in which it is deployed when the customer is using Zone to Zone Disaster Recovery. Customers may select a Recovery Services Vault from a different region if they so choose. The Recovery Services Vault contains metadata but no actual customer data.
+>
>
->- Support for Zone to Zone disaster recovery is currently limited to the following regions: Southeast Asia, East Asia, Japan East, Korea Central, Australia East, India Central, China North 3, UK South, West Europe, North Europe, Germany West Central, Norway East, France Central, Switzerland North, Sweden Central (Managed Access), South Africa North, Canada Central, US Gov Virginia, Central US, South Central US, East US, East US 2, West US 2, Brazil South and West US 3.
->- Site Recovery does not move or store customer data out of the region in which it is deployed when the customer is using Zone to Zone Disaster Recovery. Customers may select a Recovery Services Vault from a different region if they so choose. The Recovery Services Vault contains metadata but no actual customer data.
->- Zone to Zone disaster recovery is not supported for VMs having ZRS managed disks.
+> - Zone to Zone disaster recovery is not supported for VMs having ZRS managed disks.
Site Recovery service contributes to your business continuity and disaster recovery strategy by keeping your business apps up and running, during planned and unplanned outages. It is the recommended Disaster Recovery option to keep your applications up and running if there are regional outages.
Availability Zones are unique physical locations within an Azure region. Each zo
If you want to move VMs to an availability zone in a different region, [review this article](../resource-mover/move-region-availability-zone.md).
-## Using Availability Zones for Disaster Recovery
+## Using Availability Zones for Disaster Recovery
Typically, Availability Zones are used to deploy VMs in a High Availability configuration. They may be too close to each other to serve as a Disaster Recovery solution in case of natural disaster. However, in some scenarios, Availability Zones can be leveraged for Disaster Recovery:

- Many customers who had a metro Disaster Recovery strategy while hosting applications on-premises sometimes look to mimic this strategy once they migrate applications over to Azure. These customers acknowledge the fact that metro Disaster Recovery strategy may not work in case of a large-scale physical disaster and accept this risk. For such customers, Zone to Zone Disaster Recovery can be used as a Disaster Recovery option.
- Many other customers have complicated networking infrastructure and do not wish to recreate it in a secondary region due to the associated cost and complexity. Zone to Zone Disaster Recovery reduces complexity as it leverages redundant networking concepts across Availability Zones making configuration much simpler. Such customers prefer simplicity and can also use Availability Zones for Disaster Recovery.
- In some regions that do not have a paired region within the same legal jurisdiction (for example, Southeast Asia), Zone to Zone Disaster Recovery can serve as the de-facto Disaster Recovery solution as it helps ensure legal compliance, since your applications and data do not move across national boundaries.
- Zone to Zone Disaster Recovery implies replication of data across shorter distances when compared with Azure to Azure Disaster Recovery and therefore, you may see lower latency and consequently lower RPO.

While these are strong advantages, there is a possibility that Zone to Zone Disaster Recovery may fall short of resilience requirements in the event of a region-wide natural disaster.
While these are strong advantages, there is a possibility that Zone to Zone Disa
As mentioned above, Zone to Zone Disaster Recovery reduces complexity as it leverages redundant networking concepts across Availability Zones making configuration much simpler. The behavior of networking components in the Zone to Zone Disaster Recovery scenario is outlined below:

- Virtual Network: You may use the same virtual network as the source network for actual failovers. Use a different virtual network to the source virtual network for test failovers.
- Subnet: Failover into the same subnet is supported.
- Private IP address: If you are using static IP addresses, you can use the same IPs in the target zone if you choose to configure them in such a manner.
- Accelerated Networking: Similar to Azure to Azure Disaster Recovery, you may enable Accelerated Networking if the VM SKU supports it.
- Public IP address: You can attach a previously created standard public IP address in the same region to the target VM. Basic public IP addresses do not support Availability Zone related scenarios.
- Load balancer: Standard load balancer is a regional resource and therefore the target VM can be attached to the backend pool of the same load balancer. A new load balancer is not required.
- Network Security Group: You may use the same network security groups as applied to the source VM.

## Pre-requisites
Before deploying Zone to Zone Disaster Recovery for your VMs, it is important to
|Customer-managed keys | Supported |
|Proximity placement groups | Supported |
|Backup interoperability | File level backup and restore are supported. Disk and VM level backup and restore are not supported. |
-|Hot add/remove | Disks can be added after enabling zone to zone replication. Removal of disks after enabling zone to zone replication is not supported. |
+|Hot add/remove | Disks can be added after enabling zone to zone replication. Removal of disks after enabling zone to zone replication is not supported. |
## Set up Site Recovery Zone to Zone Disaster Recovery
Log in to the Azure portal.
### Enable replication for the zonal Azure virtual machine

1. On the Azure portal menu, select Virtual machines, or search for and select Virtual machines on any page. Select the VM you want to replicate. For zone to zone disaster recovery, this VM must already be in an availability zone.
-2. In Operations, select Disaster recovery.
-
-3. As shown below, in the Basics tab, select 'Yes' for 'Disaster Recovery between Availability Zones?'
+1. In Operations, select Disaster recovery.
+1. As shown below, in the Basics tab, select 'Yes' for 'Disaster Recovery between Availability Zones?'
![Basic Settings page](./media/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery/zonal-disaster-recovery-basic-settings-blade.png)-
-4. If you accept all defaults, click 'Review + Start replication' followed by 'Start replication'.
-
-5. If you want to make changes to the replication settings, click on 'Next: Advanced settings'.
-
-6. Change the settings away from default wherever appropriate. For users of Azure to Azure Disaster Recovery, this page might seem familiar. More details on the options presented on this blade can be found [here](./azure-to-azure-tutorial-enable-replication.md)
+1. If you accept all defaults, click 'Review + Start replication' followed by 'Start replication'.
+1. If you want to make changes to the replication settings, click on 'Next: Advanced settings'.
+1. Change the settings away from default wherever appropriate. For users of Azure to Azure Disaster Recovery, this page might seem familiar. More details on the options presented on this blade can be found [here](./azure-to-azure-tutorial-enable-replication.md).
![Advanced Settings page](./media/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery/zonal-disaster-recovery-advanced-settings-blade.png)-
-7. Click on 'Next: Review + Start replication' and then 'Start replication'.
+1. Click on 'Next: Review + Start replication' and then 'Start replication'.
## FAQs
To perform a Disaster Recovery drill, please follow the steps outlined [here](./
To perform a failover and reprotect VMs in the secondary zone, follow the steps outlined [here](./azure-to-azure-tutorial-failover-failback.md).
-To failback to the primary zone, follow the steps outlined [here](./azure-to-azure-tutorial-failback.md).
+To failback to the primary zone, follow the steps outlined [here](./azure-to-azure-tutorial-failback.md).
site-recovery Vmware Azure Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-architecture.md
The following table and graphic provide a high-level view of the components used
For Site Recovery to work as expected, you need to modify outbound network connectivity to allow your environment to replicate. > [!NOTE]
-> Site Recovery doesn't support using an authentication proxy to control network connectivity.
+> Site Recovery of VMware/physical machines using the classic architecture doesn't support using an authentication proxy to control network connectivity. Authentication proxies are supported when using the [modernized architecture](vmware-azure-architecture-preview.md).
### Outbound connectivity for URLs
site-recovery Vmware Azure Set Up Replication Tutorial Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-set-up-replication-tutorial-preview.md
In this tutorial, you learn how to:
VMware to Azure replication includes the following procedures: - Sign in to the [Azure portal](https://portal.azure.com/).-- Prepare Azure account
+- Prepare an Azure account.
+- Prepare an account on the vCenter server or vSphere ESXi host, to automate VM discovery.
- [Create a Recovery Services vault](./quickstart-create-vault-template.md?tabs=CLI)
- Prepare infrastructure - [deploy an Azure Site Recovery replication appliance](deploy-vmware-azure-replication-appliance-preview.md)
- Enable replication
Use the following steps to assign the required permissions:
2. If the **App registrations** setting is set to *No*, request the tenant/global admin to assign the required permission. Alternatively, the tenant/global admin can assign the Application Developer role to an account to allow the registration of an AAD App.
+## Prepare an account for automatic discovery
+
+Site Recovery needs access to VMware servers to:
+
+- Automatically discover VMs. At least a read-only account is required.
+- Orchestrate replication, failover, and failback. You need an account that can run operations such
+ as creating and removing disks, and powering on VMs.
+
+Create the account as follows:
+
+1. To use a dedicated account, create a role at the vCenter level. Give the role a name such as
+ **Azure_Site_Recovery**.
+2. Assign the role the permissions summarized in the table below.
+3. Create a user on the vCenter server or vSphere host. Assign the role to the user.
+
+### VMware account permissions
+
+**Task** | **Role/Permissions** | **Details**
+ | |
+**VM discovery** | At least a read-only user<br/><br/> Data Center object -> Propagate to Child Object, role=Read-only | User assigned at datacenter level, and has access to all the objects in the datacenter.<br/><br/> To restrict access, assign the **No access** role with the **Propagate to child** object, to the child objects (vSphere hosts, datastores, VMs and networks).
**Full replication, failover, failback** | Create a role (Azure_Site_Recovery) with the required permissions, and then assign the role to a VMware user or group<br/><br/> Data Center object -> Propagate to Child Object, role=Azure_Site_Recovery<br/><br/> Datastore -> Allocate space, browse datastore, low-level file operations, remove file, update virtual machine files<br/><br/> Network -> Network assign<br/><br/> Resource -> Assign VM to resource pool, migrate powered off VM, migrate powered on VM<br/><br/> Tasks -> Create task, update task<br/><br/> Virtual machine -> Configuration<br/><br/> Virtual machine -> Interact -> answer question, device connection, configure CD media, configure floppy media, power off, power on, VMware tools install<br/><br/> Virtual machine -> Inventory -> Create, register, unregister<br/><br/> Virtual machine -> Provisioning -> Allow virtual machine download, allow virtual machine files upload<br/><br/> Virtual machine -> Snapshots -> Remove snapshots, Create snapshot, Revert snapshot.| User assigned at datacenter level, and has access to all the objects in the datacenter.<br/><br/> To restrict access, assign the **No access** role with the **Propagate to child** object, to the child objects (vSphere hosts, datastores, VMs and networks).
+ ## Prepare infrastructure - set up Azure Site Recovery Replication appliance You need to [set up an Azure Site Recovery replication appliance on the on-premises environment](deploy-vmware-azure-replication-appliance-preview.md) to channel mobility agent communications.
site-recovery Vmware Azure Troubleshoot Push Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-troubleshoot-push-install.md
To resolve the error:
* Ensure that the network shared folders on your virtual machine, are accessible from the process server. Check the steps [here](vmware-azure-troubleshoot-push-install.md#check-access-for-network-shared-folders-on-source-machine-errorid-9510595523).
-* From the source server machine command line, use `Telnet` to ping the configuration server or scale-out process server on HTTPS port 135 as shown in the following command. This command checks if there are any network connectivity issues or firewall port blocking issues.
+* From the configuration server or scale-out process server command line, use `Telnet` to ping the source VM on port 135 as shown in the following command. This command checks if there are any network connectivity issues or firewall port blocking issues.
- `telnet <CS/ scale-out PS IP address> <135>`
+ `telnet <Source IP address> <135>`
* Additionally, for a Linux VM: * Check if latest OpenSSH, OpenSSH Server, and OpenSSL packages are installed.
storage Data Lake Storage Supported Open Source Platforms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-supported-open-source-platforms.md
This table lists the open source platforms that support Data Lake Storage Gen2.
| Platform | Supported Version(s) | More Information |
| | | |
-| [HDInsight](https://azure.microsoft.com/services/hdinsight/) | 3.6+ | [What are the Apache Hadoop components and versions available with HDInsight?](../../hdinsight/hdinsight-component-versioning.md?bc=%2f2Fazure%2fbread%2ftoc.json&toc=%2fazure%2fhdinsight%2fstorm%2fTOC.json)
+| [HDInsight](https://azure.microsoft.com/services/hdinsight/) | 3.6+ | [What are the Apache Hadoop components and versions available with HDInsight?](../../hdinsight/hdinsight-component-versioning.md?bc=/azure/bread/toc.json&toc=/azure/hdinsight/storm/TOC.json)
| [Hadoop](https://hadoop.apache.org/) | 3.2+ | [Apache Hadoop releases archive](https://hadoop.apache.org/release.html) |
| [Cloudera](https://www.cloudera.com/) | 6.1+ | [Cloudera Enterprise 6.x release notes](https://www.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_cdh_6_release_notes.html) |
| [Azure Databricks](https://azure.microsoft.com/services/databricks/) | 5.1+ | [Databricks Runtime versions](https://docs.databricks.com/release-notes/runtime/databricks-runtime-ver.html) |
storage Monitor Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/monitor-blob-storage.md
Get started with any of these guides.
| [Azure Monitor Logs overview](../../azure-monitor/logs/data-platform-logs.md)| The basics of logs and how to collect and analyze them |
| [Transition to metrics in Azure Monitor](../common/storage-metrics-migration.md) | Move from Storage Analytics metrics to metrics in Azure Monitor. |
| [Azure Blob Storage monitoring data reference](monitor-blob-storage-reference.md) | A reference of the logs and metrics created by Azure Blob Storage |
-| [Troubleshoot performance issues](../common/troubleshoot-storage-performance.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)| Common performance issues and guidance about how to troubleshoot them. |
-| [Troubleshoot availability issues](../common/troubleshoot-storage-availability.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)| Common availability issues and guidance about how to troubleshoot them.|
-| [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)| Common issues with connecting clients and how to troubleshoot them.|
+| [Troubleshoot performance issues](../common/troubleshoot-storage-performance.md?toc=/azure/storage/blobs/toc.json)| Common performance issues and guidance about how to troubleshoot them. |
+| [Troubleshoot availability issues](../common/troubleshoot-storage-availability.md?toc=/azure/storage/blobs/toc.json)| Common availability issues and guidance about how to troubleshoot them.|
+| [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=/azure/storage/blobs/toc.json)| Common issues with connecting clients and how to troubleshoot them.|
storage Quickstart Blobs Javascript Browser https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/quickstart-blobs-javascript-browser.md
The [**example code**](https://github.com/Azure-Samples/AzureStorageSnippets/tre
Additional resources:
-[API reference](/javascript/api/@azure/storage-blob) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-blob) | [Package (npm)](https://www.npmjs.com/package/@azure/storage-blob) | [Samples](../common/storage-samples-javascript.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#blob-samples)
+[API reference](/javascript/api/@azure/storage-blob) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-blob) | [Package (npm)](https://www.npmjs.com/package/@azure/storage-blob) | [Samples](../common/storage-samples-javascript.md?toc=/azure/storage/blobs/toc.json#blob-samples)
## Prerequisites
For tutorials, samples, quickstarts, and other documentation, visit:
> [Azure for JavaScript documentation](/azure/developer/javascript/)

- To learn more, see the [Azure Blob storage client library for JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/storage/storage-blob).
-- To see Blob storage sample apps, continue to [Azure Blob storage client library v12 JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-blob/samples).
+- To see Blob storage sample apps, continue to [Azure Blob storage client library v12 JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-blob/samples).
storage Storage Blob Javascript Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-javascript-get-started.md
This article shows you how to connect to Azure Blob Storage by using the Azure B
The [sample code snippets](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide) are available in GitHub as runnable Node.js files.
-[Package (npm)](https://www.npmjs.com/package/@azure/storage-blob) | [Samples](../common/storage-samples-javascript.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#blob-samples) | [API reference](/javascript/api/preview-docs/@azure/storage-blob) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/storage/storage-blob) | [Give Feedback](https://github.com/Azure/azure-sdk-for-js/issues)
+[Package (npm)](https://www.npmjs.com/package/@azure/storage-blob) | [Samples](../common/storage-samples-javascript.md?toc=/azure/storage/blobs/toc.json#blob-samples) | [API reference](/javascript/api/preview-docs/@azure/storage-blob) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/storage/storage-blob) | [Give Feedback](https://github.com/Azure/azure-sdk-for-js/issues)
## SDK Objects for service, container, and blob
To authorize with Azure AD, you'll need to use an Azure credential. Which type o
|--|--|--|
| Local machine (developing and testing) | User identity or service principal | [Use the Azure Identity library to get an access token for authorization](../common/identity-library-acquire-token.md) |
| Azure | Managed identity | [Authorize access to blob data with managed identities for Azure resources](authorize-managed-identity.md) |
-| Servers or clients outside of Azure | Service principal | [Authorize access to blob or queue data from a native or web application](../common/storage-auth-aad-app.md?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json) |
+| Servers or clients outside of Azure | Service principal | [Authorize access to blob or queue data from a native or web application](../common/storage-auth-aad-app.md?toc=/azure/storage/blobs/toc.json) |
Create a [DefaultAzureCredential](/javascript/api/overview/azure/identity-readme#defaultazurecredential) instance. Use that object to create a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient).
The following guides show you how to use each of these clients to build your app
## See also

- [Package (npm)](https://www.npmjs.com/package/@azure/storage-blob)
-- [Samples](../common/storage-samples-javascript.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#blob-samples)
+- [Samples](../common/storage-samples-javascript.md?toc=/azure/storage/blobs/toc.json#blob-samples)
- [API reference](/javascript/api/@azure/storage-blob/)
- [Library source code](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/storage/storage-blob)
-- [Give Feedback](https://github.com/Azure/azure-sdk-for-js/issues)
+- [Give Feedback](https://github.com/Azure/azure-sdk-for-js/issues)
storage Storage Quickstart Blobs Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-dotnet.md
Additional resources:
- [API reference documentation](/dotnet/api/azure.storage.blobs)
- [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Azure.Storage.Blobs)
- [Package (NuGet)](https://www.nuget.org/packages/Azure.Storage.Blobs)
-- [Samples](../common/storage-samples-dotnet.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#blob-samples)
+- [Samples](../common/storage-samples-dotnet.md?toc=/azure/storage/blobs/toc.json#blob-samples)
## Prerequisites
To see Blob storage sample apps, continue to:
> [Azure Blob Storage SDK v12 .NET samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Azure.Storage.Blobs/samples)

- For tutorials, samples, quick starts and other documentation, visit [Azure for .NET and .NET Core developers](/dotnet/azure/).
-- To learn more about .NET Core, see [Get started with .NET in 10 minutes](https://dotnet.microsoft.com/learn/dotnet/hello-world-tutorial/intro).
+- To learn more about .NET Core, see [Get started with .NET in 10 minutes](https://dotnet.microsoft.com/learn/dotnet/hello-world-tutorial/intro).
storage Storage C Plus Plus Enumeration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-c-plus-plus-enumeration.md
For more information about Azure Storage and Client Library for C++, see the fol
- [How to use Queue Storage from C++](../queues/storage-c-plus-plus-how-to-use-queues.md)
- [Azure Storage Client Library for C++ API documentation.](https://azure.github.io/azure-storage-cpp/)
- [Azure Storage Team Blog](/archive/blogs/windowsazurestorage/)
-- [Azure Storage Documentation](https://azure.microsoft.com/documentation/services/storage/)
+- [Azure Storage Documentation](/azure/storage/)
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
When planning for disaster recovery during a regional outage, you should create
To enable access from a virtual network that is located in another region over service endpoints, register the `AllowGlobalTagsForStorage` feature in the subscription of the virtual network. All the subnets in the subscription that has the _AllowedGlobalTagsForStorage_ feature enabled will no longer use a public IP address to communicate with any storage account. Instead, all the traffic from these subnets to storage accounts will use a private IP address as a source IP. As a result, any storage accounts that use IP network rules to permit traffic from those subnets will no longer have an effect. > [!NOTE]
-> For updating the existing service endpoints to access a storage account in another region, perform an [update subnet](https://docs.microsoft.com/cli/azure/network/vnet/subnet?view=azure-cli-latest#az-network-vnet-subnet-update) operation on the subnet after registering the subscription with the `AllowGlobalTagsForStorage` feature. Similarly, to go back to the old configuration, perform an [update subnet](https://docs.microsoft.com/cli/azure/network/vnet/subnet?view=azure-cli-latest#az-network-vnet-subnet-update) operation after deregistering the subscription with the `AllowGlobalTagsForStorage` feature.
+> For updating the existing service endpoints to access a storage account in another region, perform an [update subnet](/cli/azure/network/vnet/subnet?view=azure-cli-latest#az-network-vnet-subnet-update) operation on the subnet after registering the subscription with the `AllowGlobalTagsForStorage` feature. Similarly, to go back to the old configuration, perform an [update subnet](/cli/azure/network/vnet/subnet?view=azure-cli-latest#az-network-vnet-subnet-update) operation after deregistering the subscription with the `AllowGlobalTagsForStorage` feature.
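A sketch of both operations with the Azure CLI; the network names below are hypothetical:

```azurecli
# Register the feature on the subscription that contains the virtual network.
az feature register \
  --namespace Microsoft.Network \
  --name AllowGlobalTagsForStorage

# Re-run the update subnet operation so existing service endpoints pick up
# the new behavior (run it again after deregistering to revert).
az network vnet subnet update \
  --resource-group rg-network \
  --vnet-name vnet-main \
  --name subnet-apps \
  --service-endpoints Microsoft.Storage
```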
#### [Portal](#tab/azure-portal)
storage Troubleshoot Storage Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/troubleshoot-storage-availability.md
The most common cause of this error is a client disconnecting before a timeout e
## See also

-- [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
-- [Troubleshoot performance issues](../common/troubleshoot-storage-performance.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
-- [Monitor, diagnose, and troubleshoot your Azure Storage](/learn/modules/monitor-diagnose-and-troubleshoot-azure-storage/)
+- [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=/azure/storage/blobs/toc.json)
+- [Troubleshoot performance issues](../common/troubleshoot-storage-performance.md?toc=/azure/storage/blobs/toc.json)
+- [Monitor, diagnose, and troubleshoot your Azure Storage](/learn/modules/monitor-diagnose-and-troubleshoot-azure-storage/)
storage Troubleshoot Storage Client Application Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/troubleshoot-storage-client-application-errors.md
You can find a list of common REST API error codes that the storage services ret
- [Monitoring Azure Files](../files/storage-files-monitoring.md)
- [Monitoring Azure Queue Storage](../queues/monitor-queue-storage.md)
- [Monitoring Azure Table storage](../tables/monitor-table-storage.md)
-- [Troubleshoot performance issues](../common/troubleshoot-storage-performance.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
-- [Troubleshoot availability issues](../common/troubleshoot-storage-availability.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
-- [Monitor, diagnose, and troubleshoot your Azure Storage](/learn/modules/monitor-diagnose-and-troubleshoot-azure-storage/)
+- [Troubleshoot performance issues](../common/troubleshoot-storage-performance.md?toc=/azure/storage/blobs/toc.json)
+- [Troubleshoot availability issues](../common/troubleshoot-storage-availability.md?toc=/azure/storage/blobs/toc.json)
+- [Monitor, diagnose, and troubleshoot your Azure Storage](/learn/modules/monitor-diagnose-and-troubleshoot-azure-storage/)
storage Troubleshoot Storage Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/troubleshoot-storage-performance.md
If you are experiencing a delay between the time an application adds a message t
## See also

-- [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
-- [Troubleshoot availability issues](../common/troubleshoot-storage-availability.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
-- [Monitor, diagnose, and troubleshoot your Azure Storage](/learn/modules/monitor-diagnose-and-troubleshoot-azure-storage/)
+- [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=/azure/storage/blobs/toc.json)
+- [Troubleshoot availability issues](../common/troubleshoot-storage-availability.md?toc=/azure/storage/blobs/toc.json)
+- [Monitor, diagnose, and troubleshoot your Azure Storage](/learn/modules/monitor-diagnose-and-troubleshoot-azure-storage/)
storage Storage Files Netapp Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-netapp-comparison.md
Previously updated : 05/04/2022 Last updated : 08/30/2022
Most workloads that require cloud file storage work well on either Azure Files o
| Category | Azure Files | Azure NetApp Files | ||||
-| Minimum Share/Volume Size | Premium<br><ul><li>100 GiB</li></ul><br>Standard<br><ul><li>No minimum.</li></ul> | All tiers<br><ul><li>100 GiB (Minimum capacity pool size: 4 TiB)</li></ul> |
+| Minimum Share/Volume Size | Premium<br><ul><li>100 GiB</li></ul><br>Standard<br><ul><li>No minimum (SMB only - NFS requires Premium shares).</li></ul> | All tiers<br><ul><li>100 GiB (Minimum capacity pool size: 4 TiB)</li></ul> |
| Maximum Share/Volume Size | 100 TiB | All tiers<br><ul><li>100 TiB (500 TiB capacity pool limit)</li></ul><br>Up to 12.5 PiB per Azure NetApp account. |
| Maximum Share/Volume IOPS | Premium<br><ul><li>Up to 100k</li></ul><br>Standard<br><ul><li>Up to 20k</li></ul> | Ultra and Premium<br><ul><li>Up to 450k</li></ul><br>Standard<br><ul><li>Up to 320k</li></ul> |
| Maximum Share/Volume Throughput | Premium<br><ul><li>Up to 10 GiB/s</li></ul><br>Standard<br><ul><li>Up to 300 MiB/s</li></ul> | Ultra and Premium<br><ul><li>Up to 4.5 GiB/s</li></ul><br>Standard<br><ul><li>Up to 3.2 GiB/s</li></ul> |
stream-analytics Automation Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/automation-powershell.md
Title: Auto-pause an Azure Stream Analytics job with PowerShell description: This article describes how to auto-pause an Azure Stream Analytics job on a schedule with PowerShell --- Last updated 11/03/2021
stream-analytics Cosmos Db Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/cosmos-db-managed-identity.md
Previously updated : 08/09/2022 Last updated : 08/30/2022
For the Stream Analytics job to access your Cosmos DB using managed identity, th
||
|Cosmos DB Built-in Data Contributor|
-1. Select **Access control (IAM)**.
+> [!IMPORTANT]
+> Cosmos DB data plane built-in role-based access control (RBAC) is not exposed through the Azure portal. To assign the Cosmos DB Built-in Data Contributor role, you must grant the permission via Azure PowerShell. For more information about role-based access control with Azure Active Directory for your Azure Cosmos DB account, see [Configure role-based access control with Azure Active Directory for your Azure Cosmos DB account](/azure/cosmos-db/how-to-setup-rbac).
-2. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
+The following command grants your ASA job access to Cosmos DB. The $accountName and $resourceGroupName are for your Cosmos DB account, and the $principalId is the value obtained in the previous step, in the Identity tab of your ASA job. You need "Contributor" access to your Cosmos DB account for this command to work as intended.
-3. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+```azurepowershell-interactive
+New-AzCosmosDBSqlRoleAssignment -AccountName $accountName -ResourceGroupName $resourceGroupName -RoleDefinitionId '00000000-0000-0000-0000-000000000002' -Scope "/" -PrincipalId $principalId
- | Setting | Value |
- | | |
- | Role | Cosmos DB Built-in Data Contributor |
- | Assign access to | User, group, or service principal |
- | Members | \<Name of your Stream Analytics job> |
-
- ![Screenshot that shows Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
+```
> [!NOTE] > Due to global replication or caching latency, there may be a delay when permissions are revoked or granted. Changes should be reflected within 8 minutes.
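If you'd rather use the Azure CLI than PowerShell, the same assignment looks like the following sketch (same variables as above; `00000000-0000-0000-0000-000000000002` is the ID of the built-in data contributor role):

```azurecli
# Assign the Cosmos DB Built-in Data Contributor role to the job's
# managed identity at account scope ("/").
az cosmosdb sql role assignment create \
  --account-name $accountName \
  --resource-group $resourceGroupName \
  --role-definition-id "00000000-0000-0000-0000-000000000002" \
  --principal-id $principalId \
  --scope "/"
```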
stream-analytics Event Ordering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/event-ordering.md
Previously updated : 08/06/2020 Last updated : 08/26/2022

# Configuring event ordering policies for Azure Stream Analytics
If events arrive late or out-of-order based on the policies you've configured, y
Let us see an example of these policies in action. <br> **Late arrival policy:** 15 seconds
-<br> **Out-of-order policy:** 8 seconds
+<br> **Out-of-order policy:** 5 seconds
| Event No. | Event Time | Arrival Time | System.Timestamp | Explanation |
| | | | | |
| **1** | 00:10:00 | 00:10:40 | 00:10:25 | Event arrived late and outside tolerance level. So event time gets adjusted to maximum late arrival tolerance. |
| **2** | 00:10:30 | 00:10:41 | 00:10:30 | Event arrived late but within tolerance level. So event time doesn't get adjusted. |
| **3** | 00:10:42 | 00:10:42 | 00:10:42 | Event arrived on time. No adjustment needed. |
-| **4** | 00:10:38 | 00:10:43 | 00:10:38 | Event arrived out-of-order but within the tolerance of 8 seconds. So, event time doesn't get adjusted. For analytics purposes, this event will be considered as preceding event number 4. |
-| **5** | 00:10:35 | 00:10:45 | 00:10:37 | Event arrived out-of-order and outside tolerance of 8 seconds. So, event time is adjusted to maximum of out-of-order tolerance. |
+| **4** | 00:10:38 | 00:10:43 | 00:10:38 | Event arrived out-of-order but within the tolerance of 5 seconds. So, event time doesn't get adjusted. For analytics purposes, this event is considered to precede event number 3 (considering all five events, the effective order is 1, 2, 5, 4, 3). |
+| **5** | 00:10:35 | 00:10:45 | 00:10:37 | Event arrived out-of-order and outside the tolerance of 5 seconds. So, event time is adjusted to the maximum out-of-order tolerance. |
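Both policies are job-level settings. As a sketch of how the example's values map to CLI parameters, assuming the `stream-analytics` CLI extension is installed and the job and resource group names are hypothetical:

```azurecli
# Create a job with a 15-second late arrival policy and a 5-second
# out-of-order policy; out-of-order events are adjusted, not dropped.
az stream-analytics job create \
  --job-name myasajob \
  --resource-group rg-streaming \
  --location eastus \
  --arrival-max-delay 15 \
  --order-max-delay 5 \
  --out-of-order-policy "Adjust"
```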
## Can these settings delay output of my job?
stream-analytics Input Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/input-validation.md
description: "This article describes how to improve the resiliency of Azure Stre
---- Last updated 12/10/2021
stream-analytics No Code Stream Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-stream-processing.md
Previously updated : 05/08/2022 Last updated : 08/26/2022

# No code stream processing using Azure Stream Analytics (Preview)
The following screenshot shows a finished Stream Analytics job. It highlights al
1. **Ribbon** - On the ribbon, sections follow the order of a classic/ analytics process: Event Hubs as input (also known as data source), transformations (streaming ETL operations), outputs, a button to save your progress and a button to start the job. 2. **Diagram view** - A graphical representation of your Stream Analytics job, from input to operations to outputs. 3. **Side pane** - Depending on which component you selected in the diagram view, you'll have settings to modify input, transformation, or output.
-4. **Tabs for data preview, authoring errors, and runtime errors** - For each tile shown, the data preview will show you results for that step (live for inputs and on-demand for transformations and outputs). This section also summarizes any authoring errors or warnings that you might have in your job when it's being developed. Selecting each error or warning will select that transform.
+4. **Tabs for data preview, authoring errors, runtime logs, and metrics** - For each tile shown, the data preview will show you results for that step (live for inputs and on-demand for transformations and outputs). This section also summarizes any authoring errors or warnings that you might have in your job when it's being developed. Selecting each error or warning will select that transform. It also provides the job metrics so that you can monitor the running job's health.
## Event Hubs as the streaming input

Azure Event Hubs is a big-data streaming platform and event ingestion service. It can receive and process millions of events per second. Data sent to an event hub can be transformed and stored by using any real-time analytics provider or batching/storage adapters. To configure an event hub as an input for your job, select the **Event Hub** symbol. A tile appears in the diagram view, including a side pane for its configuration and connection.
-When connecting to your Event Hub in no code editor, it is recommended that you create a new Consumer Group (which is the default option). This helps in avoiding the Event Hub reach the concurrent readersΓÇÖ limit. To understand more about Consumer groups and whether you should select an existing Consumer Group or create a new one, see [Consumer groups](../event-hubs/event-hubs-features.md). If your Event Hub is in Basic tier, you can only use the existing $Default Consumer group. If your Event Hub is in Standard or Premium tiers, you can create a new consumer group.
+When connecting to your event hub in the no code editor, it's recommended that you create a new consumer group (the default option). This helps prevent the event hub from reaching the concurrent readers' limit. To understand more about consumer groups and whether you should select an existing consumer group or create a new one, see [Consumer groups](../event-hubs/event-hubs-features.md). If your event hub is in the Basic tier, you can only use the existing $Default consumer group. If your event hub is in the Standard or Premium tiers, you can create a new consumer group.
![Consumer group selection while setting up Event Hub](./media/no-code-stream-processing/consumer-group-nocode.png)
-When connecting to the Event Hub, if you choose ΓÇÿManaged IdentityΓÇÖ as Authentication mode, then the Azure Event Hubs Data owner role will be granted to the Managed Identity for the Stream Analytics job. To learn more about Managed Identity for Event Hub, see [Event Hubs Managed Identity](event-hubs-managed-identity.md).
+When connecting to the event hub, if you choose 'Managed Identity' as the authentication mode, the Azure Event Hubs Data Owner role will be granted to the managed identity for the Stream Analytics job. To learn more about managed identities for Event Hubs, see [Event Hubs Managed Identity](event-hubs-managed-identity.md).
Managed identities eliminate the limitations of user-based authentication methods, like the need to reauthenticate because of password changes or user token expirations that occur every 90 days.

![Authentication method is selected as Managed Identity](./media/no-code-stream-processing/msi-eh-nocode.png)
-After you set up your Event Hub's details and select **Connect**, you can add fields manually by using **+ Add field** if you know the field names. To instead detect fields and data types automatically based on a sample of the incoming messages, select **Autodetect fields**. Selecting the gear symbol allows you to edit the credentials if needed. When Stream Analytics job detect the fields, you'll see them in the list. You'll also see a live preview of the incoming messages in the **Data Preview** table under the diagram view.
+After you set up your event hub's details and select **Connect**, you can add fields manually by using **+ Add field** if you know the field names. To instead detect fields and data types automatically based on a sample of the incoming messages, select **Autodetect fields**. Selecting the gear symbol allows you to edit the credentials if needed. When the Stream Analytics job detects the fields, you'll see them in the list. You'll also see a live preview of the incoming messages in the **Data Preview** table under the diagram view.
You can always edit the field names, or remove or change the data type, by selecting the three dot symbol next to each field. You can also expand, select, and edit any nested fields from the incoming messages, as shown in the following image. The available data types are:
The available data types are:
Reference data is either static or changes slowly over time. It's typically used to enrich incoming streams and do lookups in your job. For example, you might join data in the data stream input to data in the reference data, much as you would perform a SQL join to look up static values. For more information about reference data inputs, see [Using reference data for lookups in Stream Analytics](stream-analytics-use-reference-data.md).
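Conceptually, a reference data lookup behaves like a join against a slowly changing table. Here's a toy Python sketch of those semantics; the field names are invented for illustration, and this isn't how the service is implemented:

```python
# Reference data: DeviceId -> DeviceName, loaded once and refreshed slowly.
reference = {"dev-1": "Thermostat", "dev-2": "Humidity sensor"}

# Streaming input events arriving from the event hub.
stream = [
    {"DeviceId": "dev-1", "Temperature": 21.5},
    {"DeviceId": "dev-3", "Temperature": 19.0},
]

for event in stream:
    # Enrich each event with a lookup; None when there's no match,
    # like a LEFT JOIN against the reference table.
    event["DeviceName"] = reference.get(event["DeviceId"])
    print(event)
```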
+No-code editor now supports two reference data sources:
+- Azure Data Lake Storage (ADLS) Gen2
+- Azure SQL database
+
### ADLS Gen2 as reference data
-Reference data is modeled as a sequence of blobs in ascending order of the date/time specified in the blob name. Blobs can only be added to the end of the sequence by using a date/time greater than the one specified by the last blob in the sequence. Blobs are defined in the input configuration. For more information, see [Use reference data from Blob Storage for a Stream Analytics job](data-protection.md).
+Reference data is modeled as a sequence of blobs in ascending order of the date/time specified in the blob name. Blobs can only be added to the end of the sequence by using a date/time greater than the one specified by the last blob in the sequence. Blobs are defined in the input configuration. For more information, see [Use reference data from Blob Storage for a Stream Analytics job](stream-analytics-use-reference-data.md).
-First, you have to select your ADLS Gen2. To see details about each field, see Azure Blob Storage section in [Azure Blob Storage Reference data input](stream-analytics-use-reference-data.md).
+First, select **Reference ADLS Gen2** under the **Inputs** section on the ribbon. To see details about each field, see the Azure Blob Storage section in [Azure Blob Storage Reference data input](stream-analytics-use-reference-data.md#azure-blob-storage).
- ![Configure ADLS Gen2 as input in no code editor](./media/no-code-stream-processing/msi-eh-nocode.png)
+ ![Configure ADLS Gen2 as reference data input in no code editor](./media/no-code-stream-processing/blob-referencedata-nocode.png)
-Then, upload a JSON of array file and the fields in the file will be detected. Use this reference data to perform transformation with Streaming input data from Event Hub.
+Then, upload a JSON array file; the fields in the file will be detected. Use this reference data to perform transformations with the streaming input data from Event Hubs.
![Upload JSON for reference data](./media/no-code-stream-processing/blob-referencedata-upload-nocode.png)
+### SQL Database as reference data
+
+Azure Stream Analytics supports Azure SQL Database as a source of input for reference data as well. For more information, see [Azure SQL Database Reference data input](stream-analytics-use-reference-data.md#azure-sql-database). You can use SQL Database as reference data for your Stream Analytics job in the no-code editor.
+
+To configure SQL Database as a reference data input, select **Reference SQL Database** under the **Inputs** section on the ribbon.
+Then fill in the information needed to connect to your reference database and select the table with the columns you need. You can also fetch the reference data from your table by editing the SQL query manually.
## Transformations

Streaming data transformations are inherently different from batch data transformations. Almost all streaming data has a time component, which affects any data preparation tasks involved.
-To add a streaming data transformation to your job, select the transformation symbol on the ribbon for that transformation. The respective tile will be dropped in the diagram view. After you select it, you'll see the side pane for that transformation to configure it.
+To add a streaming data transformation to your job, select that transformation's symbol under the **Operations** section on the ribbon. The respective tile will be dropped in the diagram view. After you select it, you'll see the side pane for that transformation, where you can configure it.
+ ### Filter
Expand array creates a new row for each value within an array.
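As a quick illustration of those semantics (column names invented), expanding an array column turns one input row into one output row per array element, with the other columns repeated:

```python
rows = [{"DeviceId": "dev-1", "Readings": [20.1, 20.4, 20.9]}]

expanded = [
    {"DeviceId": row["DeviceId"], "Reading": value}
    for row in rows
    for value in row["Readings"]
]
print(expanded)  # three output rows, one per array element
```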
## Streaming outputs
-The no-code drag-and-drop experience currently supports three outputs to store your processed real time data.
+The no-code drag-and-drop experience currently supports several output sinks to store your processed real time data.
:::image type="content" source="./media/no-code-stream-processing/outputs.png" alt-text="Screenshot showing Streaming output options." lightbox="./media/no-code-stream-processing/outputs.png" :::
The no-code drag-and-drop experience currently supports three outputs to store y
Data Lake Storage Gen2 makes Azure Storage the foundation for building enterprise data lakes on Azure. It's designed from the start to service multiple petabytes of information while sustaining hundreds of gigabits of throughput. It allows you to easily manage massive amounts of data. Azure Blob storage offers a cost-effective and scalable solution for storing large amounts of unstructured data in the cloud.
-Select **ADLS Gen2** as output for your Stream Analytics job and select the container where you want to send the output of the job. For more information about Azure Data Lake Gen2 output for a Stream Analytics job, see [Blob storage and Azure Data Lake Gen2 output from Azure Stream Analytics](blob-storage-azure-data-lake-gen2-output.md).
+Select **ADLS Gen2** under the **Outputs** section on the ribbon as the output for your Stream Analytics job, and select the container where you want to send the output of the job. For more information about Azure Data Lake Gen2 output for a Stream Analytics job, see [Blob storage and Azure Data Lake Gen2 output from Azure Stream Analytics](blob-storage-azure-data-lake-gen2-output.md).
When connecting to ADLS Gen2, if you choose ‘Managed Identity’ as the authentication mode, the Storage Blob Data Contributor role will be granted to the managed identity for the Stream Analytics job. To learn more about managed identities for ADLS Gen2, see [Storage Blob Managed Identity](blob-output-managed-identity.md). Managed identities eliminate the limitations of user-based authentication methods, like the need to reauthenticate because of password changes or user token expirations that occur every 90 days.
Azure Stream Analytics jobs can output to a dedicated SQL pool table in Azure Sy
> [!IMPORTANT]
> The dedicated SQL pool table must exist before you can add it as output to your Stream Analytics job. The table's schema must match the fields and their types in your job's output.
-Select **Synapse** as output for your Stream Analytics job and select the SQL pool table where you want to send the output of the job. For more information about Synapse output for a Stream Analytics job, see [Azure Synapse Analytics output from Azure Stream Analytics](azure-synapse-analytics-output.md).
+Select **Synapse** under the **Outputs** section on the ribbon as the output for your Stream Analytics job, and select the SQL pool table where you want to send the output of the job. For more information about Synapse output for a Stream Analytics job, see [Azure Synapse Analytics output from Azure Stream Analytics](azure-synapse-analytics-output.md).
### Azure Cosmos DB

Azure Cosmos DB is a globally distributed database service that offers limitless elastic scale around the globe, rich query, and automatic indexing over schema-agnostic data models.
-Select **CosmosDB** as output for your Stream Analytics job. For more information about Cosmos DB output for a Stream Analytics job, see [Azure Cosmos DB output from Azure Stream Analytics](azure-cosmos-db-output.md).
+Select **CosmosDB** under the **Outputs** section on the ribbon as the output for your Stream Analytics job. For more information about Azure Cosmos DB output for a Stream Analytics job, see [Azure Cosmos DB output from Azure Stream Analytics](azure-cosmos-db-output.md).
When connecting to Azure Cosmos DB, if you choose ‘Managed Identity’ as the authentication mode, the Contributor role will be granted to the managed identity for the Stream Analytics job. To learn more about managed identities for Azure Cosmos DB, see [Cosmos DB Managed Identity](cosmos-db-managed-identity.md). Managed identities eliminate the limitations of user-based authentication methods, like the need to reauthenticate because of password changes or user token expirations that occur every 90 days.

![Managed identity for Cosmos DB](./media/no-code-stream-processing/msi-cosmosdb-nocode.png)
-## Data preview, errors and metrics
+
+### Azure SQL database
+
+[Azure SQL Database](https://azure.microsoft.com/services/sql-database/) is a fully managed platform as a service (PaaS) database engine that can help you to create a highly available and high-performance data storage layer for the applications and solutions in Azure. Azure Stream Analytics jobs can be configured to write the processed data to an existing table in SQL Database with no-code editor experience.
+
+> [!IMPORTANT]
+> The Azure SQL database table must exist before you can add it as output to your Stream Analytics job. The table's schema must match the fields and their types in your job's output.
+
+To configure SQL Database as an output, select **SQL Database** under the **Outputs** section on the editor ribbon. Then fill in the information needed to connect to your SQL database and select the table you want to write data to.
+
+For more information about Azure SQL database output for a Stream Analytics job, see [Azure SQL Database output from Azure Stream Analytics](./sql-database-output.md).
+
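Because the table's schema must match the job's output fields, it can be worth checking the table before starting the job. Here's a rough sketch using `pyodbc`; the field names, types, and table name are placeholders for your own:

```python
import pyodbc

# Hypothetical job output fields and their expected SQL types.
expected = {"DeviceId": "nvarchar", "Temperature": "float", "EventTime": "datetime2"}

conn = pyodbc.connect("<sql-database-connection-string>")
rows = conn.execute(
    "SELECT COLUMN_NAME, DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS "
    "WHERE TABLE_NAME = ?",
    "MyOutputTable",
).fetchall()
actual = {name: data_type for name, data_type in rows}

for field, sql_type in expected.items():
    if field not in actual:
        print(f"Missing column: {field}")
    elif actual[field] != sql_type:
        print(f"Type mismatch for {field}: table has {actual[field]}, job outputs {sql_type}")
```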
+## Data preview, authoring errors, runtime logs, and metrics
The no code drag-and-drop experience provides tools to help you author, troubleshoot, and evaluate the performance of your analytics pipeline for streaming data.
If you have any authoring errors or warnings, the Authoring errors tab will list
:::image type="content" source="./media/no-code-stream-processing/authoring-errors.png" alt-text="Screenshot showing the Authoring errors tab that shows a list of example errors." lightbox="./media/no-code-stream-processing/authoring-errors.png" :::
-### Runtime errors
+### Runtime logs
+
+Runtime logs are warning-, error-, or information-level logs produced while a job is running. These logs are helpful when you want to edit your Stream Analytics job topology or configuration for troubleshooting. We highly recommend turning on diagnostic logs and sending them to a Log Analytics workspace in **Settings** to get more insight into your running jobs for debugging.
+
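Once diagnostic logs are flowing to a Log Analytics workspace, you can also query them programmatically. A hedged sketch using the `azure-monitor-query` package; the workspace ID is a placeholder, and the query assumes the logs land in the default `AzureDiagnostics` table:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    "<log-analytics-workspace-id>",
    'AzureDiagnostics | where ResourceProvider == "MICROSOFT.STREAMANALYTICS" | take 20',
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```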
-Runtime errors are warning/Error/Critical level errors. These errors are helpful when you want to edit your Stream Analytics job topology/configuration for troubleshooting. In the following screenshot example, the user has configured Synapse output with an incorrect table name. The user started the job, but there's a Runtime error stating that the schema definition for the output table can't be found.
+In the following screenshot example, the user has configured SQL Database output with a table schema that doesn't match the fields of the job output.
:::image type="content" source="./media/no-code-stream-processing/runtime-errors.png" alt-text="Screenshot showing the Runtime errors tab where you can select a timespan to filter error events." lightbox="./media/no-code-stream-processing/runtime-errors.png" :::
Runtime errors are warning/Error/Critical level errors. These errors are helpful
If the job is running, you can monitor its health by navigating to the **Metrics** tab. The four metrics shown by default are Watermark delay, Input events, Backlogged input events, and Output events. You can use these to understand whether events are flowing into and out of the job without any input backlog. You can select more metrics from the list. To understand all the metrics in detail, see [Stream Analytics metrics](stream-analytics-job-metrics.md).
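You can also pull the same metrics programmatically, for example to alert on watermark delay. A sketch using the `azure-monitor-query` package; the resource ID is a placeholder, and the metric IDs (`InputEvents`, `OutputEvents`, `InputEventsSourcesBacklogged`, `OutputWatermarkDelaySeconds`) are assumptions based on the Stream Analytics metrics list:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

client = MetricsQueryClient(DefaultAzureCredential())
resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.StreamAnalytics/streamingjobs/<job-name>"
)
response = client.query_resource(
    resource_id,
    metric_names=[
        "InputEvents",
        "OutputEvents",
        "InputEventsSourcesBacklogged",
        "OutputWatermarkDelaySeconds",
    ],
    timespan=timedelta(hours=1),
)
for metric in response.metrics:
    points = [p for ts in metric.timeseries for p in ts.data]
    print(metric.name, [(p.timestamp, p.average) for p in points])
```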
- ![Metrics for jobs created from no code editor](./media/no-code-stream-processing/metrics-nocode.png)
## Start a Stream Analytics job
-You can save the job anytime while creating it. Once you have configured the Event Hub, transformations, and Streaming outputs for the job, you can Start the job.
+You can save the job anytime while creating it. Once you've configured the event hub, transformations, and streaming outputs for the job, you can start the job.
**Note**: While the no code editor is in Preview, the Azure Stream Analytics service is Generally Available.

:::image type="content" source="./media/no-code-stream-processing/no-code-save-start.png" alt-text="Screenshot showing the Save and Start options." lightbox="./media/no-code-stream-processing/no-code-save-start.png" :::

-- Output start time - When you start a job, you select a time for the job to start creating output.
- - Now - Makes the starting point of the output event stream the same as when the job is started.
- - Custom - You can choose the starting point of the output.
- - When last stopped - This option is available when the job was previously started but was stopped manually or failed. When you choose this option, the last output time will be used to restart the job, so no data is lost.
-- Streaming units - Streaming Units represent the amount of compute and memory assigned to the job while running. If you're unsure how many SUs to choose, we recommend that you start with three and adjust as needed.
-- Output data error handling - Output data error handling policies only apply when the output event produced by a Stream Analytics job doesn't conform to the schema of the target sink. You can configure the policy by choosing either **Retry** or **Drop**. For more information, see [Azure Stream Analytics output error policy](stream-analytics-output-error-policy.md).
-- Start - Starts the Stream Analytics job.
+- **Output start time** - When you start a job, you select a time for the job to start creating output.
+ - **Now** - Makes the starting point of the output event stream the same as when the job is started.
+ - **Custom** - You can choose the starting point of the output.
+ - **When last stopped** - This option is available when the job was previously started but was stopped manually or failed. When you choose this option, the last output time will be used to restart the job, so no data is lost.
+- **Streaming units** - Streaming Units represent the amount of compute and memory assigned to the job while running. If you're unsure how many SUs to choose, we recommend that you start with three and adjust as needed.
+- **Output data error handling** - Output data error handling policies only apply when the output event produced by a Stream Analytics job doesn't conform to the schema of the target sink. You can configure the policy by choosing either **Retry** or **Drop**. For more information, see [Azure Stream Analytics output error policy](stream-analytics-output-error-policy.md).
+- **Start** - Starts the Stream Analytics job.
:::image type="content" source="./media/no-code-stream-processing/start-job.png" alt-text="Screenshot showing the Start Stream Analytics job window where you review the job configuration and start the job." lightbox="./media/no-code-stream-processing/start-job.png" :::
You can see the list of all Stream Analytics jobs created by no-code drag and dr
:::image type="content" source="./media/no-code-stream-processing/jobs-list.png" alt-text="Screenshot showing the Stream Analytics job list where you review job status." lightbox="./media/no-code-stream-processing/jobs-list.png" ::: -- Filter ΓÇô You can filter the list by job name.-- Refresh ΓÇô The list doesn't auto-refresh currently. Use the option to refresh the list and see the latest status.-- Job name ΓÇô The name you provided in the first step of job creation. You can't edit it. Select the job name to open the job in the no-code drag and drop experience where you can Stop the job, edit it, and Start it again.-- Status ΓÇô The status of the job. Select Refresh on top of the list to see the latest status.-- Streaming units ΓÇô The number of Streaming units selected when you started the job.-- Output watermark - An indicator of liveliness for the data produced by the job. All events before the timestamp are already computed.-- Job monitoring ΓÇô Select **Open metrics** to see the metrics related to this Stream Analytics job. For more information about the metrics you can use to monitor your Stream Analytics job, see [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md).-- Operations ΓÇô Start, stop, or delete the job.
+- **Filter** - You can filter the list by job name.
+- **Refresh** - The list doesn't auto-refresh currently. Use the option to refresh the list and see the latest status.
+- **Job name** - The name you provided in the first step of job creation. You can't edit it. Select the job name to open the job in the no-code drag and drop experience where you can Stop the job, edit it, and Start it again.
+- **Status** - The status of the job. Select Refresh on top of the list to see the latest status.
+- **Streaming units** - The number of Streaming units selected when you started the job.
+- **Output watermark** - An indicator of liveliness for the data produced by the job. All events before the timestamp are already computed.
+- **Job monitoring** - Select **Open metrics** to see the metrics related to this Stream Analytics job. For more information about the metrics you can use to monitor your Stream Analytics job, see [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md).
+- **Operations** - Start, stop, or delete the job.
## Next steps
stream-analytics Sql Database Upsert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/sql-database-upsert.md
Title: Update or merge records in Azure SQL Database with Azure Functions description: This article describes how to use Azure Functions to update or merge records from Azure Stream Analytics to Azure SQL Database--- Last updated 12/03/2021
stream-analytics Stream Analytics Javascript User Defined Aggregates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-javascript-user-defined-aggregates.md
Title: JavaScript user-defined aggregates in Azure Stream Analytics description: This article describes how to perform advanced query mechanics with JavaScript user-defined aggregates in Azure Stream Analytics.--- Last updated 10/28/2017
stream-analytics Stream Analytics Javascript User Defined Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-javascript-user-defined-functions.md
Title: Azure Stream Analytics JavaScript user-defined functions description: This article is an introduction to JavaScript user-defined functions in Stream Analytics.--
stream-analytics Stream Analytics Stream Analytics Query Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-stream-analytics-query-patterns.md
Title: Common query patterns in Azure Stream Analytics description: This article describes several common query patterns and designs that are useful in Azure Stream Analytics jobs. --- Last updated 08/29/2022
stream-analytics Visual Studio Code Local Run All https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/visual-studio-code-local-run-all.md
Title: Test Azure Stream Analytics queries locally with Visual Studio Code description: This article describes how to test queries locally by using Azure Stream Analytics Tools for Visual Studio Code. --- Last updated 11/26/2021
synapse-analytics Concepts Lake Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/database-designer/concepts-lake-database.md
The new database designer gives you the possibility to create a data model for y
Lake databases use a data lake on the Azure Storage account to store the data of the database. The data can be stored in Parquet, Delta, or CSV format, and different settings can be used to optimize the storage. Every lake database uses a linked service to define the location of the root data folder. For every entity, separate folders are created by default within this database folder on the data lake. By default, all tables within a lake database use the same format, but the format and location of the data can be changed per entity if needed.
+> [!NOTE]
+> Publishing a lake database does not create any of the underlying structures or schemas needed to query the data in Spark or SQL. After publishing, load data into your lake database using [pipelines](../data-integration/data-integration-data-lake.md) to begin querying it.
+ ## Database compute
synapse-analytics Distribution Advisor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/distribution-advisor.md
The `dbo.read_dist_recommendation` system stored procedure will return recommend
- Modify queries to run on new tables.
- Execute queries on old and new tables to compare for performance improvements.
+> [!NOTE]
+> To help us improve Distribution Advisor, please fill out this [quick survey](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7MrzmOZCYJNjGsSytTeg4VUM1AwTlYyRVdFWFpPV0M1UERKRzU0TlJGUy4u).
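A minimal sketch of retrieving the recommendations from Python with `pyodbc`, assuming the stored procedure runs with its default parameters (the connection string is a placeholder):

```python
import pyodbc

conn = pyodbc.connect("<dedicated-sql-pool-connection-string>")
# Print each recommendation row returned by the stored procedure.
for row in conn.execute("EXEC dbo.read_dist_recommendation"):
    print(row)
```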
## Troubleshooting

This section contains common troubleshooting scenarios and common mistakes that you may encounter.
Ensure that you have the most up to date version of the stored procedure from Gi
## Azure Synapse product group feedback
+To help us improve Distribution Advisor, please fill out this [quick survey](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7MrzmOZCYJNjGsSytTeg4VUM1AwTlYyRVdFWFpPV0M1UERKRzU0TlJGUy4u).
If you need information not provided in this article, the [Microsoft Q&A question page for Azure Synapse](/answers/topics/azure-synapse-analytics.html) is a place for you to pose questions to other users and to the Azure Synapse Analytics Product Group. We actively monitor this forum to ensure that your questions are answered either by another user or one of us. If you prefer to ask your questions on Stack Overflow, we also have an [Azure Synapse Analytics Stack Overflow Forum](https://stackoverflow.com/questions/tagged/azure-synapse).
virtual-desktop App Attach File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/app-attach-file-share.md
To assign session host VMs permissions for the storage account and file share:
9. Grant NTFS permissions on the file share to the AD DS group.
-10. Set up NTFS permissions for the user accounts. You'll need an operating unit (OU) sourced from the AD DS that the accounts in the VM belong to.
+10. Set up NTFS permissions for the user accounts. You'll need an organizational unit (OU) sourced from the AD DS that the accounts in the VM belong to.
Once you've assigned the identity to your storage, follow the instructions in the articles in [Next steps](#next-steps) to grant other required permissions to the identity you've assigned to the VMs.
-You'll also need to make sure your session host VMs have New Technology File System (NTFS) permissions. You must have an operational unit container that's sourced from Active Directory Domain Services (AD DS), and your users must be members of that operational unit to use these permissions.
+You'll also need to make sure your session host VMs have NTFS permissions. You must have an OU container that's sourced from Active Directory Domain Services (AD DS), and your users must be members of that OU to use these permissions.
## Next steps
virtual-desktop Azure Monitor Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-monitor-costs.md
You can also reduce costs by removing performance counters. To learn how to remo
### Manage Windows Event Logs
-Windows Events are unlikely to cause a spike in data ingestion when all hosts are healthy. An unhealthy host can increase the number of events sent to the log, but the information can be critical to fixing the host's issues. We recommend keeping them. To learn more about how to manage Windows Event Logs, see [Configuring Windows Event logs](../azure-monitor/agents/data-sources-windows-events.md#configuring-windows-event-logs).
+Windows Events are unlikely to cause a spike in data ingestion when all hosts are healthy. An unhealthy host can increase the number of events sent to the log, but the information can be critical to fixing the host's issues. We recommend keeping them. To learn more about how to manage Windows Event Logs, see [Configuring Windows Event logs](../azure-monitor/agents/data-sources-windows-events.md#configure-windows-event-logs).
### Manage diagnostics
virtual-desktop Azure Monitor Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-monitor-glossary.md
The following table lists the required Windows Event Logs for Azure Monitor for
| Microsoft-FSLogix-Apps/Operational|Error, Warning, and Information| |Microsoft-FSLogix-Apps/Admin|Error, Warning, and Information|
-To learn more about Windows Event Logs, see [Windows Event records properties](../azure-monitor/agents/data-sources-windows-events.md#configuring-windows-event-logs).
+To learn more about Windows Event Logs, see [Windows Event records properties](../azure-monitor/agents/data-sources-windows-events.md#configure-windows-event-logs).
## Next steps
virtual-desktop Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-monitor.md
To set up performance counters using the configuration workbook:
You'll also need to enable specific Windows Event Logs to collect errors, warnings, and information from the session hosts and send them to the Log Analytics workspace.
-If you've already enabled Windows Event Logs and want to remove them, follow the instructions in [Configuring Windows Event Logs](../azure-monitor/agents/data-sources-windows-events.md#configuring-windows-event-logs). You can add and remove Windows Event Logs in the same location.
+If you've already enabled Windows Event Logs and want to remove them, follow the instructions in [Configuring Windows Event Logs](../azure-monitor/agents/data-sources-windows-events.md#configure-windows-event-logs). You can add and remove Windows Event Logs in the same location.
To set up Windows Event Logs using the configuration workbook:
virtual-desktop Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/management.md
Title: Microsoft Endpoint Manager for Azure Virtual Desktop
-description: Recommended ways for you to manage your Azure Virtual Desktop environment.
+ Title: Manage session hosts with Microsoft Endpoint Manager - Azure Virtual Desktop
+description: Recommended ways for you to manage your Azure Virtual Desktop session hosts.
- Previously updated : 06/29/2022 Last updated : 08/30/2022
-# Microsoft Endpoint Manager for Azure Virtual Desktop
+# Manage session hosts with Microsoft Endpoint Manager
We recommend using [Microsoft Endpoint Manager](https://www.microsoft.com/endpointmanager) to manage your Azure Virtual Desktop environment. Microsoft Endpoint Manager is a unified management platform that includes Microsoft Endpoint Configuration Manager and Microsoft Intune.
virtual-desktop Safe Url List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/safe-url-list.md
Title: Required URLs for Azure Virtual Desktop
description: A list of URLs you must unblock to ensure your Azure Virtual Desktop deployment works as intended. Previously updated : 05/26/2022 Last updated : 08/30/2022 # Required URLs for Azure Virtual Desktop
-In order to deploy and make Azure Virtual Desktop available to your users, you must allow specific URLs that your session host virtual machines (VMs) can access them anytime. Users also need to be able to connect to certain URLs to access their Azure Virtual Desktop resources. This article lists the required URLs you need to allow for your session hosts and users. Azure Virtual Desktop doesn't support deployments that block the URLs listed in this article.
+In order to deploy and make Azure Virtual Desktop available to your users, you must allow specific URLs so that your session host virtual machines (VMs) can access them anytime. Users also need to be able to connect to certain URLs to access their Azure Virtual Desktop resources. This article lists the required URLs you need to allow for your session hosts and users. These URLs could be blocked if you're using [Azure Firewall](../firewall/protect-azure-virtual-desktop.md) or a third-party firewall or proxy service. Azure Virtual Desktop doesn't support deployments that block the URLs listed in this article.
You can validate that your session host VMs can connect to these URLs by following the steps to run the [Required URL Check tool](required-url-check-tool.md). The Required URL Check tool will validate each URL and show whether your session host VMs can access them. You can only use it for deployments in the Azure public cloud; it doesn't check access for sovereign clouds.

## Session host virtual machines
-Below is the list of URLs your session host VMs need to access for Azure Virtual Desktop. Select the relevant tab based on which cloud you're using.
+The following table lists the URLs your session host VMs need to access for Azure Virtual Desktop. Select the relevant tab based on which cloud you're using.
# [Azure cloud](#tab/azure)
-| Address | Outbound TCP port | Purpose | Service Tag |
+| Address | Outbound TCP port | Purpose | Service tag |
|||||
+| `login.microsoftonline.com` | 443 | Authentication to Microsoft Online Services |
| `*.wvd.microsoft.com` | 443 | Service traffic | WindowsVirtualDesktop |
| `*.prod.warm.ingest.monitor.core.windows.net` | 443 | Agent traffic | AzureMonitor |
| `catalogartifact.azureedge.net` | 443 | Azure Marketplace | AzureFrontDoor.Frontend |
| `gcs.prod.monitoring.core.windows.net` | 443 | Agent traffic | AzureCloud |
| `kms.core.windows.net` | 1688 | Windows activation | Internet |
| `azkms.core.windows.net` | 1688 | Windows activation | Internet |
-| `mrsglobalsteus2prod.blob.core.windows.net` | 443 | Agent and SXS stack updates | AzureCloud |
+| `mrsglobalsteus2prod.blob.core.windows.net` | 443 | Agent and side-by-side (SXS) stack updates | AzureCloud |
| `wvdportalstorageblob.blob.core.windows.net` | 443 | Azure portal support | AzureCloud |
| `169.254.169.254` | 80 | [Azure Instance Metadata service endpoint](../virtual-machines/windows/instance-metadata-service.md) | N/A |
| `168.63.129.16` | 80 | [Session host health monitoring](../virtual-network/network-security-groups-overview.md#azure-platform-considerations) | N/A |
Below is the list of URLs your session host VMs need to access for Azure Virtual
| `www.microsoft.com` | 80 | Certificates | N/A |

> [!IMPORTANT]
-> We have finished transitioning the URLs we use for Agent traffic. We no longer support the URLs below. To avoid your session host VMs from showing *Needs Assistance* related to this, please allow `*.prod.warm.ingest.monitor.core.windows.net` if you have not already. Please remove these URLs if you have previously explicitly allowed them:
+> We've finished transitioning the URLs we use for Agent traffic. We no longer support the following URLs. To prevent your session host VMs from showing a *Needs Assistance* status due to this, you must allow the URL `*.prod.warm.ingest.monitor.core.windows.net` if you haven't already. You should also remove the following URLs if you explicitly allowed them before the change:
>
-> | Address | Outbound TCP port | Purpose | Service Tag |
+> | Address | Outbound TCP port | Purpose | Service tag |
> |--|--|--|--|
> | `production.diagnostics.monitoring.core.windows.net` | 443 | Agent traffic | AzureCloud |
> | `*xt.blob.core.windows.net` | 443 | Agent traffic | AzureCloud |
The following table lists optional URLs that your session host virtual machines
| Address | Outbound TCP port | Purpose |
|--|--|--|
-| `login.microsoftonline.com` | 443 | Authentication to Microsoft Online Services |
| `login.windows.net` | 443 | Sign in to Microsoft Online Services and Microsoft 365 |
| `*.events.data.microsoft.com` | 443 | Telemetry Service |
-| `www.msftconnecttest.com` | 443 | Detects if the OS is connected to the internet |
+| `www.msftconnecttest.com` | 443 | Detects if the session host is connected to the internet |
| `*.prod.do.dsp.mp.microsoft.com` | 443 | Windows Update |
| `*.sfx.ms` | 443 | Updates for OneDrive client software |
| `*.digicert.com` | 443 | Certificate revocation check |
The following table lists optional URLs that your session host virtual machines
# [Azure for US Government](#tab/azure-for-us-government)
-| Address | Outbound TCP port | Purpose | Service Tag |
+| Address | Outbound TCP port | Purpose | Service tag |
|--|--|--|--|
+| `login.microsoftonline.us` | 443 | Authentication to Microsoft Online Services |
| `*.wvd.azure.us` | 443 | Service traffic | WindowsVirtualDesktop |
| `*.prod.warm.ingest.monitor.core.usgovcloudapi.net` | 443 | Agent traffic | AzureMonitor |
| `gcs.monitoring.core.usgovcloudapi.net` | 443 | Agent traffic | AzureCloud |
| `kms.core.usgovcloudapi.net` | 1688 | Windows activation | Internet |
-| `mrsglobalstugviffx.blob.core.usgovcloudapi.net` | 443 | Agent and SXS stack updates | AzureCloud |
+| `mrsglobalstugviffx.blob.core.usgovcloudapi.net` | 443 | Agent and side-by-side (SXS) stack updates | AzureCloud |
| `wvdportalstorageblob.blob.core.usgovcloudapi.net` | 443 | Azure portal support | AzureCloud |
| `169.254.169.254` | 80 | [Azure Instance Metadata service endpoint](../virtual-machines/windows/instance-metadata-service.md) | N/A |
| `168.63.129.16` | 80 | [Session host health monitoring](../virtual-network/network-security-groups-overview.md#azure-platform-considerations) | N/A |
| `ocsp.msocsp.com` | 80 | Certificates | N/A |

> [!IMPORTANT]
-> We have finished transitioning the URLs we use for Agent traffic. We no longer support the URLs below. To avoid your session host VMs from showing *Needs Assistance* related to this, please allow `*.prod.warm.ingest.monitor.core.usgovcloudapi.net`, if you have not already. Please remove these URLs if you have previously explicitly allowed them:
+> We've finished transitioning the URLs we use for Agent traffic. We no longer support the following URLs. To prevent your session host VMs from showing a *Needs Assistance* status due to this, you must allow the URL `*.prod.warm.ingest.monitor.core.usgovcloudapi.net`, if you haven't already. You should also remove the following URLs if you explicitly allowed them before the change:
>
-> | Address | Outbound TCP port | Purpose | Service Tag |
+> | Address | Outbound TCP port | Purpose | Service tag |
> |--|--|--|--|
> | `monitoring.core.usgovcloudapi.net` | 443 | Agent traffic | AzureCloud |
> | `fairfax.warmpath.usgovcloudapi.net` | 443 | Agent traffic | AzureCloud |
The following table lists optional URLs that your session host virtual machines
| Address | Outbound TCP port | Purpose |
|--|--|--|
-| `login.microsoftonline.us` | 443 | Authentication to Microsoft Online Services and Microsoft 365 |
| `*.events.data.microsoft.com` | 443 | Telemetry Service |
-| `www.msftconnecttest.com` | 443 | Detects if the OS is connected to the internet |
+| `www.msftconnecttest.com` | 443 | Detects if the session host is connected to the internet |
| `*.prod.do.dsp.mp.microsoft.com` | 443 | Windows Update |
| `oneclient.sfx.ms` | 443 | Updates for OneDrive client software |
| `*.digicert.com` | 443 | Certificate revocation check |
Azure Virtual Desktop currently doesn't have a list of IP address ranges that yo
## Remote Desktop clients
-Any [Remote Desktop clients](user-documentation/connect-windows-7-10.md?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json&bc=%2Fazure%2Fvirtual-desktop%2Fbreadcrumb%2Ftoc.json) you use to connect to Azure Virtual Desktop must have access to the URLs below. Select the relevant tab based on which cloud you're using. Opening these URLs is essential for a reliable client experience. Blocking access to these URLs is unsupported and will affect service functionality.
+Any [Remote Desktop clients](user-documentation/connect-windows-7-10.md?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json&bc=%2Fazure%2Fvirtual-desktop%2Fbreadcrumb%2Ftoc.json) you use to connect to Azure Virtual Desktop must have access to the following URLs. Select the relevant tab based on which cloud you're using. Opening these URLs is essential for a reliable client experience. Blocking access to these URLs is unsupported and will affect service functionality.
# [Azure cloud](#tab/azure)

| Address | Outbound TCP port | Purpose | Client(s) |
|--|--|--|--|
+| `login.microsoftonline.com` | 443 | Authentication to Microsoft Online Services | All |
| `*.wvd.microsoft.com` | 443 | Service traffic | All |
| `*.servicebus.windows.net` | 443 | Troubleshooting data | All |
| `go.microsoft.com` | 443 | Microsoft FWLinks | All |
Any [Remote Desktop clients](user-documentation/connect-windows-7-10.md?toc=%2Fa
| Address | Outbound TCP port | Purpose | Client(s) |
|--|--|--|--|
+| `login.microsoftonline.us` | 443 | Authentication to Microsoft Online Services | All |
| `*.wvd.azure.us` | 443 | Service traffic | All |
| `*.servicebus.usgovcloudapi.net` | 443 | Troubleshooting data | All |
| `go.microsoft.com` | 443 | Microsoft FWLinks | All |
Any [Remote Desktop clients](user-documentation/connect-windows-7-10.md?toc=%2Fa
These URLs only correspond to client sites and resources. This list doesn't include URLs for other services like Azure Active Directory or Office 365. Azure Active Directory URLs can be found under IDs 56, 59 and 125 in [Office 365 URLs and IP address ranges](/office365/enterprise/urls-and-ip-address-ranges#microsoft-365-common-and-office-online).
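For a quick spot check of outbound reachability from a session host, a probe like the following can help. It's a rough sketch, not a replacement for the Required URL Check tool, and the hostnames are a sample from the tables above (wildcard entries vary per deployment):

```python
import socket

endpoints = [
    ("login.microsoftonline.com", 443),
    ("catalogartifact.azureedge.net", 443),
    ("kms.core.windows.net", 1688),
    ("azkms.core.windows.net", 1688),
    ("wvdportalstorageblob.blob.core.windows.net", 443),
    ("www.microsoft.com", 80),
]

for host, port in endpoints:
    try:
        # A successful TCP connect means the URL/port isn't blocked outbound.
        with socket.create_connection((host, port), timeout=5):
            print(f"OK    {host}:{port}")
    except OSError as exc:
        print(f"FAIL  {host}:{port} ({exc})")
```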
+## Next steps
+
+To learn how to unblock these URLs in Azure Firewall for your Azure Virtual Desktop deployment, see [Use Azure Firewall to protect Azure Virtual Desktop](../firewall/protect-azure-virtual-desktop.md).
virtual-desktop Start Virtual Machine Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/start-virtual-machine-connect.md
Title: Set up Start VM on Connect for Azure Virtual Desktop
description: How to set up the Start VM on Connect feature for Azure Virtual Desktop to turn on session host virtual machines only when they're needed. Previously updated : 07/21/2022 Last updated : 08/30/2022
To configure Start VM on Connect using the Azure portal:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
+1. In the search bar, enter *Azure Virtual Desktop* and select the matching service entry.
1. Select **Host pools**, then select the name of the host pool where you want to enable the setting.
To configure Start VM on Connect using the Azure portal:
1. In the configuration section, you'll see **Start VM on connect**. Select **Yes** to enable it, or **No** to disable it.
-1. Select **Save**. The new setting is applied.
+2. Select **Save** to apply the settings.
# [PowerShell](#tab/azure-powershell)
You need to make sure you have the names of the resource group and host pool you
+>[!NOTE]
+>In pooled host pools, Start VM on Connect will start a VM every five minutes at most. If other users try to sign in during this five-minute period while there aren't any available resources, Start VM on Connect won't start a new VM. Instead, the users trying to sign in will receive an error message that says, "No resources available."
+ ## Troubleshooting If you run into any issues with Start VM On Connect, we recommend you use the Azure Virtual Desktop [diagnostics feature](diagnostics-log-analytics.md) to check for problems. If you receive an error message, make sure to pay close attention to the message content and make a note of the error name for reference. You can also use [Azure Monitor for Azure Virtual Desktop](azure-monitor.md) to get suggestions for how to resolve issues.
virtual-desktop Troubleshoot Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-azure-monitor.md
Learn more about data terms at the [Azure Monitor for Window Virtual Desktop glo
If you want to monitor more Performance counters or Windows Event Logs, you can enable them to send diagnostics info to your Log Analytics workspace and monitor them in **Host Diagnostics: Host browser**.

- To add performance counters, see [Configuring performance counters](../azure-monitor/agents/data-sources-performance-counters.md#configuring-performance-counters)
-- To add Windows Events, see [Configuring Windows Event Logs](../azure-monitor/agents/data-sources-windows-events.md#configuring-windows-event-logs)
+- To add Windows Events, see [Configuring Windows Event Logs](../azure-monitor/agents/data-sources-windows-events.md#configure-windows-event-logs)
Can't find a data point to help diagnose an issue? Send us feedback!
virtual-desktop Connect Android 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/connect-android-2019.md
To subscribe to a feed:
1. In the Connection Center, tap **+**, and then tap **Remote Resource Feed**.
2. Enter the feed URL into the **Feed URL** field. The feed URL can be either a URL or an email address.
- - If you use a URL, use the one your admin gave you, normally <https://rdweb.wvd.microsoft.com>.
+ - If you use a URL, use the one your admin gave you, normally `https://rdweb.wvd.microsoft.com/api/feeddiscovery/webfeeddiscovery.aspx`.
   - To use email, enter your email address. The client will search for a URL associated with your email address if your admin configured the server that way.
3. Tap **NEXT**.
4. Provide your credentials when prompted.
virtual-desktop Connect Ios 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/connect-ios-2019.md
To subscribe to a feed:
1. In the Connection Center, tap **+**, and then tap **Add Workspace**.
2. Enter the feed URL into the **Feed URL** field. The feed URL can be either a URL or an email address.
- - If you use a URL, use the one your admin gave you. Normally, the URL is <https://rdweb.wvd.microsoft.com>.
+ - If you use a URL, use the one your admin gave you. Normally, the URL is `https://rdweb.wvd.microsoft.com/api/feeddiscovery/webfeeddiscovery.aspx`.
   - To use email, enter your email address. This tells the client to search for a URL associated with your email address if your admin configured the server that way.
3. Tap **Next**.
4. Provide your credentials when prompted.
virtual-desktop Connect Macos 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/connect-macos-2019.md
To subscribe to a feed:
1. Select **Add Workspace** on the main page to connect to the service and retrieve your resources.
2. Enter the Feed URL. This can be a URL or email address:
- - If you use a URL, use the one your admin gave you. Normally, the URL is <https://rdweb.wvd.microsoft.com>.
+ - If you use a URL, use the one your admin gave you. Normally, the URL is `https://rdweb.wvd.microsoft.com/api/feeddiscovery/webfeeddiscovery.aspx`.
   - To use email, enter your email address. This tells the client to search for a URL associated with your email address if your admin configured the server that way.
3. Select **Add**.
4. Sign in with your user account when prompted.
virtual-desktop What Is App Attach https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/what-is-app-attach.md
In an Azure Virtual Desktop deployment, MSIX app attach can:
- Reduce the time it takes for a user to sign in. - Reduce infrastructure requirements and cost.
-MSIX app attach must be applicable outside of VDI or SBC.
-
## Traditional app layering compared to MSIX app attach

The following table compares key features of MSIX app attach and app layering.
virtual-desktop Windows 11 Language Packs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/windows-11-language-packs.md
Title: Install language packs on Windows 11 Enterprise VMs in Azure Virtual Desk
description: How to install language packs for Windows 11 Enterprise VMs in Azure Virtual Desktop. Previously updated : 10/04/2021 Last updated : 08/23/2022
The second option is more efficient in terms of resources and cost, but requires
Before you can add languages to a Windows 11 Enterprise VM, you'll need to have the following things ready:

- An Azure VM with Windows 11 Enterprise installed
-- A Language and Optional Features (LoF) ISO. You can download the ISO at [Windows 11 Language and Optional Features LoF ISO](https://software-download.microsoft.com/download/sg/22000.1.210604-1628.co_release_amd64fre_CLIENT_LOF_PACKAGES_OEM.iso)
+- A Language and Optional Features (LoF) ISO. You can download the ISO at [Windows 11 Language and Optional Features LoF ISO](https://software-download.microsoft.com/download/sg/22000.1.210604-1628.co_release_amd64fre_CLIENT_LOF_PACKAGES_OEM.iso)
- An Azure Files share or a file share on a Windows File Server VM

>[!NOTE]
To create the content repository you'll use to add languages and features to you
1. Open the VM you want to add languages to in Azure.
-2. Open and mount the ISO file you downloaded in [Requirements](#requirements) on the VM.
+2. On the VM, open and mount the ISO file you downloaded in the [Requirements](#requirements) section above.
3. Create a folder on the file share.
virtual-machines Disks Change Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-change-performance.md
description: Learn about performance tiers for managed disks.
Previously updated : 03/24/2022 Last updated : 08/30/2022
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-The performance of your Azure managed disk is set when you create your disk, in the form of its performance tier. The performance tier determines the IOPS and throughput your managed disk has. When you set the provisioned size of your disk, a performance tier is automatically selected. The performance tier can be changed at deployment or afterwards, without changing the size of the disk.
+The performance of your Azure managed disk is set when you create your disk, in the form of its performance tier. When you set the provisioned size of your disk, a performance tier is automatically selected. The performance tier determines the IOPS and throughput your managed disk has. The performance tier can be changed at deployment or afterwards, without changing the size of the disk and without downtime.
-Changing the performance tier allows you to prepare for and meet higher demand without using your disk's bursting capability. It can be more cost-effective to change your performance tier rather than rely on bursting, depending on how long the additional performance is necessary. This is ideal for events that temporarily require a consistently higher level of performance, like holiday shopping, performance testing, or running a training environment. To handle these events, you can use a higher performance tier for as long as you need it. You can then return to the original tier when you no longer need the additional performance.
+Changing the performance tier allows you to prepare for and meet higher demand without using your disk's bursting capability. It can be more cost-effective to change your performance tier rather than rely on bursting, depending on how long the additional performance is necessary. This is ideal for events that temporarily require a consistently higher level of performance, like holiday shopping, performance testing, or running a training environment. To handle these events, you can switch a disk to a higher performance tier without downtime, for as long as you need the additional performance. You can then return to the original tier without downtime when the additional performance is no longer necessary.
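For example, a tier change can be scripted around a known busy period. A hedged sketch using the `azure-mgmt-compute` SDK; the subscription, resource group, disk name, and target tier are placeholders, and the `tier` property name is an assumption based on the SDK's `DiskUpdate` model:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Raise the performance tier ahead of a busy period; the disk size is unchanged.
poller = client.disks.begin_update(
    "myResourceGroup",
    "myDataDisk",
    {"tier": "P50"},
)
print(poller.result().tier)  # revert to the original tier the same way afterwards
```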
## Restrictions
virtual-machines Disks Performance Tiers Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-performance-tiers-portal.md
description: Learn how to change performance tiers for new and existing managed
Previously updated : 03/24/2022 Last updated : 08/30/2022
The following steps show how to change the performance tier of your disk when yo
:::image type="content" source="media/disks-performance-tiers-portal/new-disk-change-performance-tier.png" alt-text="Screenshot of the disk creation blade, a disk is highlighted, and the performance tier dropdown is highlighted." lightbox="media/disks-performance-tiers-portal/performance-tier-settings.png":::
-### Change the performance tier of an existing disk without downtime
+### Change the performance tier of an existing disk
-You can also change your performance tier without downtime, so you don't have to deallocate your VM or detach your disk to change the tier.
+A disk's performance tier can be changed without downtime, so you don't have to deallocate your VM or detach your disk to change the tier.
### Change performance tier
-Now that the feature has been registered, you can change applicable disk's performance tiers without downtime.
1. Navigate to the VM containing the disk you'd like to change.
1. Select your disk.
1. Select **Size + Performance**.
virtual-machines Disks Performance Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-performance-tiers.md
description: Learn how to change performance tiers for existing managed disks us
Previously updated : 03/24/2022 Last updated : 08/30/2022
virtual-machines Dplsv5 Dpldsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dplsv5-dpldsv5-series.md
Dplsv5-series virtual machines feature the Ampere® Altra® Arm-based processor
- [Memory Preserving Updates](maintenance-and-updates.md): Supported
- [VM Generation Support](generation-2.md): Generation 2
- [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported
-- [Ephemeral OS Disks](ephemeral-os-disks.md): Supported
+- [Ephemeral OS Disks](ephemeral-os-disks.md): Not supported
- [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not supported

| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps | Max NICs | Max network bandwidth (Mbps) |
virtual-machines Dpsv5 Dpdsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dpsv5-dpdsv5-series.md
Dpsv5-series virtual machines feature the Ampere® Altra® Arm-based processor o
- [Memory Preserving Updates](maintenance-and-updates.md): Supported
- [VM Generation Support](generation-2.md): Generation 2
- [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported
-- [Ephemeral OS Disks](ephemeral-os-disks.md): Supported
+- [Ephemeral OS Disks](ephemeral-os-disks.md): Not supported
- [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not supported

| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps | Max NICs | Max network bandwidth (Mbps) |
virtual-machines Epsv5 Epdsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/epsv5-epdsv5-series.md
Epsv5-series virtual machines feature the Ampere® Altra® Arm-based processor o
- [Memory Preserving Updates](maintenance-and-updates.md): Supported
- [VM Generation Support](generation-2.md): Generation 2
- [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported
-- [Ephemeral OS Disks](ephemeral-os-disks.md): Supported
+- [Ephemeral OS Disks](ephemeral-os-disks.md): Not supported
- [Nested virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not supported

| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps | Max NICs | Max network bandwidth (Mbps) |
virtual-machines Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/quick-create-cli.md
This quickstart shows you how to use the Azure CLI to deploy a Linux virtual machine (VM) in Azure. The Azure CLI is used to create and manage Azure resources via either the command line or scripts.
-In this tutorial, we will be installing the latest Ubuntu LTS image. To show the VM in action, you'll connect to it using SSH and install the NGINX web server.
+In this tutorial, we will be installing the latest Debian image. To show the VM in action, you'll connect to it using SSH and install the NGINX web server.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
virtual-machines Extensions Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/extensions-diagnostics.md
az vm extension list --resource-group myResourceGroup --vm-name myVM -o table
[See this article](../../azure-monitor/agents/diagnostics-extension-troubleshooting.md) for a more comprehensive troubleshooting guide for the Azure Diagnostics extension.
+#### Error: "Profile operation failed"
+
+To enable profiling, please follow [Enable Profiler for web apps on an Azure virtual machine](../../azure-monitor/profiler/profiler-vm.md#enable-profiler-for-web-apps-on-an-azure-virtual-machine).
+ ### Support If you need more help at any point in this article, you can contact the Azure experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/forums/). Alternatively, you can file an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/) and select Get support. For information about using Azure Support, read the [Microsoft Azure support FAQ](https://azure.microsoft.com/support/faq/).
virtual-machines Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/quick-create-portal.md
Previously updated : 03/15/2021 Last updated : 08/29/2022
Sign in to the Azure portal at https://portal.azure.com.
1. Enter *virtual machines* in the search. 1. Under **Services**, select **Virtual machines**.
-1. In the **Virtual machines** page, select **Create** and then **Virtual machine**. The **Create a virtual machine** page opens.
-
-1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and then choose to **Create new** resource group. Enter *myResourceGroup* for the name.
-
- ![Screenshot of the Project details section showing where you select the Azure subscription and the resource group for the virtual machine](./media/quick-create-portal/project-details.png)
-
-1. Under **Instance details**, enter *myVM* for the **Virtual machine name** and choose *Windows Server 2019 Datacenter - Gen2* for the **Image**. Leave the other defaults.
+1. In the **Virtual machines** page, select **Create** and then **Azure virtual machine**. The **Create a virtual machine** page opens.
+1. Under **Instance details**, enter *myVM* for the **Virtual machine name** and choose *Windows Server 2022 Datacenter: Azure Edition - Gen 2* for the **Image**. Leave the other defaults.
:::image type="content" source="media/quick-create-portal/instance-details.png" alt-text="Screenshot of the Instance details section where you provide a name for the virtual machine and select its region, image and size.":::
Sign in to the Azure portal at https://portal.azure.com.
1. Under **Administrator account**, provide a username, such as *azureuser* and a password. The password must be at least 12 characters long and meet the [defined complexity requirements](faq.yml#what-are-the-password-requirements-when-creating-a-vm-).
- ![Screenshot of the Administrator account section where you provide the administrator username and password](./media/quick-create-portal/administrator-account.png)
+ :::image type="content" source="media/quick-create-portal/administrator-account.png" alt-text="Screenshot of the Administrator account section where you provide the administrator username and password":::
1. Under **Inbound port rules**, choose **Allow selected ports** and then select **RDP (3389)** and **HTTP (80)** from the drop-down.
- ![Screenshot of the inbound port rules section where you select what ports inbound connections are allowed on](./media/quick-create-portal/inbound-port-rules.png)
+ :::image type="content" source="media/quick-create-portal/inbound-port-rules.png" alt-text="Screenshot of the inbound port rules section where you select what ports inbound connections are allowed on":::
1. Leave the remaining defaults and then select the **Review + create** button at the bottom of the page.
- ![Screenshot showing the Review and create button at the bottom of the page](./media/quick-create-portal/review-create.png)
+ :::image type="content" source="media/quick-create-portal/review-create.png" alt-text="Screenshot showing the Review + create button at the bottom of the page.":::
+ 1. After validation runs, select the **Create** button at the bottom of the page.
+ :::image type="content" source="media/quick-create-portal/validation.png" alt-text="Screenshot showing that validation has passed. Select the Create button to create the VM.":::
1. After deployment is complete, select **Go to resource**.
- ![Screenshot showing the next step of going to the resource](./media/quick-create-portal/next-steps.png)
+ :::image type="content" source="media/quick-create-portal/next-steps.png" alt-text="Screenshot showing the next step of going to the resource.":::
## Connect to virtual machine
Create a remote desktop connection to the virtual machine. These directions tell
1. On the overview page for your virtual machine, select the **Connect** > **RDP**.
- ![Screenshot of the virtual machine overview page showing the location of the connect button](./media/quick-create-portal/portal-quick-start-9.png)
-
-2. In the **Connect with RDP** page, keep the default options to connect by IP address, over port 3389, and click **Download RDP file**.
+ :::image type="content" source="media/quick-create-portal/portal-quick-start-9.png" alt-text="Screenshot of the virtual machine overview page showing the location of the connect button.":::
+
+2. In the **Connect with RDP** tab, keep the default options to connect by IP address, over port 3389, and click **Download RDP file**.
+
+ :::image type="content" source="media/quick-create-portal/remote-desktop.png" alt-text="Screenshot showing the remote desktop settings and the Download RDP file button.":::
-2. Open the downloaded RDP file and click **Connect** when prompted.
+2. Open the downloaded RDP file and click **Connect** when prompted.
3. In the **Windows Security** window, select **More choices** and then **Use a different account**. Type the username as **localhost**\\*username*, enter the password you created for the virtual machine, and then click **OK**.
virtual-machines Expose Sap Process Orchestration On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/expose-sap-process-orchestration-on-azure.md
Existing implementations based on SAP middleware often relied on SAP's proprieta
Dispatching approaches range from traditional reverse proxies like Apache, to Platform-as-a-Service (PaaS) options like the [Azure Load Balancer](../../../load-balancer/load-balancer-overview.md), or the opinionated SAP WebDispatcher. The overall concepts described in this article apply to the options mentioned. Have a look at SAP's [wiki](https://wiki.scn.sap.com/wiki/display/SI/Can+I+use+a+different+load+balancer+instead+of+SAP+Web+Dispatcher) for their guidance on using non-SAP load balancers. > [!NOTE]
-> All described setups in this article assume a hub-spoke networking topology, where shared services are deployed into the hub. Given the criticality of SAP, even more isolation may be desirable.
+> All described setups in this article assume a hub-spoke networking topology, where shared services are deployed into the hub. Given the criticality of SAP, even more isolation may be desirable. For more information, see our SAP perimeter-network design (also known as DMZ) [guide](/azure/architecture/guide/sap/sap-internet-inbound-outbound#network-design).
## Primary Azure services used
Which integration flavor described in this article fits your requirements best,
## Alternatives to SAP Process Orchestration with Azure Integration Services
-The integration scenarios covered by SAP Process Orchestration can be addressed with the [Azure Integration Service portfolio](https://azure.microsoft.com/product-categories/integration/) natively. Have a look at the [Azure Logic Apps connectors](../../../logic-apps/logic-apps-using-sap-connector.md) for your desired SAP interfaces to get started. The connector guide contains more details for [AS2](../../../logic-apps/logic-apps-enterprise-integration-as2.md), [EDIFACT](../../../logic-apps/logic-apps-enterprise-integration-edifact.md) etc. too. See [this blog series](https://blogs.sap.com/2018/09/25/your-sap-on-azure-part-9-easy-integration-using-azure-logic-apps/) for a concrete example of iDoc processing with AS2 via Logic Apps.
+The integration scenarios covered by SAP Process Orchestration can be natively addressed with the [Azure Integration Service portfolio](https://azure.microsoft.com/product-categories/integration/). Have a look at the [Azure Logic Apps connectors](../../../logic-apps/logic-apps-using-sap-connector.md) for your desired SAP interfaces to get started. The connector guide contains more details for [AS2](../../../logic-apps/logic-apps-enterprise-integration-as2.md), [EDIFACT](../../../logic-apps/logic-apps-enterprise-integration-edifact.md) etc. too. See [this blog series](https://blogs.sap.com/2022/08/30/port-your-legacy-sap-middleware-flows-to-cloud-native-paas-solutions/) for insights on how to design SAP iFlow patterns with cloud-native means.
## Next steps
virtual-machines Hana Vm Operations Netapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-vm-operations-netapp.md
When considering Azure NetApp Files for the SAP Netweaver and SAP HANA, be aware
- Azure NetApp Files offers [export policy](../../../azure-netapp-files/azure-netapp-files-configure-export-policy.md): you can control the allowed clients, the access type (Read&Write, Read Only, etc.). - Azure NetApp Files feature isn't zone aware yet. Currently Azure NetApp Files feature isn't deployed in all Availability zones in an Azure region. Be aware of the potential latency implications in some Azure regions. Though to achieve proximity, the functionality of [Application Volume Groups](../../../azure-netapp-files/application-volume-group-introduction.md) is in public preview. See also later in this article - The User ID for <b>sid</b>adm and the Group ID for `sapsys` on the virtual machines must match the configuration in Azure NetApp Files.
+- Implement Linux OS parameters mentioned in SAP note [3024346](https://launchpad.support.sap.com/#/notes/3024346)
> [!IMPORTANT] > For SAP HANA workloads, low latency is critical. Work with your Microsoft representative to ensure that the virtual machines and the Azure NetApp Files volumes are deployed in close proximity.
Therefore you could consider to deploy similar throughput for the ANF volumes as
Documentation on how to deploy an SAP HANA scale-out configuration with standby node using NFS v4.1 volumes that are hosted in ANF is published in [SAP HANA scale-out with standby node on Azure VMs with Azure NetApp Files on SUSE Linux Enterprise Server](./sap-hana-scale-out-standby-netapp-files-suse.md).
+## Linux Kernel Settings
+To successfully deploy SAP HANA on ANF, Linux kernel settings need to be implemented according to SAP note [3024346](https://launchpad.support.sap.com/#/notes/3024346).
+
+For systems using high availability (HA) with Pacemaker and Azure Load Balancer, the following settings need to be implemented in the file /etc/sysctl.d/91-NetApp-HANA.conf:
+
+```
+net.core.rmem_max = 16777216
+net.core.wmem_max = 16777216
+net.ipv4.tcp_rmem = 4096 131072 16777216
+net.ipv4.tcp_wmem = 4096 16384 16777216
+net.core.netdev_max_backlog = 300000
+net.ipv4.tcp_slow_start_after_idle = 0
+net.ipv4.tcp_no_metrics_save = 1
+net.ipv4.tcp_moderate_rcvbuf = 1
+net.ipv4.tcp_window_scaling = 1
+net.ipv4.tcp_timestamps = 0
+net.ipv4.tcp_sack = 1
+```
+
+Systems running without Pacemaker and Azure Load Balancer should implement these settings in /etc/sysctl.d/91-NetApp-HANA.conf. The only difference from the HA settings is `net.ipv4.tcp_timestamps = 1`; with Azure Load Balancer in the path, TCP timestamps must be disabled (`net.ipv4.tcp_timestamps = 0`):
+
+```
+net.core.rmem_max = 16777216
+net.core.wmem_max = 16777216
+net.ipv4.tcp_rmem = 4096 131072 16777216
+net.ipv4.tcp_wmem = 4096 16384 16777216
+net.core.netdev_max_backlog = 300000
+net.ipv4.tcp_slow_start_after_idle = 0
+net.ipv4.tcp_no_metrics_save = 1
+net.ipv4.tcp_moderate_rcvbuf = 1
+net.ipv4.tcp_window_scaling = 1
+net.ipv4.tcp_timestamps = 1
+net.ipv4.tcp_sack = 1
+```
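+
+After saving the file, the settings can be activated without a reboot. A minimal sketch, assuming the file was created under /etc/sysctl.d/ as shown above:
+
+```
+# Reload all configuration files under /etc/sysctl.d/ (and /etc/sysctl.conf)
+sudo sysctl --system
+
+# Spot-check that a value was applied
+sysctl net.core.rmem_max
+```
+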
+ ## Deployment through Azure NetApp Files application volume group for SAP HANA (AVG) To deploy ANF volumes with proximity to your VM, a new functionality called Azure NetApp Files application volume group for SAP HANA (AVG) was developed. **The functionality is currently in public preview**. There's a series of articles that document the functionality. It's best to start with the article [Understand Azure NetApp Files application volume group for SAP HANA](../../../azure-netapp-files/application-volume-group-introduction.md). As you read the articles, it becomes clear that the usage of AVGs involves the usage of Azure proximity placement groups as well. Proximity placement groups are used by the new functionality to tie in with the volumes that are getting created. To ensure that over the lifetime of the HANA system, the VMs aren't going to be moved away from the ANF volumes, we recommend using a combination of Avset/PPG for each of the zones you deploy into. The order of deployment would look like:
virtual-machines Hana Vm Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-vm-operations.md
tags: azure-resource-manager
Previously updated : 06/06/2022 Last updated : 08/30/2022
Site-to-site connectivity via VPN or ExpressRoute is necessary for production sc
### Choose Azure VM types
-The Azure VM types that can be used for production scenarios are listed in the [SAP documentation for IAAS](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/iaas.html). For non-production scenarios, a wider variety of native Azure VM types is available.
+SAP lists which [Azure VM types that you can use for production scenarios](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;iaas;ve:24). For non-production scenarios, a wider variety of native Azure VM types is available.
>[!NOTE] > For non-production scenarios, use the VM types that are listed in the [SAP note #1928533](https://launchpad.support.sap.com/#/notes/1928533). For the usage of Azure VMs for production scenarios, check for SAP HANA certified VMs in the SAP published [Certified IaaS Platforms list](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120).
virtual-machines Sap Hana High Availability Netapp Files Red Hat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-high-availability-netapp-files-red-hat.md
For more information about the required ports for SAP HANA, read the chapter [Co
10.32.0.5 hanadb2 ```
-3. **[A]** Prepare the OS for running SAP HANA on Azure NetApp with NFS, as described in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/netapp-hana.conf* for the NetApp configuration settings.
+3. **[A]** Prepare the OS for running SAP HANA on Azure NetApp with NFS, as described in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/91-NetApp-HANA.conf* for the NetApp configuration settings.
<pre><code>
- vi /etc/sysctl.d/netapp-hana.conf
+ vi /etc/sysctl.d/91-NetApp-HANA.conf
# Add the following entries in the configuration file net.core.rmem_max = 16777216 net.core.wmem_max = 16777216
virtual-machines Sap Hana High Availability Netapp Files Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-high-availability-netapp-files-suse.md
For more information about the required ports for SAP HANA, read the chapter [Co
10.3.0.5 hanadb2 ```
-2.**[A]** Prepare the OS for running SAP HANA on Azure NetApp with NFS, as described in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/netapp-hana.conf* for the NetApp configuration settings.
+2.**[A]** Prepare the OS for running SAP HANA on Azure NetApp with NFS, as described in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/91-NetApp-HANA.conf* for the NetApp configuration settings.
```
- vi /etc/sysctl.d/netapp-hana.conf
+ vi /etc/sysctl.d/91-NetApp-HANA.conf
# Add the following entries in the configuration file net.core.rmem_max = 16777216 net.core.wmem_max = 16777216
virtual-machines Sap Hana High Availability Scale Out Hsr Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-high-availability-scale-out-hsr-rhel.md
Configure and prepare your operating system by doing the following:
10.23.1.207 hana-s2-db3-hsr ```
-1. **[A]** Prepare the operating system for running SAP HANA. For more information, see SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/netapp-hana.conf* for the Azure NetApp Files configuration settings.
+1. **[A]** Prepare the operating system for running SAP HANA. For more information, see SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/91-NetApp-HANA.conf* for the Azure NetApp Files configuration settings.
<pre><code>
- vi /etc/sysctl.d/netapp-hana.conf
+ vi /etc/sysctl.d/91-NetApp-HANA.conf
# Add the following entries in the configuration file net.core.rmem_max = 16777216 net.core.wmem_max = 16777216
virtual-machines Sap Hana High Availability Scale Out Hsr Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-high-availability-scale-out-hsr-suse.md
Configure and prepare your OS by doing the following steps:
10.23.1.201 hana-s2-db3-hsr ```
-3. **[A]** Prepare the OS for running SAP HANA on NetApp Systems with NFS, as described in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/netapp-hana.conf* for the NetApp configuration settings.
+3. **[A]** Prepare the OS for running SAP HANA on NetApp Systems with NFS, as described in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/91-NetApp-HANA.conf* for the NetApp configuration settings.
<pre><code>
- vi /etc/sysctl.d/netapp-hana.conf
+ vi /etc/sysctl.d/91-NetApp-HANA.conf
# Add the following entries in the configuration file net.core.rmem_max = 16777216 net.core.wmem_max = 16777216
virtual-machines Sap Hana Scale Out Standby Netapp Files Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-rhel.md
Configure and prepare your OS by doing the following steps:
yum install nfs-utils </code></pre>
-3. **[A]** Prepare the OS for running SAP HANA on Azure NetApp with NFS, as described in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/netapp-hana.conf* for the NetApp configuration settings.
+3. **[A]** Prepare the OS for running SAP HANA on Azure NetApp with NFS, as described in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/91-NetApp-HANA.conf* for the NetApp configuration settings.
<pre><code>
- vi /etc/sysctl.d/netapp-hana.conf
+ vi /etc/sysctl.d/91-NetApp-HANA.conf
# Add the following entries in the configuration file net.core.rmem_max = 16777216 net.core.wmem_max = 16777216
virtual-machines Sap Hana Scale Out Standby Netapp Files Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse.md
Configure and prepare your OS by doing the following steps:
Reboot the VM to activate the changes.
-3. **[A]** Prepare the OS for running SAP HANA on NetApp Systems with NFS, as described in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/netapp-hana.conf* for the NetApp configuration settings.
+3. **[A]** Prepare the OS for running SAP HANA on NetApp Systems with NFS, as described in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/91-NetApp-HANA.conf* for the NetApp configuration settings.
<pre><code>
- vi /etc/sysctl.d/netapp-hana.conf
+ vi /etc/sysctl.d/91-NetApp-HANA.conf
# Add the following entries in the configuration file net.core.rmem_max = 16777216 net.core.wmem_max = 16777216
virtual-network-manager Concept Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-deployments.md
In this article, you'll learn about how configurations are applied to your netwo
*Deployment* is the method Azure Virtual Network Manager uses to apply configurations to your virtual networks in network groups. Configurations won't take effect until they're deployed. When a deployment request is sent to Azure Virtual Network Manager, it will calculate the [goal state](#goalstate) of all resources under your network manager in that region. Goal state is a combination of deployed configurations and network group membership. Network manager will then apply the necessary changes to your infrastructure.
-When committing a deployment, you select the region(s) to which the configuration will be applied. The deployed configuration is also static. Once deployed, you can edit your configurations freely without impacting your deployed setup. Applying any of these new changes will take another deployment. The changes reprocess the entire region and can take a few minutes depending on how large the configuration is. However, Changes to network groups will take effect without the need for redeployment. This includes adding or removing group members directly, or configuring an Azure Policy resource. Safe deployment practices recommend gradually rolling out changes on a per-region basis.
+When committing a deployment, you select the region(s) to which the configuration will be applied. The deployed configuration is also static. Once deployed, you can edit your configurations freely without impacting your deployed setup. Applying any of these new changes takes another deployment. The changes reprocess the entire region and can take a few minutes depending on how large the configuration is. Two factors determine how quickly configurations are applied:
+- The time to apply the configuration, which is a few minutes.
+- The time to get notification of what is in a network group, which can vary.
+
+For static members, notification is immediate. For dynamic members where the scope is fewer than 1000 subscriptions, it takes a few minutes. In environments with over 1000 subscriptions, the notification mechanism works in a 24-hour window. Once the policy is deployed, subsequent commits are faster. However, changes to network groups take effect without the need for redeployment. This includes adding or removing group members directly, or configuring an Azure Policy resource. Safe deployment practices recommend gradually rolling out changes on a per-region basis.
+
+AVNM applies the configuration to the virtual networks in the network group. Even if your network group consists of dynamic members from more than 1000 subscriptions, once AVNM has been notified of the group's membership, the configuration is applied in a few minutes.
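+
+A deployment commit can also be scripted. The following is a hedged sketch, assuming the `virtual-network-manager` Azure CLI extension is installed; resource names and the configuration ID are placeholders, and parameters may differ by extension version:
+
+```azurecli
+# Commit (deploy) a connectivity configuration to a target region.
+az network manager post-commit \
+    --resource-group myResourceGroup \
+    --network-manager-name myAVNM \
+    --commit-type "Connectivity" \
+    --configuration-ids "<connectivity-configuration-resource-id>" \
+    --target-locations "westus"
+```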
## Deployment status When you commit a configuration deployment, the API does a POST operation. Once the deployment request has been made, Azure Virtual Network Manager will calculate the goal state of your networks in the deployed regions and request the underlying infrastructure to make the changes. You can see the deployment status on the *Deployment* page of the Virtual Network Manager.
virtual-network Create Vm Dual Stack Ipv6 Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-vm-dual-stack-ipv6-portal.md
In this article, you'll create a virtual machine in Azure with the Azure portal.
In this section, you'll create a dual-stack virtual network for the virtual machine.
-1. Sign-in to the [Azure portal](https://https://portal.azure.com).
+1. Sign-in to the [Azure portal](https://portal.azure.com).
2. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
virtual-network Virtual Network Multiple Ip Addresses Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-multiple-ip-addresses-cli.md
This article explains how to create a virtual machine (VM) through the Azure Res
The steps that follow explain how to create an example virtual machine with multiple IP addresses, as described in the scenario. Change variable values in "" and IP address types, as required, for your implementation. 1. Install the [Azure CLI](/cli/azure/install-azure-cli) if you don't already have it installed.
-2. Create an SSH public and private key pair for Linux VMs by completing the steps in the [Create an SSH public and private key pair for Linux VMs](../../virtual-machines/linux/mac-create-ssh-keys.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
+2. Create an SSH public and private key pair for Linux VMs by completing the steps in the [Create an SSH public and private key pair for Linux VMs](../../virtual-machines/linux/mac-create-ssh-keys.md?toc=/azure/virtual-network/toc.json).
3. From a command shell, login with the command `az login` and select the subscription you're using. 4. Create the VM by executing the script that follows on a Linux or Mac computer. The script creates a resource group, one virtual network (VNet), one NIC with three IP configurations, and a VM with the two NICs attached to it. The NIC, public IP address, virtual network, and VM resources must all exist in the same location and subscription. Though the resources don't all have to exist in the same resource group, in the following script they do.
az vm create \
In addition to creating a VM with a NIC with 3 IP configurations, the script creates: -- A single premium managed disk by default, but you have other options for the disk type you can create. Read the [Create a Linux VM using the Azure CLI](../../virtual-machines/linux/quick-create-cli.md?toc=%2fazure%2fvirtual-network%2ftoc.json) article for details.
+- A single premium managed disk by default, but you have other options for the disk type you can create. Read the [Create a Linux VM using the Azure CLI](../../virtual-machines/linux/quick-create-cli.md?toc=/azure/virtual-network/toc.json) article for details.
- A virtual network with one subnet and two public IP addresses. Alternatively, you can use *existing* virtual network, subnet, NIC, or public IP address resources. To learn how to use existing network resources rather than creating additional resources, enter `az vm create -h`. Public IP addresses have a nominal fee. To learn more about IP address pricing, read the [IP address pricing](https://azure.microsoft.com/pricing/details/ip-addresses) page. There is a limit to the number of public IP addresses that can be used in a subscription. To learn more about the limits, read the [Azure limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits) article.
Add the private IP addresses to the VM operating system by completing the steps
You can add additional private and public IP addresses to an existing Azure network interface by completing the steps that follow. The examples build upon the scenario described in this article.
-1. Open a command shell and complete the remaining steps in this section within a single session. If you don't already have Azure CLI installed and configured, complete the steps in the [Azure CLI installation](/cli/azure/install-az-cli2?toc=%2fazure%2fvirtual-network%2ftoc.json) article and login to your Azure account with the `az-login` command.
+1. Open a command shell and complete the remaining steps in this section within a single session. If you don't already have Azure CLI installed and configured, complete the steps in the [Azure CLI installation](/cli/azure/install-az-cli2?toc=/azure/virtual-network/toc.json) article and login to your Azure account with the `az-login` command.
2. Complete the steps in one of the following sections, based on your requirements:
virtual-network Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-overview.md
Virtual Network NAT is a software defined networking service. A NAT gateway won'
* When NAT gateway is configured to a virtual network where standard Load balancer with outbound rules already exists, NAT gateway will take over all outbound traffic moving forward. There will be no drops in traffic flow for existing connections on Load balancer. All new connections will use NAT gateway.
-* Presence of custom UDRs for virtual appliances and ExpressRoute override NAT gateway for directing internet bound traffic (route to the 0.0.0.0/0 address prefix). See [Troubleshooting NAT gateway](./troubleshoot-nat.md#virtual-appliance-udrs-and-expressroute-override-nat-gateway-for-routing-outbound-traffic) to learn more.
+* Presence of custom UDRs for virtual appliances and ExpressRoute override NAT gateway for directing internet bound traffic (route to the 0.0.0.0/0 address prefix).
* The order of operations for outbound connectivity follows this order of precedence: Virtual appliance UDR / ExpressRoute >> NAT gateway >> Instance-level public IP addresses on virtual machines >> Load balancer outbound rules >> default system
Virtual appliance UDR / ExpressRoute >> NAT gateway >> Instance-level public IP
### NAT gateway timers
-* NAT gateway holds on to SNAT ports after a connection closes before it is available to reuse to connect to the same destination endpoint over the internet. SNAT port reuse timer durations vary depending on how the connection closes. To learn more, see [Port Reuse Timers](./nat-gateway-resource.md#port-reuse-timers).
+* NAT gateway holds on to SNAT ports after a connection closes before it is available to reuse to connect to the same destination endpoint over the internet. SNAT port reuse timer durations for TCP traffic vary depending on how the connection closes. To learn more, see [Port Reuse Timers](./nat-gateway-resource.md#port-reuse-timers).
* A default TCP idle timeout of 4 minutes is used and can be increased to up to 120 minutes. Any activity on a flow can also reset the idle timer, including TCP keepalives. To learn more, see [Idle Timeout Timers](./nat-gateway-resource.md#idle-timeout-timers).
-* UDP traffic has an idle timeout timer of 4 minutes that cannot be changed.
+* UDP traffic has an idle timeout timer of 4 minutes that cannot be changed.
+
+* UDP traffic has a port reset timer of 65 seconds, during which a port is held down before it's available for reuse to the same destination endpoint.
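+
+A minimal Azure CLI sketch of adjusting the TCP idle timeout, assuming a NAT gateway named `myNATgateway` in `myResourceGroup` (placeholder names):
+
+```azurecli
+# Raise the TCP idle timeout from the default 4 minutes to 10 minutes.
+az network nat gateway update \
+    --resource-group myResourceGroup \
+    --name myNATgateway \
+    --idle-timeout 10
+```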
## Pricing and SLA
For information on the SLA, see [SLA for Virtual Network NAT](https://azure.micr
* Learn about the [NAT gateway resource](./nat-gateway-resource.md).
-* [Learn module: Introduction to Azure Virtual Network NAT](/learn/modules/intro-to-azure-virtual-network-nat).
+* [Learn module: Introduction to Azure Virtual Network NAT](/learn/modules/intro-to-azure-virtual-network-nat).
virtual-network Troubleshoot Nat And Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/troubleshoot-nat-and-azure-services.md
+
+ Title: Troubleshoot outbound connectivity with Azure services
+
+description: Troubleshoot issues with Virtual Network NAT and Azure services.
++++ Last updated : 08/29/2022+++
+# Troubleshoot outbound connectivity with NAT gateway and Azure services
+
+This article provides guidance on how to troubleshoot connectivity issues when using NAT gateway with other Azure services, including:
+
+* [Azure App Services](#azure-app-services)
+
+* [Azure Kubernetes Service](#azure-kubernetes-service)
+
+* [Azure Firewall](#azure-firewall)
+
+* [Azure Databricks](#azure-databricks)
+
+## Azure App Services
+
+### Azure App Services regional virtual network integration turned off
+
+NAT gateway can be used with Azure app services to allow applications to make outbound calls from a virtual network. To use this integration between Azure app services and NAT gateway, regional virtual network integration must be enabled. See [how regional virtual network integration works](../../app-service/overview-vnet-integration.md#how-regional-virtual-network-integration-works) to learn more.
+
+To use NAT gateway with Azure App services, follow these steps:
+
+1. Ensure that your application(s) have virtual network integration configured, see [Enable virtual network integration](../../app-service/configure-vnet-integration-enable.md).
+
+2. Ensure that **Route All** is enabled for your virtual network integration, see [Configure virtual network integration routing](../../app-service/configure-vnet-integration-routing.md).
+
+3. Create a NAT gateway resource.
+
+4. Create a new public IP address or attach an existing public IP address in your network to NAT gateway.
+
+5. Assign NAT gateway to the same subnet being used for Virtual network integration with your application(s).
+
+To see step-by-step instructions on how to configure NAT gateway with virtual network integration, see [Configuring NAT gateway integration](../../app-service/networking/nat-gateway-integration.md#configuring-nat-gateway-integration)
+
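+As a hedged illustration of steps 3 through 5, the following Azure CLI sketch creates a NAT gateway and attaches it to the integration subnet (all resource names are placeholders):
+
+```azurecli
+# Create a Standard SKU public IP and a NAT gateway using it.
+az network public-ip create --resource-group myResourceGroup --name myNATgatewayIP --sku Standard
+az network nat gateway create --resource-group myResourceGroup --name myNATgateway --public-ip-addresses myNATgatewayIP
+
+# Attach the NAT gateway to the subnet used for regional virtual network integration.
+az network vnet subnet update --resource-group myResourceGroup --vnet-name myVNet --name integration-subnet --nat-gateway myNATgateway
+```
+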
+Important notes about the NAT gateway and Azure App Services integration:
+
+* Virtual network integration doesn't provide inbound private access to your app from the virtual network.
+
+* Because of the nature of how virtual network integration operates, the traffic from virtual network integration doesn't show up in Azure Network Watcher or NSG flow logs.
+
+### App services isn't using the NAT gateway public IP address to connect outbound
+
+App services can still connect outbound to the internet even if VNet integration isn't enabled. By default, apps that are hosted in App Service are accessible directly through the internet and can reach only internet-hosted endpoints. To learn more, see App Services Networking Features.
+
+If you notice that the IP address used to connect outbound isn't your NAT gateway public IP address or addresses, check that virtual network integration has been enabled. Ensure the NAT gateway is configured to the subnet used for integration with your application(s).
+
+To validate that web applications are using the NAT gateway public IP, ping a virtual machine on your Web Apps and check the traffic via a network capture.
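+
+As a quick, hedged check, you can also call an external IP-echo service from the app's SSH or Kudu console (`ifconfig.me` is a third-party service used here only for illustration):
+
+```bash
+# The returned address should match the NAT gateway public IP when
+# virtual network integration and Route All are enabled.
+curl https://ifconfig.me
+```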
+
+## Azure Kubernetes Service
+
+### How to deploy NAT gateway with AKS clusters
+
+NAT gateway can be deployed with AKS clusters in order to allow for explicit outbound connectivity. There are two different ways to deploy NAT gateway with AKS clusters:
+
+1. **Managed NAT gateway**: NAT gateway is provisioned by Azure at the time of the AKS cluster creation and managed by AKS.
+
+2. **User-Assigned NAT gateway**: NAT gateway is provisioned by you to an existing virtual network for the AKS cluster.
+
+Learn more at [Managed NAT Gateway](/azure/aks/nat-gateway).
+
+### Can't update my NAT gateway IPs or idle timeout timer for an AKS cluster
+
+Public IP addresses and the idle timeout timer for NAT gateway can be updated with the `az aks update` command for a managed NAT gateway only.
+
+If you've provisioned a User-Assigned NAT gateway to your AKS subnets, then you can't use the az aks update command to update public IP addresses or the idle timeout timer. A User-Assigned NAT gateway is managed by the user rather than by AKS. You'll need to update these configurations manually on your NAT gateway resource.
+
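+For a managed NAT gateway, a hedged sketch of such an update (cluster and resource group names are placeholders; the flags assume the cluster was created with the managed NAT gateway outbound type):
+
+```azurecli
+# Update the managed outbound IP count and idle timeout for an AKS-managed NAT gateway.
+az aks update \
+    --resource-group myResourceGroup \
+    --name myAKSCluster \
+    --nat-gateway-managed-outbound-ip-count 3 \
+    --nat-gateway-idle-timeout 4
+```
+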
+Update your public IP addresses on your User-Assigned NAT gateway with the following steps:
+
+1. In your resource group, select your NAT gateway resource in the portal.
+
+2. Under **Settings** on the left-hand navigation bar, select **Outbound IP**.
+
+3. To manage your public IP addresses, select the blue **Change**.
+
+4. From the **Manage public IP addresses and prefixes** configuration that slides in from the right, update your assigned public IPs from the drop-down menu or select **Create a new public IP address**.
+
+5. Once you're done updating your IP configurations, select the **OK** button at the bottom of the screen.
+
+6. After the configuration page disappears, select the **Save** button to save your changes.
+
+7. Use steps 3 - 6 to do the same for public IP prefixes.
+
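+The same public IP update can likely be made in a single CLI call; a minimal sketch with placeholder names (the idle timeout described below can be changed the same way with `--idle-timeout`):
+
+```azurecli
+# Replace the set of public IPs assigned to a user-assigned NAT gateway.
+az network nat gateway update \
+    --resource-group myResourceGroup \
+    --name myNATgateway \
+    --public-ip-addresses myNATgatewayIP1 myNATgatewayIP2
+```
+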
+Update your idle timeout timer configuration on your User-Assigned NAT gateway with the following steps:
+
+1. In your resource group, select your NAT gateway resource in the portal.
+
+2. Under **Settings** on the left-hand navigation bar, select **Configuration**.
+
+3. In the **TCP idle timeout (minutes)** text box, adjust the idle timeout timer (the timer can be configured between 4 and 120 minutes).
+
+4. Select the **Save** button when you're done.
+
+>[!Note]
+>Increasing the TCP idle timeout timer to longer than 4 minutes can increase the risk of SNAT port exhaustion. For more information, see [timer considerations](/azure/virtual-network/nat-gateway/nat-gateway-resource#timer-considerations).
+
+## Azure Firewall
+
+### How NAT gateway integration with Azure Firewall works
+
+Azure Firewall can provide outbound connectivity to the internet from virtual networks, but it provides only 2,496 SNAT ports per public IP address. While Azure Firewall can be associated with up to 250 public IP addresses to handle egress traffic, customers often need far fewer public IP addresses for connecting outbound, because of architectural requirements and because destination endpoints limit the number of public IP addresses they can allowlist. One way to work around this allowlist limitation, and also to reduce the risk of SNAT port exhaustion, is to use NAT gateway in the same subnet as Azure Firewall. To learn how to set up NAT gateway in an Azure Firewall subnet, see [Scale SNAT ports with Azure Virtual Network NAT](/azure/firewall/integrate-with-nat-gateway).
+
+## Azure Databricks
+
+### How to use NAT gateway to connect outbound from a databricks cluster
+
+NAT gateway can be used to connect outbound from your databricks cluster when you create your Databricks workspace. NAT gateway can be deployed to your databricks cluster in one of two ways:
+
+1. By enabling [Secure Cluster Connectivity (No Public IP)](/azure/databricks/security/secure-cluster-connectivity#use-secure-cluster-connectivity) on the default virtual network that Azure Databricks creates, NAT gateway will automatically be deployed for connecting outbound from your workspace's subnets to the internet. This NAT gateway resource is created within the managed resource group managed by Azure Databricks. You can't modify this resource group or any other resources provisioned in it.
+
+2. After deploying Azure Databricks workspace in your own VNet (via [VNet injection](/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject)), you can deploy and configure NAT gateway to both of your workspace's subnets to ensure outbound connectivity through the NAT gateway. You can implement this solution using an [Azure template](/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject#advanced-configuration-using-azure-resource-manager-templates) or in the portal.
+
+## Next steps
+
+We're always looking to improve the experience of our customers. If you're experiencing issues with NAT gateway that aren't listed or resolved by this article, submit feedback through GitHub via the bottom of this page. We'll address your feedback as soon as possible.
+
+To learn more about NAT gateway, see:
+
+* [Virtual Network NAT](/azure/virtual-network/nat-gateway/nat-overview)
+
+* [NAT gateway resource](/azure/virtual-network/nat-gateway/nat-gateway-resource)
+
+* [Metrics and alerts for NAT gateway resources](/azure/virtual-network/nat-gateway/nat-metrics)
++
virtual-network Troubleshoot Nat Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/troubleshoot-nat-connectivity.md
+
+ Title: Troubleshoot Azure Virtual Network NAT connectivity
+
+description: Troubleshoot connectivity issues with Virtual Network NAT.
++++ Last updated : 08/29/2022+++
+# Troubleshoot Azure Virtual Network NAT connectivity
+
+This article provides guidance on how to troubleshoot and resolve common outbound connectivity issues with your NAT gateway resource. This article also provides guidance on best practices for designing applications to use outbound connections efficiently.
+
+## SNAT exhaustion due to NAT gateway configuration
+
+Common SNAT exhaustion issues with NAT gateway typically have to do with the configurations on the NAT gateway. Common SNAT exhaustion issues include:
+
+* Outbound connectivity on NAT gateway not scaled out with enough public IP addresses.
+
+* NAT gateway's configurable TCP idle timeout timer is set higher than the default value of 4 minutes.
+
+### Outbound connectivity not scaled out enough
+
+Each public IP address provides 64,512 SNAT ports to subnets attached to NAT gateway. From those available SNAT ports, NAT gateway can support up to 50,000 concurrent connections to the same destination endpoint. If outbound connections are dropping because SNAT ports are being exhausted, then NAT gateway may not be scaled out enough to handle the workload. More public IP addresses may need to be added to NAT gateway in order to provide more SNAT ports for outbound connectivity.
+
+The table below describes two common scenarios in which outbound connectivity may not be scaled out enough and how to validate and mitigate these issues:
+
+| Scenario | Evidence |Mitigation |
+||||
+| You're experiencing contention for SNAT ports and SNAT port exhaustion during periods of high usage. | You run the following [metrics](nat-metrics.md) in Azure Monitor: **Total SNAT Connection**: "Sum" aggregation shows high connection volume. "Failed" connection state shows transient or persistent failures over time. **Dropped Packets**: "Sum" aggregation shows packets dropping consistent with high connection volume. | Determine if you can add more public IP addresses or public IP prefixes. This addition will allow for up to 16 IP addresses in total on your NAT gateway. This addition will provide more inventory for available SNAT ports (64,512 per IP address) and allow you to scale your scenario further. |
+| You've already assigned 16 IP addresses and are still experiencing SNAT port exhaustion. | Attempts to add more IP addresses fail. The total number of IP addresses from public IP address resources or public IP prefix resources exceeds 16. | Distribute your application environment across multiple subnets and provide a NAT gateway resource for each subnet. |
+
+>[!NOTE]
+>It is important to understand why SNAT exhaustion occurs. Make sure you are using the right patterns for scalable and reliable scenarios. Adding more SNAT ports to a scenario without understanding the cause of the demand should be a last resort. If you do not understand why your scenario is applying pressure on SNAT port inventory, adding more SNAT ports to the inventory by adding more IP addresses will only delay the same exhaustion failure as your application scales. You may be masking other inefficiencies and anti-patterns.
+
+### TCP idle timeout timers set higher than the default value
+
+The NAT gateway TCP idle timeout timer is set to 4 minutes by default but is configurable up to 120 minutes. If this setting is changed to a higher value than the default, NAT gateway will hold on to flows longer, and can create [extra pressure on SNAT port inventory](/azure/virtual-network/nat-gateway/nat-gateway-resource#timers). The table below describes a common scenario in which a high TCP idle timeout may be causing SNAT exhaustion and provides possible mitigation steps to take:
+
+| Scenario | Evidence | Mitigation |
+||||
+| You want to ensure that TCP connections stay active for long periods of time without idle time-out. You increase the TCP idle timeout timer setting. After a period of time, you start to notice that connection failures occur more often. You suspect that you may be exhausting your inventory of SNAT ports since connections are holding on to them longer. | You check the following [NAT gateway metrics](nat-metrics.md) in Azure Monitor to determine if SNAT port exhaustion is happening: **Total SNAT Connection**: "Sum" aggregation shows high connection volume. "Failed" connection state shows transient or persistent failures over time. **Dropped Packets**: "Sum" aggregation shows packets dropping consistent with high connection volume. | You have a few possible mitigation steps that you can take to resolve SNAT port exhaustion: </br></br> **Reduce the TCP idle timeout** to a lower value to free up SNAT port inventory earlier. The TCP idle timeout timer can't be set lower than 4 minutes. </br></br> Consider **[asynchronous polling patterns](/azure/architecture/patterns/async-request-reply)** to free up connection resources for other operations. </br></br> **Use TCP keepalives or application layer keepalives** to avoid intermediate systems timing out. For examples, see [.NET examples](/dotnet/api/system.net.servicepoint.settcpkeepalive). </br></br> For connections to Azure PaaS services, use **[Private Link](../../private-link/private-link-overview.md)**. Private Link eliminates the need to use public IPs of your NAT gateway, which frees up more SNAT ports for outbound connections to the internet. |
+
+## Connection failures due to idle timeouts
+
+### TCP idle timeout
+
+As described in the [TCP timers](#tcp-idle-timeout-timers-set-higher-than-the-default-value) section above, TCP keepalives should be used instead to refresh idle flows and reset the idle timeout. TCP keepalives only need to be enabled from one side of a connection in order to keep a connection alive from both sides. When a TCP keepalive is sent from one side of a connection, the other side automatically sends an ACK packet. The idle timeout timer is then reset on both sides of the connection. To learn more, see [Timer considerations](/azure/virtual-network/nat-gateway/nat-gateway-resource#timer-considerations).
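+
+On Linux senders, OS-level TCP keepalive timing can be tuned so that keepalives fire well within the 4-minute idle timeout. A hedged sketch with illustrative values (applications must still enable `SO_KEEPALIVE` on their sockets for these settings to take effect):
+
+```bash
+# First keepalive after 120 seconds of idle, then a probe every 30 seconds,
+# giving up after 8 failed probes.
+sudo sysctl -w net.ipv4.tcp_keepalive_time=120
+sudo sysctl -w net.ipv4.tcp_keepalive_intvl=30
+sudo sysctl -w net.ipv4.tcp_keepalive_probes=8
+```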
+
+>[!Note]
+>Increasing the TCP idle timeout is a last resort and may not resolve the root cause. A long timeout can cause low-rate failures when timeout expires and introduce delay and unnecessary failures.
+
+### UDP idle timeout
+
+UDP idle timeout timers are set to 4 minutes. Unlike TCP idle timeout timers for NAT gateway, UDP idle timeout timers aren't configurable. The table below describes a common scenario in which connections drop because idle UDP traffic times out, and steps to mitigate the issue.
+
+| Scenario | Evidence | Mitigation |
+||||
+| You notice that UDP traffic is dropping connections that need to be maintained for long periods of time. | You check the following [NAT gateway metrics](nat-metrics.md) in Azure Monitor, **Dropped Packets**: "Sum" aggregation shows packets dropping consistent with high connection volume. | A few possible mitigation steps that can be taken: </br></br> **Enable UDP keepalives**. Keep in mind that when a UDP keepalive is enabled, it's only active for one direction in a connection. This behavior means that the connection can still time out from going idle on the other side of a connection. To prevent a UDP connection from idle time-out, UDP keepalives should be enabled for both directions in a connection flow. </br></br> **Application layer keepalives** can also be used to refresh idle flows and reset the idle timeout. Check the server side for what options exist for application specific keepalives. |
+
+## NAT gateway public IP not being used for outbound traffic
+
+### VMs hold on to prior SNAT IP with active connection after NAT gateway added to a virtual network
+
+[Virtual Network NAT gateway](nat-overview.md) supersedes outbound connectivity for a subnet. Migration from default SNAT or load balancer outbound SNAT to NAT gateway results in new connections immediately using the IP address(es) associated with the NAT gateway resource. If a virtual machine has an established connection during the migration, the connection continues to use the old SNAT IP address that was assigned when the connection was established.
+
+Test and resolve issues with VMs holding on to old SNAT IP addresses by:
+
+- Ensure you've established a new connection and that existing connections aren't being reused in the OS or because the browser is caching the connections. For example, when using curl in PowerShell, make sure to specify the -DisableKeepalive parameter to force a new connection. If you're using a browser, connections may also be pooled.
+
+- It isn't necessary to reboot a virtual machine in a subnet configured to NAT gateway. However, if a virtual machine is rebooted, the connection state is flushed. When the connection state has been flushed, all connections will begin using the NAT gateway resource's IP address(es). This behavior is a side effect of the virtual machine reboot and not an indicator that a reboot is required.
+
+If you're still having trouble, open a support case for further troubleshooting.
+
+### Virtual appliance UDRs and ExpressRoute override NAT gateway for routing outbound traffic
+
+When forced tunneling with a custom UDR is enabled to direct traffic to a virtual appliance or VPN through ExpressRoute, the UDR or ExpressRoute takes precedence over NAT gateway for directing internet bound traffic. To learn more, see [custom UDRs](../virtual-networks-udr-overview.md#custom-routes).
+
+The order of precedence for internet routing configurations is as follows:
+Virtual appliance UDR / ExpressRoute >> NAT gateway >> instance level public IP addresses >> outbound rules on Load balancer >> default system
+
+Test and resolve issues with a virtual appliance UDR or VPN ExpressRoute overriding your NAT gateway by:
+
+1. [Testing that the NAT gateway public IP](./quickstart-create-nat-gateway-portal.md#test-nat-gateway) is used for outbound traffic. If a different IP is being used, it could be because of a custom UDR, follow the remaining steps on how to check for and remove custom UDRs.
+
+2. Check for UDRs in the virtual network's route table, refer to [view route tables](../manage-route-table.md#view-route-tables).
+
+3. Remove the UDR from the route table by following [create, change, or delete an Azure route table](../manage-route-table.md#change-a-route-table).
+
+Once the custom UDR is removed from the routing table, the NAT gateway public IP should now take precedence in routing outbound traffic to the internet.
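+
+To confirm which route now takes effect, you can inspect the effective routes on a VM's network interface; a minimal sketch, assuming a NIC named `myVMNic` (placeholder):
+
+```azurecli
+# After removing the UDR, the 0.0.0.0/0 route should no longer point at the virtual appliance.
+az network nic show-effective-route-table \
+    --resource-group myResourceGroup \
+    --name myVMNic \
+    --output table
+```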
+
+### Private IPs are used to connect to Azure services by Private Link
+
+[Private Link](../../private-link/private-link-overview.md) connects your Azure virtual networks privately to Azure PaaS services such as Storage, SQL, or Cosmos DB over the Azure backbone network instead of over the internet. Private Link uses the private IP addresses of virtual machine instances in your virtual network to connect to these Azure platform services instead of the public IP of NAT gateway. As a result, when looking at the source IP address used to connect to these Azure services, you'll notice that the private IPs of your instances are used. See [Azure services listed here](../../private-link/availability.md) for all services supported by Private Link.
+
+To check which Private Endpoints you have set up with Private Link:
+
+1. From the Azure portal, search for Private Link in the search box.
+
+2. In the Private Link center, select Private Endpoints or Private Link services to see what configurations have been set up. For more information, see [Manage private endpoint connections](../../private-link/manage-private-endpoint.md#manage-private-endpoint-connections-on-azure-paas-resources).
+
+Service endpoints can also be used to connect your virtual network to Azure PaaS services. To check if you have service endpoints configured for your virtual network:
+
+1. From the Azure portal, navigate to your virtual network and select "Service endpoints" from Settings.
+
+2. All Service endpoints created will be listed along with which subnets they're configured. For more information, see [logging and troubleshooting Service endpoints](../virtual-network-service-endpoints-overview.md#logging-and-troubleshooting).
+
+>[!NOTE]
+>Private Link is the recommended option over Service endpoints for private access to Azure hosted services.
+
+## Connection failures at the public internet destination
+
+Connection failures at the internet destination endpoint could be due to multiple possible factors. Factors that can affect connectivity success are:
+
+* Firewall or other traffic management components at the destination.
+
+* API rate limiting imposed by the destination side.
+
+* Volumetric DDoS mitigations or transport layer traffic shaping.
+
+Use NAT gateway [metrics](nat-metrics.md) in Azure Monitor to diagnose connection issues:
+
+* Look at packet count at the source and the destination (if available) to determine how many connection attempts were made.
+
+* Look at dropped packets to see how many packets were dropped by NAT gateway.
+
+What else to check for:
+
+* Check for [SNAT exhaustion](#snat-exhaustion-due-to-nat-gateway-configuration).
+
+* Validate connectivity to an endpoint in the same region or elsewhere for comparison.
+
+* If you're running high-volume or high-transaction-rate testing, explore whether reducing the rate reduces the occurrence of failures.
+
+* If changing rate impacts the rate of failures, check if API rate limits, or other constraints on the destination side might have been reached.
+
+### Extra network captures
+
+If your investigation is inconclusive, open a support case for further troubleshooting and collect the following information for a quicker resolution. Choose a single virtual machine in your NAT gateway configured subnet to perform the following tests:
+
+* Use **`psping`** (from the Sysinternals PsPing utility) from one of the backend VMs within the virtual network to test the probe port response (example: **`psping 10.0.0.4:3389`**) and record results.
+
+* If no response is received in these ping tests, run a simultaneous Netsh trace on the backend VM and the virtual network test VM while you run PsPing, then stop the Netsh trace.
+
+## Best practices for efficient use of outbound connections
+
+Azure monitors and operates its infrastructure with great care. However, transient failures can still occur in deployed applications, and there's no guarantee that transmissions are lossless. NAT gateway is the preferred option for connecting outbound from Azure deployments to ensure highly reliable and resilient outbound connectivity. In addition to using NAT gateway to connect outbound, use the guidance below for extra steps you can take to ensure that applications use connections efficiently.
+
+### Modify the application to use connection pooling
+
+When you pool your connections, you avoid opening new network connections for calls to the same address and port. You can implement a connection pooling scheme in your application where requests are internally distributed across a fixed set of connections and reused when possible. This setup constrains the number of SNAT ports in use and creates a predictable environment. Connection pooling helps reduce latency and resource utilization and ultimately improve the performance of your applications.
+
+To learn more on pooling HTTP connections, see [Pool HTTP connections](/aspnet/core/performance/performance-best-practices#pool-http-connections-with-httpclientfactory) with HttpClientFactory.
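+
+As an illustration, a minimal pooling sketch in Python using the `requests` package (the endpoint and pool sizes are placeholders):
+
+```python
+import requests
+from requests.adapters import HTTPAdapter
+
+session = requests.Session()
+# Cap the pool so the number of outbound connections (and SNAT ports) stays bounded.
+session.mount("https://", HTTPAdapter(pool_connections=4, pool_maxsize=20))
+
+for i in range(100):
+    # Requests to the same host reuse pooled TCP connections instead of opening new ones.
+    session.get(f"https://example.com/api/items/{i}")
+```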
+
+### Modify the application to reuse connections
+
+Rather than generating individual, atomic TCP connections for each request, configure your application to reuse connections. Connection reuse results in more performant TCP transactions and is especially relevant for protocols like HTTP/1.1, where connection reuse is the default. This reuse applies to other protocols that use HTTP as their transport such as REST.
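+
+A minimal sketch of connection reuse with the Python standard library (the host and paths are placeholders):
+
+```python
+import http.client
+
+# One persistent HTTP/1.1 connection serves many requests, instead of a new
+# TCP connection (and a new SNAT port) for every request.
+conn = http.client.HTTPSConnection("example.com")
+for i in range(3):
+    conn.request("GET", f"/api/items/{i}")
+    response = conn.getresponse()
+    response.read()  # drain the body so the connection can be reused
+conn.close()
+```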
+
+### Modify the application to use less aggressive retry logic
+
+When SNAT ports are exhausted or application failures occur, aggressive or brute-force retries without delay and back-off logic cause exhaustion to occur or persist. You can reduce demand for SNAT ports by using less aggressive retry logic.
+
+Depending on the configured idle timeout, if retries are too aggressive, connections may not have enough time to close and release SNAT ports for reuse.
+
+For extra guidance and examples, see [Retry pattern](/azure/app-service/troubleshoot-intermittent-outbound-connection-errors).
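+
+A minimal sketch of backoff-with-jitter retry logic in Python (the exception type and delay values are placeholders to adapt to your client):
+
+```python
+import random
+import time
+
+class TransientError(Exception):
+    """Placeholder for whatever transient failure your client raises."""
+
+def call_with_backoff(operation, max_attempts=5, base_delay=1.0, max_delay=30.0):
+    """Retry with exponential backoff and jitter instead of immediate retries."""
+    for attempt in range(max_attempts):
+        try:
+            return operation()
+        except TransientError:
+            if attempt == max_attempts - 1:
+                raise
+            delay = min(max_delay, base_delay * (2 ** attempt))
+            time.sleep(delay * random.uniform(0.5, 1.5))
+```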
+
+### Use keepalives to reset the outbound idle timeout
+
+For more information about keepalives, see [TCP idle timeout timers set higher than the default value](#tcp-idle-timeout-timers-set-higher-than-the-default-value).
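+
+A minimal sketch of enabling TCP keepalives on a socket in Python; the three interval options are Linux-specific, and the values are placeholders chosen to stay under the 4-minute default idle timeout:
+
+```python
+import socket
+
+sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
+# Linux-specific: probe after 120s idle, then every 30s, give up after 4 misses.
+sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 120)
+sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30)
+sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)
+sock.connect(("example.com", 443))
+```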
+
+### Use Private Link to reduce SNAT port usage for connecting to other Azure services
+
+When possible, Private Link should be used to connect directly from your virtual networks to Azure platform services in order to [reduce the demand](/azure/virtual-network/nat-gateway/troubleshoot-nat#tcp-idle-timeout-timers-set-higher-than-the-default-value) on SNAT ports. Reducing the demand on SNAT ports can help reduce the risk of SNAT port exhaustion.
+
+To create a Private Link, see the following Quickstart guides to get started:
+
+* [Create a Private Endpoint](/azure/private-link/create-private-endpoint-portal?tabs=dynamic-ip)
+
+* [Create a Private Link](/azure/private-link/create-private-link-service-portal)
+
+## Next steps
+
+We're always looking to improve the experience of our customers. If you're experiencing issues with NAT gateway that aren't listed or resolved by this article, submit feedback through GitHub via the bottom of this page. We'll address your feedback as soon as possible.
+
+To learn more about NAT gateway, see:
+
+* [Virtual Network NAT](/azure/virtual-network/nat-gateway/nat-overview)
+
+* [NAT gateway resource](/azure/virtual-network/nat-gateway/nat-gateway-resource)
+
+* [Metrics and alerts for NAT gateway resources](/azure/virtual-network/nat-gateway/nat-metrics)
++
virtual-network Troubleshoot Nat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/troubleshoot-nat.md
Title: Troubleshoot Azure Virtual Network NAT connectivity
+ Title: Troubleshoot Azure Virtual Network NAT (NAT gateway)
description: Troubleshoot issues with Virtual Network NAT. -
-# Customer intent: As an IT administrator, I want to troubleshoot Virtual Network NAT.
+ - Previously updated : 05/20/2020 Last updated : 08/29/2022
-# Troubleshoot Azure Virtual Network NAT connectivity
+# Troubleshoot Azure Virtual Network NAT (NAT gateway)
-This article provides guidance on how to configure your NAT gateway to ensure outbound connectivity. This article also provides mitigating steps to resolve common configuration and connectivity issues with NAT gateway.
+This article provides guidance on how to correctly configure your NAT gateway and troubleshoot common configuration and deployment related issues.
-## Common connection issues with NAT gateway
+* [NAT gateway configuration basics](#nat-gateway-configuration-basics)
-* [Configuration issues with NAT gateway](#configuration-issues-with-nat-gateway)
-* [Configuration issues with your subnets and virtual network](#configuration-issues-with-subnets-and-virtual-networks-using-nat-gateway)
-* [SNAT exhaustion due to NAT gateway configuration](#snat-exhaustion-due-to-nat-gateway-configuration)
-* [Connection failures due to idle timeouts](#connection-failures-due-to-idle-timeouts)
-* [Connection issues with NAT gateway and integrated services](#connection-issues-with-nat-gateway-and-integrated-services)
-* [NAT gateway public IP not being used for outbound traffic](#nat-gateway-public-ip-not-being-used-for-outbound-traffic)
-* [Connection failures in the Azure infrastructure](#connection-failures-in-the-azure-infrastructure)
-* [Connection failures outside of the Azure infrastructure](#connection-failures-outside-of-the-azure-infrastructure)
+* [NAT gateway in a failed state](#nat-gateway-in-a-failed-state)
-## Configuration issues with NAT gateway
+* [Add or remove NAT gateway](#add-or-remove-nat-gateway)
-### NAT gateway configuration basics
+* [Add or remove subnet](#add-or-remove-subnet)
+
+* [Add or remove public IPs](#add-or-remove-public-ip-addresses)
+
+## NAT gateway configuration basics
Check the following configurations to ensure that NAT gateway can be used to direct traffic outbound (a programmatic check is sketched after this list):

1. At least one public IP address or one public IP prefix is attached to NAT gateway. At least one public IP address must be associated with the NAT gateway for it to provide outbound connectivity.
-2. At least one subnet is attached to a NAT gateway. You can attach multiple subnets to a NAT gateway for going outbound, but those subnets must exist within the same virtual network. NAT gateway cannot span beyond a single virtual network.
-3. No [NSG rules](../network-security-groups-overview.md#outbound) or [UDRs](#virtual-appliance-udrs-and-expressroute-override-nat-gateway-for-routing-outbound-traffic) are blocking NAT gateway from directing traffic outbound to the internet.
+
+2. At least one subnet is attached to a NAT gateway. You can attach multiple subnets to a NAT gateway for going outbound, but those subnets must exist within the same virtual network. NAT gateway can't span beyond a single virtual network.
+
+3. No [NSG rules](../network-security-groups-overview.md#outbound) or UDRs are blocking NAT gateway from directing traffic outbound to the internet.
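+
+A minimal sketch that checks items 1 and 2 programmatically, assuming the `azure-identity` and `azure-mgmt-network` packages (the resource names are placeholders):
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.network import NetworkManagementClient
+
+client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
+nat = client.nat_gateways.get("<resource-group>", "<nat-gateway-name>")
+
+# Check 1: at least one public IP address or prefix is attached.
+print("Public IP or prefix attached:", bool(nat.public_ip_addresses or nat.public_ip_prefixes))
+# Check 2: at least one subnet is attached.
+print("Subnet attached:", bool(nat.subnets))
+```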
### How to validate connectivity
-[Virtual Network NAT gateway](./nat-overview.md#virtual-network-nat-basics) supports IPv4 UDP and TCP protocols. ICMP is not supported and is expected to fail.
+[Virtual Network NAT gateway](./nat-overview.md#virtual-network-nat-basics) supports IPv4 UDP and TCP protocols. ICMP isn't supported and is expected to fail.
To validate end-to-end connectivity of NAT gateway, follow these steps:
1. Validate that your [NAT gateway public IP address is being used](./quickstart-create-nat-gateway-portal.md#test-nat-gateway).
2. Conduct TCP connection tests and UDP-specific application layer tests.
3. Look at NSG flow logs to analyze outbound traffic flows from NAT gateway.

Refer to the table below for which tools to use to validate NAT gateway connectivity.
Refer to the table below for which tools to use to validate NAT gateway connecti
| Linux | nc (generic connection test) | curl (TCP application layer test) | application specific |
| Windows | [PsPing](/sysinternals/downloads/psping) | PowerShell [Invoke-WebRequest](/powershell/module/microsoft.powershell.utility/invoke-webrequest) | application specific |
-To analyze outbound traffic from NAT gateway, use NSG flow logs. NSG flow logs provide information on when a connection from your virtual network takes place, from where (source IP and port) to which destination (destination IP and port) along with the state of the connection, the traffic flow direction and size of the traffic (packets and bytes sent).
-* To learn more about NSG flow logs, see [NSG flow log overview](../../network-watcher/network-watcher-nsg-flow-logging-overview.md).
-* For guides on how to enable NSG flow logs, see [Enabling NSG flow logs](../../network-watcher/network-watcher-nsg-flow-logging-overview.md#enabling-nsg-flow-logs).
-* For guides on how to read NSG flow logs, see [Working with NSG flow logs](../../network-watcher/network-watcher-nsg-flow-logging-overview.md#working-with-flow-logs).
+### How to analyze outbound connectivity
-## Configuration issues with subnets and virtual networks using NAT gateway
+To analyze outbound traffic from NAT gateway, use NSG flow logs. NSG flow logs provide connection information for your virtual machines: the source IP and port, the destination IP and port, and the state of the connection. The traffic flow direction and the size of the traffic (number of packets and bytes sent) are also logged.
-### Basic SKU resources cannot exist in the same subnet as NAT gateway
+* To learn more about NSG flow logs, see [NSG flow log overview](../../network-watcher/network-watcher-nsg-flow-logging-overview.md).
-NAT gateway is not compatible with basic resources, such as Basic Load Balancer or Basic Public IP. Basic resources must be placed on a subnet not associated with a NAT Gateway. Basic Load Balancer and Basic Public IP can be upgraded to standard to work with NAT gateway.
-* To upgrade a basic load balancer to standard, see [upgrade from basic public to standard public load balancer](../../load-balancer/upgrade-basic-standard.md).
-* To upgrade a basic public IP to standard, see [upgrade from basic public to standard public IP](../ip-services/public-ip-upgrade-portal.md).
+* For guides on how to enable NSG flow logs, see [Enabling NSG flow logs](../../network-watcher/network-watcher-nsg-flow-logging-overview.md#enabling-nsg-flow-logs).
-### NAT gateway cannot be attached to a gateway subnet
+* For guides on how to read NSG flow logs, see [Working with NSG flow logs](../../network-watcher/network-watcher-nsg-flow-logging-overview.md#working-with-flow-logs).
-NAT gateway cannot be deployed in a gateway subnet. A gateway subnet is used by Virtual network (VPN) gateway for sending encrypted traffic over the internet between an Azure virtual network and on-premises location or between Azure virtual networks over the Microsoft network. See [VPN gateway overview](../../vpn-gateway/vpn-gateway-about-vpngateways.md) to learn more about how gateway subnets are used by VPN gateway.
+## NAT gateway in a failed state
-### IPv6 coexistence
+You may experience outbound connectivity failure if your NAT gateway resource is in a failed state. To get your NAT gateway out of a failed state, follow these instructions:
-[Virtual Network NAT gateway](nat-overview.md) supports IPv4 UDP and TCP protocols. NAT gateway cannot be associated to an IPv6 Public IP address or IPv6 Public IP Prefix. NAT gateway can be deployed on a dual stack subnet, but will still only use IPv4 Public IP addresses for directing outbound traffic. Deploy NAT gateway on a dual stack subnet when you need IPv6 resources to exist in the same subnet as IPv4 resources.
+1. Go to [Azure Resource Explorer](https://resources.azure.com/) and locate the resource that's in a failed state.
-### Cannot attach NAT gateway to a subnet that contains a VM NIC in a failed state
+2. Set the toggle in the top right-hand corner to Read/Write.
-When you try to associate NAT gateway to a subnet that contains a virtual machine network interface (NIC) in a failed state, you will receive an error message indicating that this action cannot be performed. You must first get the VM NIC out of the failed state before you can attach NAT gateway to the subnet.
+3. Select Edit for the resource in the failed state.
-To troubleshoot NICs in a failed state, follow these steps
-1. Determine the provisioning state of your NICs using the [Get-AzNetworkInterface Powershell command](/powershell/module/az.network/get-aznetworkinterface#example-2-get-all-network-interfaces-with-a-specific-provisioning-state) and setting the value of the "provisioningState" to "Succeeded".
-2. Perform [GET/SET powershell commands](/powershell/module/az.network/set-aznetworkinterface#example-1-configure-a-network-interface) on the network interface to update the provisioning state.
-3. Check the results of this operation by checking the provisioining state of your NICs again (follow commands from step 1).
+4. Select PUT, followed by GET, to verify that the provisioning state was updated to Succeeded.
-## SNAT exhaustion due to NAT gateway configuration
+5. You can then proceed with other actions, because the resource is out of the failed state.
-Common SNAT exhaustion issues with NAT gateway typically have to do with the configurations on the NAT gateway. Common SNAT exhaustion issues include:
-* Outbound connectivity on NAT gateway not scaled out enough.
-* NAT gateway's configurable TCP idle timeout timer is set higher than the default value of 4 minutes.
+## Add or remove NAT gateway
-### Outbound connectivity not scaled out enough
+### Can't delete NAT gateway
-Each public IP address provides 64,512 SNAT ports to subnets attached to NAT gateway. From those available SNAT ports, NAT gateway can support up to 50,000 concurrent connections to the same destination endpoint. If outbound connections are dropping because SNAT ports are being exhausted, then NAT gateway may not be scaled out enough to handle the workload. More public IP addresses may need to be added to NAT gateway in order to provide more SNAT ports for outbound connectivity.
+NAT gateway must be detached from all subnets within a virtual network before the resource can be removed or deleted. Follow these steps to remove subnets from your NAT gateway before you delete it:
-The table below describes two common scenarios in which outbound connectivity may not be scaled out enough and how to validate and mitigate these issues:
+**Recommended Steps**
-| Scenario | Evidence |Mitigation |
-||||
-| You're experiencing contention for SNAT ports and SNAT port exhaustion during periods of high usage. | You run the following [metrics](nat-metrics.md) in Azure Monitor: **Total SNAT Connection**: "Sum" aggregation shows high connection volume. "Failed" connection state shows transient or persistent failures over time. **Dropped Packets**: "Sum" aggregation shows packets dropping consistent with high connection volume. | Determine if you can add more public IP addresses or public IP prefixes. This addition will allow for up to 16 IP addresses in total to your NAT gateway. This addition will provide more inventory for available SNAT ports (64,000 per IP address) and allow you to scale your scenario further.|
-| You've already given 16 IP addresses and still are experiencing SNAT port exhaustion. | Attempt to add more IP addresses fails. Total number of IP addresses from public IP address resources or public IP prefix resources exceeds a total of 16. | Distribute your application environment across multiple subnets and provide a NAT gateway resource for each subnet. |
+1. In the portal, navigate to the Overview page of your NAT gateway resource.
->[!NOTE]
->It is important to understand why SNAT exhaustion occurs. Make sure you are using the right patterns for scalable and reliable scenarios. Adding more SNAT ports to a scenario without understanding the cause of the demand should be a last resort. If you do not understand why your scenario is applying pressure on SNAT port inventory, adding more SNAT ports to the inventory by adding more IP addresses will only delay the same exhaustion failure as your application scales. You may be masking other inefficiencies and anti-patterns.
+2. Under Settings in the left-hand navigation pane, select Subnets.
-### TCP idle timeout timers set higher than the default value
+3. Clear the checkboxes next to the subnets that are associated with your NAT gateway.
-The NAT gateway TCP idle timeout timer is set to 4 minutes by default but is configurable up to 120 minutes. If this setting is changed to a higher value than the default, NAT gateway will hold on to flows longer and can create [additional pressure on SNAT port inventory](nat-gateway-resource.md#timers). The table below describes a common scenario in which a high TCP idle timeout may be causing SNAT exhaustion and provides possible mitigation steps to take:
+4. Save your subnet configuration changes.
-| Scenario | Evidence | Mitigation |
-||||
-| You would like to ensure that TCP connections stay active for long periods of time without idle timing out so you increase the TCP idle timeout timer setting. After a while you start to notice that connection failures occur more often. You suspect that you may be exhausting your inventory of SNAT ports since connections are holding on to them longer. | You check the following [NAT gateway metrics](nat-metrics.md) in Azure Monitor to determine if SNAT port exhaustion is happening: **Total SNAT Connection**: "Sum" aggregation shows high connection volume. "Failed" connection state shows transient or persistent failures over time. **Dropped Packets**: "Sum" aggregation shows packets dropping consistent with high connection volume. | You have a few possible mitigation steps that you can take to resolve SNAT port exhaustion: - **Reduce the TCP idle timeout** to a lower value to free up SNAT port inventory earlier. The TCP idle timeout timer cannot be set lower than 4 minutes. - Consider **[asynchronous polling patterns](/azure/architecture/patterns/async-request-reply)** to free up connection resources for other operations. - **Use TCP keepalives or application layer keepalives** to avoid intermediate systems timing out. For examples, see [.NET examples](/dotnet/api/system.net.servicepoint.settcpkeepalive). - For connections going to Azure PaaS services, use **[Private Link](../../private-link/private-link-overview.md)**. Private Link eliminates the need to use public IPs of your NAT gateway which frees up more SNAT ports for outbound connections to the internet.|
+## Add or remove subnet
-## Connection failures due to idle timeouts
+### NAT gateway can't be attached to subnet already attached to another NAT gateway
-### TCP idle timeout
+A subnet within a virtual network can't have more than one NAT gateway attached to it for connecting outbound to the internet. An individual NAT gateway resource can be associated to multiple subnets within the same virtual network. NAT gateway can't span beyond a single virtual network.
-As described in the [TCP timers](#tcp-idle-timeout-timers-set-higher-than-the-default-value) section above, TCP keepalives should be used instead to refresh idle flows and reset the idle timeout. TCP keepalives only need to be enabled from one side of a connection in order to keep a connection alive from both sides. When a TCP keepalive is sent from one side of a connection, the other side automatically sends an ACK packet. The idle timeout timer is then reset on both sides of the connection. To learn more, see [Timer considerations](/azure/virtual-network/nat-gateway-resource#timers).
+### Basic SKU resources can't exist in the same subnet as NAT gateway
->[!NOTE]
->Increasing the TCP idle timeout is a last resort and may not resolve the root cause. A long timeout can cause low rate failures when timeout expires and introduce delay and unnecessary failures.
+NAT gateway isn't compatible with basic resources, such as Basic Load Balancer or Basic Public IP. Basic resources must be placed on a subnet not associated with a NAT Gateway. Basic Load Balancer and Basic Public IP can be upgraded to standard to work with NAT gateway.
-### UDP idle timeout
+* To upgrade a basic load balancer to standard, see [upgrade from basic public to standard public load balancer](../../load-balancer/upgrade-basic-standard.md).
-UDP idle timeout timers are set to 4 minutes. Unlike TCP idle timeout timers for NAT gateway, UDP idle timeout timers are not configurable. The table below describes a common scenario encountered with connections dropping due to UDP traffic idle timing out and steps to take to mitigate the issue.
+* To upgrade a basic public IP to standard, see [upgrade from basic public to standard public IP](../ip-services/public-ip-upgrade-portal.md).
-| Scenario | Evidence | Mitigation |
-||||
-| You notice that UDP traffic is dropping connections that need to be maintained for long periods of time. | You check the following [NAT gateway metrics](nat-metrics.md) in Azure Monitor, **Dropped Packets**: "Sum" aggregation shows packets dropping consistent with high connection volume. | A few possible mitigation steps that can be taken: - **Enable UDP keepalives**. Keep in mind that when a UDP keepalive is enabled, it is only active for one direction in a connection. This means that the connection can still time-out from going idle on the other side of a connection. To prevent a UDP connection from going idle and timing out, UDP keepalives should be enabled for both directions in a connection flow. - **Application layer keepalives** can also be used to refresh idle flows and reset the idle timeout. Check the server side for what options exist for application specific keepalives. |
+### NAT gateway can't be attached to a gateway subnet
-## Connection issues with NAT gateway and integrated services
+NAT gateway can't be deployed in a gateway subnet. A gateway subnet is used by a VPN gateway for sending encrypted traffic between an Azure virtual network and an on-premises location. See [VPN gateway overview](../../vpn-gateway/vpn-gateway-about-vpngateways.md) to learn more about how gateway subnets are used by VPN gateway.
-### Azure App Service regional Virtual network integration turned off
+### Can't attach NAT gateway to a subnet that contains a virtual machine NIC in a failed state
-NAT gateway can be used with Azure app services to allow applications to make outbound calls from a virtual network. To use this integration between Azure app services and NAT gateway, regional virtual network integration must be enabled. See [how regional virtual network integration works](../../app-service/overview-vnet-integration.md#how-regional-virtual-network-integration-works) to learn more.
+When you associate a NAT gateway with a subnet that contains a virtual machine network interface (NIC) in a failed state, you'll receive an error message indicating that this action can't be performed. You must first resolve the VM NIC failed state before you can attach a NAT gateway to the subnet.
-To use NAT gateway with Azure App services, follow these steps:
-1. Ensure that your application(s) have virtual network integration configured, see [Enable virtual network integration](../../app-service/configure-vnet-integration-enable.md).
-2. Ensure that **Route All** is enabled for your virtual network integration, see [Configure virtual network integration routing](../../app-service/configure-vnet-integration-routing.md).
-3. Create a NAT gateway resource.
-4. Create a new public IP address or attach an existing public IP address in your network to NAT gateway.
-5. Assign NAT gateway to the same subnet being used for Virtual network integration with your application(s).
+To get your virtual machine NIC out of a failed state, you can use one of the following two methods.
-To see step-by-step instructions on how to configure NAT gateway with virtual network integration, see [Configuring NAT gateway integration](../../app-service/networking/nat-gateway-integration.md#configuring-nat-gateway-integration)
+#### Use PowerShell to get your virtual machine NIC out of a failed state
-A couple important notes about the NAT gateway and Azure App Services integration:
-* Virtual network integration does not provide inbound private access to your app from the virtual network.
-* Because of the nature of how virtual network integration operates, the traffic from virtual network integration does not show up in Azure Network Watcher or NSG flow logs.
+1. Determine the provisioning state of your NICs by using the [Get-AzNetworkInterface PowerShell command](/powershell/module/az.network/get-aznetworkinterface#example-2-get-all-network-interfaces-with-a-specific-provisioning-state) and checking whether the value of `provisioningState` is `Succeeded`.
-## NAT gateway public IP not being used for outbound traffic
+2. Perform [GET/SET PowerShell commands](/powershell/module/az.network/set-aznetworkinterface#example-1-configure-a-network-interface) on the network interface to update the provisioning state.
-### VMs hold on to prior SNAT IP with active connection after NAT gateway added to a VNet
+3. Check the results of this operation by checking the provisioning state of your NICs again (follow commands from step 1).
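+
+A rough Python equivalent of the same GET and re-PUT refresh, assuming the `azure-identity` and `azure-mgmt-network` packages (the resource names are placeholders):
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.network import NetworkManagementClient
+
+client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
+nic = client.network_interfaces.get("<resource-group>", "<nic-name>")
+print("Before:", nic.provisioning_state)
+
+# Re-PUT the NIC unchanged to refresh its provisioning state.
+nic = client.network_interfaces.begin_create_or_update(
+    "<resource-group>", "<nic-name>", nic
+).result()
+print("After:", nic.provisioning_state)
+```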
-[Virtual Network NAT gateway](nat-overview.md) supersedes outbound connectivity for a subnet. When transitioning from default SNAT or load balancer outbound SNAT to using NAT gateway, new connections will immediately begin using the IP address(es) associated with the NAT gateway resource. However, if a virtual machine still has an established connection during the switch to NAT gateway, the connection will continue using the old SNAT IP address that was assigned when the connection was established.
+#### Use Azure Resource Explorer to get your virtual machine NIC out of a failed state
-Test and resolve issues with VMs holding on to old SNAT IP addresses by:
-1. Make sure you are really establishing a new connection and that connections are not being reused due to having already existed in the OS or because the browser was caching the connections in a connection pool. For example, when using curl in PowerShell, make sure to specify the -DisableKeepalive parameter to force a new connection. If you are using a browser, connections may also be pooled.
-2. It is not necessary to reboot a virtual machine in a subnet configured to NAT gateway. However, if a virtual machine is rebooted, the connection state is flushed. When the connection state has been flushed, all connections will begin using the NAT gateway resource's IP address(es). However, this is a side effect of the virtual machine being rebooted and not an indicator that a reboot is required.
+1. Go to [Azure Resource Explorer](https://resources.azure.com/) (we recommend using the Microsoft Edge browser).
-If you are still having trouble, open a support case for further troubleshooting.
+2. Expand Subscriptions (it takes a few seconds to appear on the left).
-### Virtual appliance UDRs and ExpressRoute override NAT gateway for routing outbound traffic
+3. Expand your subscription that contains the VM NIC in the failed state
-When forced tunneling with a custom UDR is enabled to direct traffic to a virtual appliance or VPN through ExpressRoute, the UDR or ExpressRoute takes precedence over NAT gateway for directing internet bound traffic. To learn more, see [custom UDRs](../virtual-networks-udr-overview.md#custom-routes).
+4. Expand resourceGroups
-The order of precedence for internet routing configurations is as follows:
+5. Expand the correct resource group that contains the VM NIC in the failed state
-Virtual appliance UDR / ExpressRoute >> NAT gateway >> instance level public IP addresses >> outbound rules on Load balancer >> default system
+6. Expand providers
-Test and resolve issues with a virtual appliance UDR or VPN ExpressRoute overriding your NAT gateway by:
-1. [Testing that the NAT gateway public IP](./quickstart-create-nat-gateway-portal.md#test-nat-gateway) is used for outbound traffic. If a different IP is being used, it could be because of a custom UDR, follow the remaining steps on how to check for and remove custom UDRs.
-2. Check for UDRs in the virtual network's route table, refer to [view route tables](../manage-route-table.md#view-route-tables).
-3. Remove the UDR from the route table by following [create, change, or delete an Azure route table](../manage-route-table.md#change-a-route-table).
+7. Expand Microsoft.Network
-Once the custom UDR is removed from the routing table, the NAT gateway public IP should now take precedence in routing outbound traffic to the internet.
+8. Expand networkInterfaces
-### Private IPs are used to connect to Azure services by Private Link
+9. Select the NIC that is in the failed provisioning state.
-[Private Link](../../private-link/private-link-overview.md) connects your Azure virtual networks privately to Azure PaaS services such as Storage, SQL, or Cosmos DB over the Azure backbone network instead of over the internet. Private Link uses the private IP addresses of virtual machine instances in your virtual network to connect to these Azure platform services instead of the public IP of NAT gateway. As a result, when looking at the source IP address used to connect to these Azure services, you will notice that the private IPs of your instances are used. See [Azure services listed here](../../private-link/availability.md) for all services supported by Private Link.
+10. Select the Read/Write button at the top
-When possible, Private Link should be used to connect directly from your virtual networks to Azure platform services in order to [reduce the demand on SNAT ports](#tcp-idle-timeout-timers-set-higher-than-the-default-value). Reducing the demand on SNAT ports can help reduce the risk of SNAT port exhaustion.
+11. Select the green GET button
-To create a Private Link, see the following Quickstart guides to get started:
-- [Create a Private Endpoint](../../private-link/create-private-endpoint-portal.md)-- [Create a Private Link](../../private-link/create-private-link-service-portal.md)
+12. Select the blue EDIT button
-To check which Private Endpoints you have set up with Private Link:
-1. From the Azure portal, search for Private Link in the search box.
-2. In the Private Link center, select Private Endpoints or Private Link services to see what configurations have been set up. See [Manage private endpoint connections](../../private-link/manage-private-endpoint.md#manage-private-endpoint-connections-on-azure-paas-resources) for more details.
+13. Select the green PUT button
-Service endpoints can also be used to connect your virtual network to Azure PaaS services. To check if you have service endpoints configured for your virtual network:
-1. From the Azure portal, navigate to your virtual network and select "Service endpoints" from Settings.
-2. All Service endpoints created will be listed along with which subnets they are configured. See [logging and troubleshooting Service endpoints](../virtual-network-service-endpoints-overview.md#logging-and-troubleshooting) for more details.
+14. Select the Read Only button at the top.
->[!NOTE]
->Private Link is the recommended option over Service endpoints for private access to Azure hosted services.
+15. The VM NIC should now be in a succeeded provisioning state. You can close your browser.
+
+## Add or remove public IP addresses
-## Connection failures in the Azure infrastructure
+### Can't exceed 16 public IP addresses on NAT gateway
-Azure monitors and operates its infrastructure with great care. However, transient failures can still occur, there is no guarantee that transmissions are lossless. Use design patterns that allow for SYN retransmissions for TCP applications. Use connection timeouts large enough to permit TCP SYN retransmission to reduce transient impacts caused by a lost SYN packet.
+NAT gateway can't be associated with more than 16 public IP addresses. You can use any combination of public IP addresses and prefixes with NAT gateway, up to a total of 16 IP addresses. The following IP prefix sizes can be used with NAT gateway (a quick arithmetic check follows the list):
-**What to check for:**
+* /28 (16 addresses)
-* Check for [SNAT exhaustion](#snat-exhaustion-due-to-nat-gateway-configuration).
-* The configuration parameter in a TCP stack that controls the SYN retransmission behavior is called RTO ([Retransmission Time-Out](https://tools.ietf.org/html/rfc793)). The RTO value is adjustable but typically 1 second or higher by default with exponential back-off. If your application's connection time-out is too short (for example 1 second), you may see sporadic connection timeouts. Increase the application connection time-out.
-* If you observe longer, unexpected timeouts with default application behaviors, open a support case for further troubleshooting.
+* /29 (eight addresses)
-We don't recommend artificially reducing the TCP connection timeout or tuning the RTO parameter.
+* /30 (four addresses)
-## Connection failures outside of the Azure infrastructure
+* /31 (two addresses)
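+
+As a quick check of the arithmetic, a prefix of length *n* contains 2^(32 - n) addresses, so any combination must sum to at most 16. For example:
+
+```python
+def prefix_addresses(length: int) -> int:
+    """Number of IPv4 addresses in a prefix, for example /28 -> 16."""
+    return 2 ** (32 - length)
+
+# A single /28 already reaches the 16-address limit of a NAT gateway,
+# while two /29 prefixes add up to the same total.
+assert prefix_addresses(28) == 16
+assert prefix_addresses(29) + prefix_addresses(29) == 16
+```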
-### Connection failures with public internet transit
+### IPv6 coexistence
-The chances of transient failures increases with a longer path to the destination and more intermediate systems. It's expected that transient failures can increase in frequency over [Azure infrastructure](#connection-failures-in-the-azure-infrastructure).
+[Virtual Network NAT gateway](nat-overview.md) supports IPv4 UDP and TCP protocols. NAT gateway can't be associated with an IPv6 Public IP address or IPv6 Public IP Prefix. NAT gateway can be deployed on a dual stack subnet, but will still only use IPv4 Public IP addresses for directing outbound traffic. Deploy NAT gateway on a dual stack subnet when you need IPv6 resources to exist in the same subnet as IPv4 resources.
-Follow the same guidance as preceding [Azure infrastructure](#connection-failures-in-the-azure-infrastructure) section.
+### Can't use basic SKU public IPs with NAT gateway
-### Connection failures at the public internet destination
+NAT gateway is a standard SKU resource and can't be used with basic SKU resources, including basic public IP addresses. To use a basic SKU public IP address with your NAT gateway, first upgrade it by following this guidance: [Upgrade a public IP address](/azure/virtual-network/ip-services/public-ip-upgrade-portal)
-The previous sections apply, along with the internet endpoint that communication is established with. Other factors that can impact connectivity success are:
+### Can't mismatch zones of public IP addresses and NAT gateway
-* Traffic management on destination side, including,
-- API rate limiting imposed by the destination side.-- Volumetric DDoS mitigations or transport layer traffic shaping.
-* Firewall or other components at the destination.
+NAT gateway is a zonal resource and can either be designated to a specific zone or to 'no zone'. When NAT gateway is placed in 'no zone', Azure places the NAT gateway into a zone for you, but you don't have visibility into which zone the NAT gateway is located in.
-Use NAT gateway [metrics](nat-metrics.md) in Azure monitor to diagnose connection issues:
-* Look at packet count at the source and the destination (if available) to determine how many connection attempts were made.
-* Look at dropped packets to see how many packets were dropped by NAT gateway.
+NAT gateway can be used with public IP addresses designated to a specific zone, no zone, or all zones (zone-redundant), depending on its own availability zone configuration. Follow the guidance below; a programmatic zone check is sketched after the table:
-What else to check for:
-* Check for [SNAT exhaustion](#snat-exhaustion-due-to-nat-gateway-configuration).
-* Validate connectivity to an endpoint in the same region or elsewhere for comparison.
-* If you are creating high volume or transaction rate testing, explore if reducing the rate reduces the occurrence of failures.
-* If changing rate impacts the rate of failures, check if API rate limits or other constraints on the destination side might have been reached.
+| NAT gateway availability zone designation | Public IP address / prefix designation that can be used |
+|||
+| No zone | Zone-redundant, No zone, or Zonal (the public IP zone designation can be any zone within a region in order to work with a no zone NAT gateway) |
+| Designated to a specific zone | The public IP address zone must match the zone of the NAT gateway |
-If your investigation is inconclusive, open a support case for further troubleshooting.
+>[!NOTE]
+>If you need to know the zone that your NAT gateway resides in, make sure to designate it to a specific availability zone.
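+
+A minimal sketch of the zone-match check via the Python SDK, assuming the `azure-identity` and `azure-mgmt-network` packages (the resource names are placeholders):
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.network import NetworkManagementClient
+
+client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
+nat = client.nat_gateways.get("<resource-group>", "<nat-gateway-name>")
+pip = client.public_ip_addresses.get("<resource-group>", "<public-ip-name>")
+
+nat_zones, pip_zones = set(nat.zones or []), set(pip.zones or [])
+# A zonal NAT gateway requires public IPs designated to the same zone.
+if nat_zones and nat_zones != pip_zones:
+    print(f"Zone mismatch: NAT gateway {nat_zones} vs public IP {pip_zones}")
+```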
## Next steps
-We are always looking to improve the experience of our customers. If you are experiencing issues with NAT gateway that are not listed or resolved by this article, submit feedback through GitHub via the bottom of this page and we will address your feedback as soon as possible.
+We're always looking to improve the experience of our customers. If you're experiencing issues with NAT gateway that aren't listed or resolved by this article, submit feedback through GitHub via the bottom of this page. We'll address your feedback as soon as possible.
To learn more about NAT gateway, see:
* [Virtual Network NAT](nat-overview.md)

* [NAT gateway resource](nat-gateway-resource.md)
-* [Metrics and alerts for NAT gateway resources](nat-metrics.md).
+
+* [Metrics and alerts for NAT gateway resources](nat-metrics.md).
virtual-network Virtual Networks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-overview.md
Azure resources communicate securely with each other in one of the following way
You can connect your on-premises computers and networks to a virtual network using any of the following options: -- **Point-to-site virtual private network (VPN):** Established between a virtual network and a single computer in your network. Each computer that wants to establish connectivity with a virtual network must configure its connection. This connection type is great if you're just getting started with Azure, or for developers, because it requires little or no changes to your existing network. The communication between your computer and a virtual network is sent through an encrypted tunnel over the internet. To learn more, see [Point-to-site VPN](../vpn-gateway/point-to-site-about.md?toc=%2fazure%2fvirtual-network%2ftoc.json#).-- **Site-to-site VPN:** Established between your on-premises VPN device and an Azure VPN Gateway that is deployed in a virtual network. This connection type enables any on-premises resource that you authorize to access a virtual network. The communication between your on-premises VPN device and an Azure VPN gateway is sent through an encrypted tunnel over the internet. To learn more, see [Site-to-site VPN](../vpn-gateway/design.md?toc=%2fazure%2fvirtual-network%2ftoc.json#s2smulti).-- **Azure ExpressRoute:** Established between your network and Azure, through an ExpressRoute partner. This connection is private. Traffic does not go over the internet. To learn more, see [ExpressRoute](../expressroute/expressroute-introduction.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
+- **Point-to-site virtual private network (VPN):** Established between a virtual network and a single computer in your network. Each computer that wants to establish connectivity with a virtual network must configure its connection. This connection type is great if you're just getting started with Azure, or for developers, because it requires little or no changes to your existing network. The communication between your computer and a virtual network is sent through an encrypted tunnel over the internet. To learn more, see [Point-to-site VPN](../vpn-gateway/point-to-site-about.md?toc=/azure/virtual-network/toc.json#).
+- **Site-to-site VPN:** Established between your on-premises VPN device and an Azure VPN Gateway that is deployed in a virtual network. This connection type enables any on-premises resource that you authorize to access a virtual network. The communication between your on-premises VPN device and an Azure VPN gateway is sent through an encrypted tunnel over the internet. To learn more, see [Site-to-site VPN](../vpn-gateway/design.md?toc=/azure/virtual-network/toc.json#s2smulti).
+- **Azure ExpressRoute:** Established between your network and Azure, through an ExpressRoute partner. This connection is private. Traffic does not go over the internet. To learn more, see [ExpressRoute](../expressroute/expressroute-introduction.md?toc=/azure/virtual-network/toc.json).
### Filter network traffic
You can filter network traffic between subnets using either or both of the follo
Azure routes traffic between subnets, connected virtual networks, on-premises networks, and the Internet, by default. You can implement either or both of the following options to override the default routes Azure creates: - **Route tables:** You can create custom route tables with routes that control where traffic is routed to for each subnet. Learn more about [route tables](virtual-networks-udr-overview.md#user-defined).-- **Border gateway protocol (BGP) routes:** If you connect your virtual network to your on-premises network using an Azure VPN Gateway or ExpressRoute connection, you can propagate your on-premises BGP routes to your virtual networks. Learn more about using BGP with [Azure VPN Gateway](../vpn-gateway/vpn-gateway-bgp-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) and [ExpressRoute](../expressroute/expressroute-routing.md?toc=%2fazure%2fvirtual-network%2ftoc.json#dynamic-route-exchange).
+- **Border gateway protocol (BGP) routes:** If you connect your virtual network to your on-premises network using an Azure VPN Gateway or ExpressRoute connection, you can propagate your on-premises BGP routes to your virtual networks. Learn more about using BGP with [Azure VPN Gateway](../vpn-gateway/vpn-gateway-bgp-overview.md?toc=/azure/virtual-network/toc.json) and [ExpressRoute](../expressroute/expressroute-routing.md?toc=/azure/virtual-network/toc.json#dynamic-route-exchange).
### Virtual network integration for Azure services
visual-studio Vs Storage Cloud Services Getting Started Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/visual-studio/vs-storage-cloud-services-getting-started-blobs.md
Azure Blob Storage is a service for storing large amounts of unstructured data t
Just as files live in folders, storage blobs live in containers. After you have created a storage, you create one or more containers in the storage. For example, in a storage called "Scrapbook," you can create containers in the storage called "images" to store pictures and another called "audio" to store audio files. After you create the containers, you can upload individual blob files to them. * For more information on programmatically manipulating blobs, see [Get started with Azure Blob storage using .NET](../storage/blobs/storage-quickstart-blobs-dotnet.md).
-* For general information about Azure Storage, see [Storage documentation](https://azure.microsoft.com/documentation/services/storage/).
-* For general information about Azure Cloud Services, see [Cloud Services documentation](https://azure.microsoft.com/documentation/services/cloud-services/).
+* For general information about Azure Storage, see [Storage documentation](/azure/storage/).
+* For general information about Azure Cloud Services, see [Cloud Services documentation](/azure/cloud-services/).
* For more information about programming ASP.NET applications, see [ASP.NET](https://www.asp.net). ## Access blob containers in code
async public static Task ListBlobsSegmentedInFlatListing(CloudBlobContainer cont
``` ## Next steps
visual-studio Vs Storage Cloud Services Getting Started Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/visual-studio/vs-storage-cloud-services-getting-started-queues.md
We'll show you how to create a queue in code. We'll also show you how to perform
The **Add Connected Services** operation installs the appropriate NuGet packages to access Azure storage in your project and adds the connection string for the storage account to your project configuration files. * See [Get started with Azure Queue storage using .NET](../storage/queues/storage-dotnet-how-to-use-queues.md) for more information on manipulating queues in code.
-* See [Storage documentation](https://azure.microsoft.com/documentation/services/storage/) for general information about Azure Storage.
-* See [Cloud Services documentation](https://azure.microsoft.com/documentation/services/cloud-services/) for general information about Azure cloud services.
+* See [Storage documentation](/azure/storage/) for general information about Azure Storage.
+* See [Cloud Services documentation](/azure/cloud-services/) for general information about Azure cloud services.
* See [ASP.NET](https://www.asp.net) for more information about programming ASP.NET applications. Azure Queue storage is a service for storing large numbers of messages that can be accessed from anywhere in the world via authenticated calls using HTTP or HTTPS. A single queue message can be up to 64 KB in size, and a queue can contain millions of messages, up to the total capacity limit of a storage account.
messageQueue.Delete();
``` ## Next steps
visual-studio Vs Storage Cloud Services Getting Started Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/visual-studio/vs-storage-cloud-services-getting-started-tables.md
To get started, you first need to create a table in your storage account. We'll
**NOTE:** Some of the APIs that perform calls out to Azure storage are asynchronous. See [Asynchronous programming with Async and Await](/previous-versions/hh191443(v=vs.140)) for more information. The code below assumes async programming methods are being used. * See [Get started with Azure Table storage using .NET](../cosmos-db/tutorial-develop-table-dotnet.md) for more information on programmatically manipulating tables.
-* See [Storage documentation](https://azure.microsoft.com/documentation/services/storage/) for general information about Azure Storage.
-* See [Cloud Services documentation](https://azure.microsoft.com/documentation/services/cloud-services/) for general information about Azure cloud services.
+* See [Storage documentation](/azure/storage/) for general information about Azure Storage.
+* See [Cloud Services documentation](/azure/cloud-services/) for general information about Azure cloud services.
* See [ASP.NET](https://www.asp.net) for more information about programming ASP.NET applications. ## Access tables in code
vpn-gateway Vpn Gateway About Vpn Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-vpn-devices.md
To help configure your VPN device, refer to the links that correspond to the app
| Microsoft |Routing and Remote Access Service |Windows Server 2012 |Not compatible |Supported | | Open Systems AG |Mission Control Security Gateway |N/A |[Configuration guide](https://open-systems.com/wp-content/uploads/2019/12/OpenSystems-AzureVPNSetup-Installation-Guide.pdf) |Not compatible | | Palo Alto Networks |All devices running PAN-OS |PAN-OS<br>PolicyBased: 6.1.5 or later<br>RouteBased: 7.1.4 |Supported |[Configuration guide](https://knowledgebase.paloaltonetworks.com/KCSArticleDetail?id=kA10g000000Cm6WCAS) |
-| Sentrium (Developer) | VyOS | VyOS 1.2.2 | Not tested | [Configuration guide ](https://docs.vyos.io/en/latest/configexamples/azure-vpn-bgp.html)|
+| Sentrium (Developer) | VyOS | VyOS 1.2.2 | Not tested | [Configuration guide](https://docs.vyos.io/en/latest/configexamples/azure-vpn-bgp.html)|
| ShareTech | Next Generation UTM (NU series) | 9.0.1.3 | Not compatible | [Configuration guide](http://www.sharetech.com.tw/images/file/Solution/NU_UTM/S2S_VPN_with_Azure_Route_Based_en.pdf) | | SonicWall |TZ Series, NSA Series<br>SuperMassive Series<br>E-Class NSA Series |SonicOS 5.8.x<br>SonicOS 5.9.x<br>SonicOS 6.x |Not compatible |[Configuration guide](https://www.sonicwall.com/support/knowledge-base/170505320011694) | | Sophos | XG Next Gen Firewall | XG v17 | Not tested | [Configuration guide](https://community.sophos.com/kb/127546)<br><br>[Configuration guide - Multiple SAs](https://community.sophos.com/kb/en-us/133154) |
vpn-gateway Vpn Gateway Forced Tunneling Rm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-forced-tunneling-rm.md
Forced tunneling in Azure is configured using virtual network custom user-define
* **On-premises routes:** To the Azure VPN gateway. * **Default route:** Directly to the Internet. Packets destined to the private IP addresses not covered by the previous two routes are dropped. * Forced tunneling must be associated with a VNet that has a route-based VPN gateway. Your forced tunneling configuration will override the default route for any subnet in its VNet. You need to set a "default site" among the cross-premises local sites connected to the virtual network. Also, the on-premises VPN device must be configured using 0.0.0.0/0 as traffic selectors.
-* ExpressRoute forced tunneling is not configured via this mechanism, but instead, is enabled by advertising a default route via the ExpressRoute BGP peering sessions. For more information, see the [ExpressRoute Documentation](https://azure.microsoft.com/documentation/services/expressroute/).
+* ExpressRoute forced tunneling is not configured via this mechanism, but instead, is enabled by advertising a default route via the ExpressRoute BGP peering sessions. For more information, see the [ExpressRoute Documentation](/azure/expressroute/).
## Configuration overview
vpn-gateway Vpn Gateway Forced Tunneling Rm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vs-azure-tools-storage-explorer-blobs.md
The following steps illustrate how to manage the blobs (and folders) within a bl
## Next steps * View the [latest Storage Explorer release notes and videos](https://www.storageexplorer.com).
-* Learn how to [create applications using Azure blobs, tables, queues, and files](https://azure.microsoft.com/documentation/services/storage/).
+* Learn how to [create applications using Azure blobs, tables, queues, and files](/azure/storage/).
[0]: ./media/vs-azure-tools-storage-explorer-blobs/blob-containers-create-context-menu.png [1]: ./media/vs-azure-tools-storage-explorer-blobs/blob-container-create.png
vpn-gateway Vpn Gateway Forced Tunneling Rm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vs-azure-tools-storage-explorer-files.md
The following steps illustrate how to manage the files (and folders) within a fi
- View the [latest Storage Explorer release notes and videos](https://www.storageexplorer.com/). -- Learn how to [create applications using Azure blobs, tables, queues, and files](https://azure.microsoft.com/documentation/services/storage/).
+- Learn how to [create applications using Azure blobs, tables, queues, and files](/azure/storage/).
web-application-firewall Afds Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/afds-overview.md
Previously updated : 05/06/2022 Last updated : 08/16/2022
Unknown bots are classified via published user agents without additional validat
![Bot Protection Rule Set](../media/afds-overview/botprotect2.png)
-If bot protection is enabled, incoming requests that match bot rules are logged at the FrontdoorWebApplicationFirewallLog log. You may access WAF logs from a storage account, event hub, or log analytics.
+If bot protection is enabled, incoming requests that match bot rules are logged. You may access WAF logs from a storage account, event hub, or log analytics.
## Configuration
web-application-firewall Waf Front Door Configure Custom Response Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-configure-custom-response-code.md
Previously updated : 06/10/2020 Last updated : 08/16/2022 -
+zone_pivot_groups: front-door-tiers
# Configure a custom response for Azure Web Application Firewall (WAF)
In the above example, we kept the response code as 403, and configured a short "
:::image type="content" source="../media/waf-front-door-configure-custom-response-code/custom-response.png" alt-text="Custom response example":::
-"{{azure-ref}}" inserts the unique reference string in the response body. The value matches the TrackingReference field in the `FrontdoorAccessLog` and
-`FrontdoorWebApplicationFirewallLog` logs.
+
+"{{azure-ref}}" inserts the unique reference string in the response body. The value matches the TrackingReference field in the `FrontDoorAccessLog` and `FrontDoorWebApplicationFirewallLog` logs.
+++
+"{{azure-ref}}" inserts the unique reference string in the response body. The value matches the TrackingReference field in the `FrontdoorAccessLog` and `FrontdoorWebApplicationFirewallLog` logs.
+ ## Configure custom response status code and message use PowerShell
web-application-firewall Waf Front Door Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-monitor.md
Previously updated : 05/11/2022 Last updated : 08/16/2022 zone_pivot_groups: front-door-tiers
The following example query returns the access log entries:
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.CDN" and Category == "FrontDoorAccessLog"
```
+
::: zone-end

::: zone pivot="front-door-classic"
AzureDiagnostics
The following shows an example log entry: +
+```json
+{
+ "time": "2020-06-09T22:32:17.8383427Z",
+ "category": "FrontDoorAccessLog",
+ "operationName": "Microsoft.Cdn/Profiles/AccessLog/Write",
+ "properties": {
+ "trackingReference": "08Q3gXgAAAAAe0s71BET/QYwmqtpHO7uAU0pDRURHRTA1MDgANjMxNTAwZDAtOTRiNS00YzIwLTljY2YtNjFhNzMyOWQyYTgy",
+ "httpMethod": "GET",
+ "httpVersion": "2.0",
+ "requestUri": "https://wafdemofrontdoorwebapp.azurefd.net:443/?q=%27%20or%201=1",
+ "requestBytes": "715",
+ "responseBytes": "380",
+ "userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4157.0 Safari/537.36 Edg/85.0.531.1",
+ "clientIp": "xxx.xxx.xxx.xxx",
+ "socketIp": "xxx.xxx.xxx.xxx",
+ "clientPort": "52097",
+ "timeTaken": "0.003",
+ "securityProtocol": "TLS 1.2",
+ "routingRuleName": "WAFdemoWebAppRouting",
+ "rulesEngineMatchNames": [],
+ "backendHostname": "wafdemowebappuscentral.azurewebsites.net:443",
+ "sentToOriginShield": false,
+ "httpStatusCode": "403",
+ "httpStatusDetails": "403",
+ "pop": "SJC",
+ "cacheStatus": "CONFIG_NOCACHE"
+ }
+}
+```
+++ ```json { "time": "2020-06-09T22:32:17.8383427Z",
The following shows an example log entry:
} ``` + ### WAF logs ::: zone pivot="front-door-standard-premium"
The following table shows the values logged for each request:
| Property | Description |
| - | - |
| Action | Action taken on the request. Logs include requests with all actions. Metrics include requests with all actions except *Log*. |
-| ClientIp | The IP address of the client that made the request. If there was an `X-Forwarded-For` header in the request, the client IP address is taken from that header field instead. |
+| ClientIP | The IP address of the client that made the request. If there was an `X-Forwarded-For` header in the request, the client IP address is taken from that header field instead. |
| ClientPort | The IP port of the client that made the request. |
| Details | Additional details on the request, including any threats that were detected. <br />matchVariableName: HTTP parameter name of the request matched, for example, header names (up to 100 characters maximum).<br /> matchVariableValue: Values that triggered the match (up to 100 characters maximum). |
| Host | The `Host` header of the request. |
The following table shows the values logged for each request:
| PolicyMode | Operations mode of the WAF policy. Possible values are `Prevention` and `Detection`. |
| RequestUri | Full URI of the request. |
| RuleName | The name of the WAF rule that the request matched. |
-| SocketIp | The source IP address seen by WAF. This IP address is based on the TCP session, and does not consider any request headers. |
+| SocketIP | The source IP address seen by WAF. This IP address is based on the TCP session, and does not consider any request headers. |
| TrackingReference | The unique reference string that identifies a request served by Front Door. This value is sent to the client in the `X-Azure-Ref` response header. Use this field when searching for a specific request in the log. |

The following example query shows the requests that were blocked by the Front Door WAF:
+
+```kusto
+AzureDiagnostics
+| where ResourceProvider == "MICROSOFT.CDN" and Category == "FrontDoorWebApplicationFirewallLog"
+| where action_s == "Block"
+```
++ ::: zone pivot="front-door-classic" ```kusto
AzureDiagnostics
::: zone-end
+The following shows an example log entry, including the reason that the request was blocked:
+ ::: zone pivot="front-door-standard-premium"
-```kusto
-AzureDiagnostics
-| where ResourceProvider == "MICROSOFT.CDN" and Category == "FrontDoorWebApplicationFirewallLog"
-| where action_s == "Block"
+```json
+{
+ "time": "2020-06-09T22:32:17.8376810Z",
+ "category": "FrontdoorWebApplicationFirewallLog",
+ "operationName": "Microsoft.Cdn/Profiles/Write",
+ "properties": {
+ "clientIP": "xxx.xxx.xxx.xxx",
+ "clientPort": "52097",
+ "socketIP": "xxx.xxx.xxx.xxx",
+ "requestUri": "https://wafdemofrontdoorwebapp.azurefd.net:443/?q=%27%20or%201=1",
+ "ruleName": "Microsoft_DefaultRuleSet-1.1-SQLI-942100",
+ "policy": "WafDemoCustomPolicy",
+ "action": "Block",
+ "host": "wafdemofrontdoorwebapp.azurefd.net",
+ "trackingReference": "08Q3gXgAAAAAe0s71BET/QYwmqtpHO7uAU0pDRURHRTA1MDgANjMxNTAwZDAtOTRiNS00YzIwLTljY2YtNjFhNzMyOWQyYTgy",
+ "policyMode": "prevention",
+ "details": {
+ "matches": [
+ {
+ "matchVariableName": "QueryParamValue:q",
+ "matchVariableValue": "' or 1=1"
+ }
+ ]
+ }
+ }
+}
```

::: zone-end
-The following shows an example log entry, including the reason that the request was blocked:
```json
{
The following shows an example log entry, including the reason that the request
}
```
+

## Next steps

- Learn more about [Front Door](../../frontdoor/front-door-overview.md).
web-application-firewall Waf Front Door Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-tuning.md
Previously updated : 08/21/2022 Last updated : 08/28/2022
+zone_pivot_groups: front-door-tiers
# Tuning Web Application Firewall (WAF) for Azure Front Door
UserId=20&captchaId=7&captchaId=15&comment="1=1"&rating=3
```

If you try the request, the WAF blocks traffic that contains your *1=1* string in any parameter or field. This string is often associated with a SQL injection attack. You can look through the logs and see the timestamp of the request and the rules that blocked or matched it.
-
-In the following example, we explore a `FrontdoorWebApplicationFirewallLog` log generated due to a rule match. The following Log Analytics query can be used to find requests that have been blocked within the last 24 hours:
+
+In the following example, we explore a log entry generated due to a rule match. The following Log Analytics query can be used to find requests that have been blocked within the last 24 hours:
+
```kusto
AzureDiagnostics
-| where Category == 'FrontdoorWebApplicationFirewallLog'
+| where Category == 'FrontDoorWebApplicationFirewallLog'
| where TimeGenerated > ago(1d) | where action_s == 'Block'
+```
+ +
+```kusto
+AzureDiagnostics
+| where Category == 'FrontdoorWebApplicationFirewallLog'
+| where TimeGenerated > ago(1d)
+| where action_s == 'Block'
```
+
In the `requestUri` field, you can see the request was made to `/api/Feedbacks/` specifically. Going further, we find the rule ID `942110` in the `ruleName` field. Knowing the rule ID, you could go to the [OWASP ModSecurity Core Rule Set Official Repository](https://github.com/coreruleset/coreruleset) and search by that [rule ID](https://github.com/coreruleset/coreruleset/blob/v3.1/dev/rules/REQUEST-942-APPLICATION-ATTACK-SQLI.conf) to review its code and understand exactly what this rule matches on. Then, by checking the `action` field, we see that this rule is set to block requests upon matching, and we confirm that the request was in fact blocked by the WAF because the `policyMode` is set to `prevention`. Now, let's check the information in the `details` field. This is where you can see the `matchVariableName` and the `matchVariableValue` information. We learn that this rule was triggered because someone input *1=1* in the `comment` field of the web app.
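If you want to isolate every request that a particular rule matched, a filter along the following lines can help. This is a sketch: `ruleName_s`, `requestUri_s`, and the other suffixed columns are assumed names from the AzureDiagnostics flattening of these logs.

```kusto
// Isolate recent hits on one rule ID (942110 in this walkthrough),
// using the Standard/Premium category shown earlier.
AzureDiagnostics
| where Category == 'FrontDoorWebApplicationFirewallLog'
| where TimeGenerated > ago(1d)
| where ruleName_s endswith "942110"
| project TimeGenerated, requestUri_s, action_s, policyMode_s, trackingReference_s
```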
-
++
+```json
+{
+ "time": "2020-09-24T16:43:04.5422943Z",
+ "resourceId": "/SUBSCRIPTIONS/<Subscription ID>/RESOURCEGROUPS/<Resource Group Name>/PROVIDERS/MICROSOFT.CDN/PROFILES/AFDWAFDEMOSITE",
+ "category": "FrontDoorWebApplicationFirewallLog",
+ "operationName": "Microsoft.Cdn/Profiles/WebApplicationFirewallLog/Write",
+ "properties": {
+ "clientIP": "1.1.1.1",
+ "clientPort": "53566",
+ "socketIP": "1.1.1.1",
+ "requestUri": "http://afdwafdemosite.azurefd.net:80/api/Feedbacks/",
+ "ruleName": "DefaultRuleSet-1.0-SQLI-942110",
+ "policy": "AFDWAFDemoPolicy",
+ "action": "Block",
+ "host": "afdwafdemosite.azurefd.net",
+ "trackingReference": "0mMxsXwAAAABEalekYeI4S55qpi5R7R0/V1NURURHRTA4MTIAZGI4NGQzZDgtNWQ5Ny00ZWRkLTg2ZGYtZDJjNThlMzI2N2I4",
+ "policyMode": "prevention",
+ "details": {
+ "matches": [
+ {
+ "matchVariableName": "PostParamValue:comment",
+ "matchVariableValue": "\"1=1\""
+ }
+ ],
+ "msg": "SQL Injection Attack: Common Injection Testing Detected",
+ "data": "Matched Data: \"1=1\" found within PostParamValue:comment: \"1=1\""
+ }
+ }
+}
+```
+++
```json
{ "time": "2020-09-24T16:43:04.5422943Z",
Now, let's check the information in the `details` field. This is where you can s
  }
}
```
+
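To surface the match details without opening each raw JSON entry, you can project the flattened detail columns. This is a sketch only; `details_matches_s`, `details_msg_s`, and `details_data_s` are assumptions about how AzureDiagnostics flattens the nested `details` object:

```kusto
// Project the rule-match detail fields for blocked requests.
AzureDiagnostics
| where Category == 'FrontDoorWebApplicationFirewallLog'
| where action_s == 'Block'
| project TimeGenerated, requestUri_s, ruleName_s, details_matches_s, details_msg_s, details_data_s
```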
-There is also value in checking the access logs to expand your knowledge about a given WAF event. Below we review the `FrontdoorAccessLog` log that was generated as a response to the event above.
+There is also value in checking the access logs to expand your knowledge about a given WAF event. Below we review the log that was generated as a response to the event above.
You can see these are related logs based on the `trackingReference` value being the same. Among the various fields that provide general insight, such as `userAgent` and `clientIP`, we call attention to the `httpStatusCode` and `httpStatusDetails` fields. Here, we can confirm that the client received an HTTP 403 response, which confirms that this request was denied and blocked.
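Because the two entries share a tracking reference, a Log Analytics query along these lines can correlate them. This is a sketch; the `_s`-suffixed column names are assumed from the AzureDiagnostics schema:

```kusto
// Correlate WAF log entries with access log entries on the shared
// tracking reference to see the rule match and the client response together.
AzureDiagnostics
| where Category == "FrontDoorWebApplicationFirewallLog"
| project TimeGenerated, trackingReference_s, ruleName_s, action_s
| join kind=inner (
    AzureDiagnostics
    | where Category == "FrontDoorAccessLog"
    | project trackingReference_s, httpStatusCode_s, userAgent_s
) on trackingReference_s
```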
-
++
+```json
+{
+ "time": "2020-09-24T16:43:04.5430764Z",
+ "resourceId": "/SUBSCRIPTIONS/<Subscription ID>/RESOURCEGROUPS/<Resource Group Name>/PROVIDERS/MICROSOFT.CDN/PROFILES/AFDWAFDEMOSITE",
+ "category": "FrontDoorAccessLog",
+ "operationName": "Microsoft.Cdn/Profiles/AccessLog/Write",
+ "properties": {
+ "trackingReference": "0mMxsXwAAAABEalekYeI4S55qpi5R7R0/V1NURURHRTA4MTIAZGI4NGQzZDgtNWQ5Ny00ZWRkLTg2ZGYtZDJjNThlMzI2N2I4",
+ "httpMethod": "POST",
+ "httpVersion": "1.1",
+ "requestUri": "http://afdwafdemosite.azurefd.net:80/api/Feedbacks/",
+ "requestBytes": "2160",
+ "responseBytes": "324",
+ "userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.83 Safari/537.36",
+ "clientIp": "1.1.1.1",
+ "socketIp": "1.1.1.1",
+ "clientPort": "53566",
+ "timeToFirstByte": "0.01",
+ "timeTaken": "0.011",
+ "securityProtocol": "",
+ "routingRuleName": "DemoBERoutingRule",
+ "rulesEngineMatchNames": [],
+ "backendHostname": "13.88.65.130:3000",
+ "isReceivedFromClient": true,
+ "httpStatusCode": "403",
+ "httpStatusDetails": "403",
+ "pop": "WST",
+ "cacheStatus": "CONFIG_NOCACHE"
+ }
+}
+```
+++
```json
{ "time": "2020-09-24T16:43:04.5430764Z",
You can see these are related logs based on the `trackingReference` value being
}
```
+

## Resolving false positives

To make an informed decision about handling a false positive, it’s important to familiarize yourself with the technologies your application uses. For example, say there isn't a SQL server in your technology stack, and you are getting false positives related to those rules. Disabling those rules doesn't necessarily weaken your security.
This is a field you can exclude. To learn more about exclusion lists, see [Web a
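As an illustrative sketch (the surrounding policy structure is omitted), an exclusion that targets a request body post args name of `comment` might look like the following in a Front Door WAF policy's managed rule configuration; the property names follow the managed rule exclusion schema:

```json
{
  "exclusions": [
    {
      "matchVariable": "RequestBodyPostArgNames",
      "selectorMatchOperator": "Equals",
      "selector": "comment"
    }
  ]
}
```

The same exclusion can also be added in the portal under the policy's managed rules settings.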
You can also examine the firewall logs to get the information you need to add to the exclusion list. To enable logging, see [Monitoring metrics and logs in Azure Front Door](./waf-front-door-monitor.md).
+
Examine the firewall log in the `PT1H.json` file for the hour that the request you want to inspect occurred. `PT1H.json` files are available in the storage account containers where the `FrontDoorWebApplicationFirewallLog` and the `FrontDoorAccessLog` diagnostic logs are stored.
++
+Examine the firewall log in the `PT1H.json` file for the hour that the request you want to inspect occurred. `PT1H.json` files are available in the storage account containers where the `FrontdoorWebApplicationFirewallLog` and the `FrontdoorAccessLog` diagnostic logs are stored.
++
In this example, you can see the rule that blocked the request (with the same Transaction Reference) and that occurred at the exact same time:
+
+```json
+{
+ "time": "2020-09-24T16:43:04.5422943Z",
+ "resourceId": "/SUBSCRIPTIONS/<Subscription ID>/RESOURCEGROUPS/<Resource Group Name>/PROVIDERS/MICROSOFT.CDN/PROFILES/AFDWAFDEMOSITE",
+ "category": "FrontDoorWebApplicationFirewallLog",
+ "operationName": "Microsoft.Cdn/Profiles/WebApplicationFirewallLog/Write",
+ "properties": {
+ "clientIP": "1.1.1.1",
+ "clientPort": "53566",
+ "socketIP": "1.1.1.1",
+ "requestUri": "http://afdwafdemosite.azurefd.net:80/api/Feedbacks/",
+ "ruleName": "DefaultRuleSet-1.0-SQLI-942110",
+ "policy": "AFDWAFDemoPolicy",
+ "action": "Block",
+ "host": "afdwafdemosite.azurefd.net",
+ "trackingReference": "0mMxsXwAAAABEalekYeI4S55qpi5R7R0/V1NURURHRTA4MTIAZGI4NGQzZDgtNWQ5Ny00ZWRkLTg2ZGYtZDJjNThlMzI2N2I4",
+ "policyMode": "prevention",
+ "details": {
+ "matches": [
+ {
+ "matchVariableName": "PostParamValue:comment",
+ "matchVariableValue": "\"1=1\""
+ }
+ ],
+ "msg": "SQL Injection Attack: Common Injection Testing Detected",
+ "data": "Matched Data: \"1=1\" found within PostParamValue:comment: \"1=1\""
+ }
+ }
+}
+```
+++
```json
{ "time": "2020-09-24T16:43:04.5422943Z",
In this example, you can see the rule that blocked the request (with the same Tr
}
```
+

With your knowledge of how the Azure-managed rule sets work (see [Web Application Firewall on Azure Front Door](afds-overview.md)), you know that the rule with the *action: Block* property is blocking based on the data matched in the request body. You can see in the details that it matched a pattern (`1=1`) and that the field is named `comment`. Follow the same steps as before to exclude the request body post args name that contains `comment`.

### Finding request header names
web-application-firewall Application Gateway Crs Rulegroups Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-crs-rulegroups-rules.md
Application Gateway web application firewall (WAF) protects web applications fro
## Core rule sets
-The Application Gateway WAF comes pre-configured with CRS 3.1 by default, but you can choose to use any other supported CRS version.
+The Application Gateway WAF comes pre-configured with CRS 3.2 by default, but you can choose to use any other supported CRS version.
CRS 3.2 offers a new engine and new rule sets defending against Java infections, an initial set of file upload checks, and fewer false positives compared with earlier versions of CRS. You can also [customize rules to suit your needs](application-gateway-customize-waf-rules-portal.md). Learn more about the new [Azure WAF engine](waf-engine.md).
web-application-firewall Application Gateway Waf Request Size Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-waf-request-size-limits.md
For CRS 3.2 (on the WAF_v2 SKU) and newer, these limits are as follows when usin
- 2 MB request body size limit
- 4 GB file upload limit
-Only requests with Content-Type of *multipart/form-data* are considered file uploads. The file part of the body content is evaluated against the file upload limit. For all other content types, the request body size limit applies.
+Only requests with Content-Type of *multipart/form-data* are considered for file uploads. For content to be considered as a file upload, it has to be a part of a multipart form with a *filename* header. For all other content types, the request body size limit applies.
To set request size limits in the Azure portal, configure **Global parameters** in the WAF policy resource's **Policy settings** page:
web-application-firewall Waf Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/waf-sentinel.md
description: This article shows you how to use Microsoft Sentinel with Azure Web
Previously updated : 10/12/2020 Last updated : 08/16/2022
The WAF workbook works for all Azure Front Door, Application Gateway, and CDN WA
To enable Log Analytics for each resource, go to your individual Azure Front Door, Application Gateway, or CDN resource:

1. Select **Diagnostic settings**.
-2. Select **+ Add diagnostic setting**.
-3. In the Diagnostic setting page:
+
+1. Select **+ Add diagnostic setting**.
+
+1. In the Diagnostic setting page:
   1. Type a name.
   1. Select **Send to Log Analytics**.
   1. Choose the log destination workspace.
   1. Select the log types that you want to analyze:
      1. Application Gateway: ‘ApplicationGatewayAccessLog’ and ‘ApplicationGatewayFirewallLog’
-      1. Azure Front Door: ‘FrontDoorAccessLog’ and ‘FrontDoorFirewallLog’
+      1. Azure Front Door Standard/Premium: ‘FrontDoorAccessLog’ and ‘FrontDoorFirewallLog’
+      1. Azure Front Door classic: ‘FrontdoorAccessLog’ and ‘FrontdoorFirewallLog’
      1. CDN: ‘AzureCdnAccessLog’
1. Select **Save**.

   :::image type="content" source="media//waf-sentinel/diagnostics-setting.png" alt-text="Diagnostic setting":::
-4. On the Azure home page, type *Microsoft Sentinel* in the search bar and select the **Microsoft Sentinel** resource.
-2. Select an already active workspace or create a new workspace.
-3. On the left side panel under **Configuration** select **Data Connectors**.
-4. Search for **Azure web application firewall** and select **Azure web application firewall (WAF)**. Select **Open connector** page on the bottom right.
+1. On the Azure home page, type *Microsoft Sentinel* in the search bar and select the **Microsoft Sentinel** resource.
+
+1. Select an already active workspace or create a new workspace.
+
+1. On the left side panel under **Configuration** select **Data Connectors**.
+
+1. Search for **Azure web application firewall** and select **Azure web application firewall (WAF)**. Select **Open connector** page on the bottom right.
:::image type="content" source="media//waf-sentinel/data-connectors.png" alt-text="Data connectors":::
-8. Follow the instructions under **Configuration** for each WAF resource that you want to have log analytic data for if you haven't done so previously.
-6. Once finished configuring individual WAF resources, select the **Next steps** tab. Select one of the recommended workbooks. This workbook will use all log analytic data that was enabled previously. A working WAF workbook should now exist for your WAF resources.
+1. Follow the instructions under **Configuration** for each WAF resource that you want to have log analytic data for if you haven't done so previously.
- :::image type="content" source="media//waf-sentinel/waf-workbooks.png" alt-text="WAF workbooks":::
+1. Once finished configuring individual WAF resources, select the **Next steps** tab. Select one of the recommended workbooks. This workbook will use all log analytic data that was enabled previously. A working WAF workbook should now exist for your WAF resources.
+ :::image type="content" source="media//waf-sentinel/waf-workbooks.png" alt-text="WAF workbooks" lightbox="media//waf-sentinel/waf-workbooks.png":::
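Once the connector and workbook are configured, a quick Log Analytics check such as the following sketch confirms that WAF telemetry is reaching the workspace; category names vary by resource and tier, as noted in the steps above:

```kusto
// Sanity check: count recent access log and firewall log entries per category.
AzureDiagnostics
| where TimeGenerated > ago(1h)
| where Category contains "AccessLog" or Category contains "FirewallLog"
| summarize entries = count() by Category
```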
## Next steps