Updates from: 08/31/2022 01:10:19
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Partner Deduce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-deduce.md
In order to collect the user_agent from client-side, create your own `**ContentDefinition**`.
To customize the user interface, you specify a URL in the `ContentDefinition` element with customized HTML content. In the self-asserted technical profile or orchestration step, you point to that ContentDefinition identifier.
-1. Open the `TrustFrameworksExtension.xml` and define a new **ContentDefinition** to customize the [self-asserted technical profile](https://docs.microsoft.com/azure/active-directory-b2c/self-asserted-technical-profile).
+1. Open the `TrustFrameworksExtension.xml` and define a new **ContentDefinition** to customize the [self-asserted technical profile](/azure/active-directory-b2c/self-asserted-technical-profile).
1. Find the `BuildingBlocks` element and add the `**api.selfassertedDeduce**` ContentDefinition:
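   A minimal sketch of what that ContentDefinition could look like (the `LoadUri` and `DataUri` values here are placeholders following common B2C defaults; the Deduce sample policy defines the exact values):

   ```xml
   <ContentDefinitions>
     <!-- Hypothetical page definition backing the user agent collection step -->
     <ContentDefinition Id="api.selfassertedDeduce">
       <LoadUri>~/tenant/templates/AzureBlue/selfAsserted.cshtml</LoadUri>
       <RecoveryUri>~/common/default_page_error.html</RecoveryUri>
       <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.7</DataUri>
     </ContentDefinition>
   </ContentDefinitions>
   ```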
The **ClaimsSchema** element defines the claim types that can be referenced as part of the policy.
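For illustration, a claim type for the collected user agent could be declared along these lines (a sketch; the exact claim names used by the Deduce integration are defined in its sample policy):

```xml
<BuildingBlocks>
  <ClaimsSchema>
    <!-- Hypothetical claim holding the browser user agent collected client-side -->
    <ClaimType Id="user_agent">
      <DisplayName>User agent</DisplayName>
      <DataType>string</DataType>
    </ClaimType>
  </ClaimsSchema>
</BuildingBlocks>
```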
### Step 6: Add Deduce ClaimsProvider
-A **claims provider** is an interface to communicate with different types of parties via its [technical profiles](https://docs.microsoft.com/azure/active-directory-b2c/technicalprofiles).
+A **claims provider** is an interface to communicate with different types of parties via its [technical profiles](/azure/active-directory-b2c/technicalprofiles).
- `SelfAsserted-UserAgent` self-asserted technical profile is used to collect user_agent from client-side.
-- `deduce_insight_api` technical profile sends data to the Deduce RESTful service in an input claims collection and receives data back in an output claims collection. For more information, see [integrate REST API claims exchanges in your Azure AD B2C custom policy](https://docs.microsoft.com/azure/active-directory-b2c/api-connectors-overview?pivots=b2c-custom-policy)
+- `deduce_insight_api` technical profile sends data to the Deduce RESTful service in an input claims collection and receives data back in an output claims collection. For more information, see [integrate REST API claims exchanges in your Azure AD B2C custom policy](/azure/active-directory-b2c/api-connectors-overview?pivots=b2c-custom-policy)
You can define Deduce as a claims provider by adding it to the **ClaimsProvider** element in the extension file of your policy.
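As a sketch, the ClaimsProvider could be shaped as follows, assuming a RESTful technical profile; the `ServiceUrl`, authentication settings, and claim names are placeholders, and the authoritative definition ships with the Deduce sample policy:

```xml
<ClaimsProvider>
  <DisplayName>Deduce</DisplayName>
  <TechnicalProfiles>
    <!-- Sends input claims to the Deduce REST service and maps the response to output claims -->
    <TechnicalProfile Id="deduce_insight_api">
      <DisplayName>Deduce Insights API</DisplayName>
      <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.RestfulProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
      <Metadata>
        <Item Key="ServiceUrl">https://your-deduce-endpoint.example.com/insights</Item>
        <Item Key="SendClaimsIn">Body</Item>
        <Item Key="AuthenticationType">None</Item>
      </Metadata>
      <InputClaims>
        <InputClaim ClaimTypeReferenceId="user_agent" />
      </InputClaims>
      <OutputClaims>
        <!-- Placeholder output claim; the Deduce response defines the real set -->
        <OutputClaim ClaimTypeReferenceId="correlationId" />
      </OutputClaims>
    </TechnicalProfile>
  </TechnicalProfiles>
</ClaimsProvider>
```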
active-directory-b2c Tutorial Delete Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-delete-tenant.md
Previously updated : 09/20/2021 Last updated : 08/30/2022
# Clean up resources and delete the tenant
-When you've finished the Azure AD B2C tutorials, you can delete the tenant you used for testing or training. To delete the tenant, you'll first need to delete all tenant resources. In this article, you'll:
+When you've finished the Azure Active Directory B2C (Azure AD B2C) tutorials, you can delete the tenant you used for testing or training. To delete the tenant, you'll first need to delete all tenant resources. In this article, you'll:
> [!div class="checklist"]
> * Use the **Delete tenant** option to identify cleanup tasks
When you've finished the Azure AD B2C tutorials, you can delete the tenant you u
## Identify cleanup tasks
1. Sign in to the [Azure portal](https://portal.azure.com/) with a global administrator or subscription administrator role. Use the same work or school account or the same Microsoft account that you used to sign up for Azure.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
-1. Select the **Azure Active Directory** service.
-1. Under **Manage**, select **Properties**.
+1. Make sure you're using the directory that contains your Azure AD B2C tenant:
+ 1. Select the **Directories + subscriptions** icon in the portal toolbar.
+ 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select the **Switch** button next to it.
+1. In the Azure portal, search for and select the **Azure Active Directory** service.
+1. In the left menu, under **Manage**, select **Properties**.
1. Under **Access management for Azure resources**, select **Yes**, and then select **Save**.
-1. Sign out of the Azure portal and then sign back in to refresh your access. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
-1. Select the **Azure Active Directory** service.
-1. On the **Overview** page, select **Delete tenant**. The **Required action** column indicates the resources you'll need to remove before you can delete the tenant.
+1. Sign out of the Azure portal and then sign back in to refresh your access.
+1. Repeat step two to make sure you're using the directory that contains your Azure AD B2C tenant.
+1. In the Azure portal, search for and select the **Azure Active Directory** service.
+1. On the **Overview** page, select **Manage tenants**.
+1. On the **Manage tenants** page, select the checkbox next to the tenant you want to delete, and then, at the top of the page, select the **Delete** button. The **Required action** column indicates the resources you need to remove before you can delete the tenant.
![Delete tenant tasks](media/tutorial-delete-tenant/delete-tenant-tasks.png)

## Delete tenant resources
-If you have the confirmation page open from the previous section, you can use the links in the **Required action** column to open the Azure portal pages where you can remove these resources. Or, you can remove tenant resources from within the Azure AD B2C service using the following steps.
+If you have the confirmation page open from the previous section, you can use the links in the **Required action** column to open the Azure portal pages where you can remove these resources. Or, you can remove tenant resources from within the Azure AD B2C service using the following steps.
1. Sign in to the [Azure portal](https://portal.azure.com/) with a global administrator or subscription administrator role. Use the same work or school account or the same Microsoft account that you used to sign up for Azure.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
-1. Select the **Azure AD B2C** service. Or use the search box to find and select **Azure AD B2C**.
-1. Delete all users *except* the admin account you're currently signed in as: Under **Manage**, select **Users**. On the **All users** page, select the checkbox next to each user (except the admin account you're currently signed in as). Select **Delete**, and then select **Yes** when prompted.
+1. Make sure you're using the directory that contains your Azure AD B2C tenant:
+ 1. Select the **Directories + subscriptions** icon in the portal toolbar.
+ 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select the **Switch** button next to it.
+1. In the Azure portal, select the **Azure AD B2C** service, or search for and select **Azure AD B2C**.
+1. Delete all users *except* the admin account you're currently signed in as:
+ 1. Under **Manage**, select **Users**.
+ 1. On the **All users** page, select the checkbox next to each user (except the admin account you're currently signed in as).
+ 1. At the top of the page, select **Delete user**, and then select **Yes** when prompted.
![Delete users](media/tutorial-delete-tenant/delete-users.png)
-1. Delete app registrations and the *b2c-extensions-app*: Under **Manage**, select **App registrations**. Select the **All applications** tab. Select an application, and then select **Delete**. Repeat for all applications, including the **b2c-extensions-app** application.
+1. Delete app registrations and the *b2c-extensions-app*:
+ 1. Under **Manage**, select **App registrations**.
+ 1. Select the **All applications** tab.
+ 1. Select an application to open it, and then select the **Delete** button. Repeat for all applications, including the **b2c-extensions-app** application.
![Delete application](media/tutorial-delete-tenant/delete-applications.png)
-1. Delete any identity providers you configured: Under **Manage**, select **Identity providers**. Select an identity provider you configured, and then select **Remove**.
+1. Delete any identity providers you configured:
+ 1. Under **Manage**, select **Identity providers**.
+ 1. Select an identity provider you configured, and then select **Remove**.
![Delete identity provider](media/tutorial-delete-tenant/identity-providers.png)
-1. Delete user flows: Under **Policies**, select **User flows**. Next to each user flow, select the ellipses (...) and then select **Delete**.
+1. Delete user flows:
+ 1. Under **Policies**, select **User flows**.
+ 1. Next to each user flow, select the ellipses (...) and then select **Delete**.
![Delete user flows](media/tutorial-delete-tenant/user-flow.png)
-1. Delete policy keys: Under **Policies**, select **Identity Experience Framework**, and then select **Policy keys**. Next to each policy key, select the ellipses (...) and then select **Delete**.
+1. Delete policy keys:
+ 1. Under **Policies**, select **Identity Experience Framework**, and then select **Policy keys**.
+ 1. Next to each policy key, select the ellipses (...) and then select **Delete**.
-1. Delete custom policies: Under **Policies**, select **Identity Experience Framework**, select **Custom policies**, and then delete all policies.
+1. Delete custom policies:
+ 1. Under **Policies**, select **Identity Experience Framework**, and then select **Custom policies**.
+ 1. Next to each custom policy, select the ellipses (...) and then select **Delete**.
## Delete the tenant
+After you delete all the tenant resources, you can delete the tenant itself:
+ 1. Sign in to the [Azure portal](https://portal.azure.com/) with a global administrator or subscription administrator role. Use the same work or school account or the same Microsoft account that you used to sign up for Azure.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
-1. Select the **Azure Active Directory** service.
+1. Make sure you're using the directory that contains your Azure AD B2C tenant:
+ 1. Select the **Directories + subscriptions** icon in the portal toolbar.
+ 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select the **Switch** button next to it.
+1. In the Azure portal, search for and select the **Azure Active Directory** service.
1. If you haven't already granted yourself access management permissions, do the following:
- * Under **Manage**, select **Properties**.
- * Under **Access management for Azure resources**, select **Yes**, and then select **Save**.
- * Sign out of the Azure portal and then sign back in to refresh your access, and select the **Azure Active Directory** service.
+ 1. Under **Manage**, select **Properties**.
+ 1. Under **Access management for Azure resources**, select **Yes**, and then select **Save**.
+ 1. Sign out of the Azure portal and then sign back in to refresh your access, and select the **Azure Active Directory** service.
-1. On the **Overview** page, select **Delete tenant**.
+1. On the **Overview** page, select **Manage tenants**.
- ![Delete the tenant](media/tutorial-delete-tenant/delete-tenant.png)
+ :::image type="content" source="media/tutorial-delete-tenant/manage-tenant.png" alt-text="Screenshot of how to manage tenant for deletion.":::
+1. On the **Manage tenants** page, select the checkbox next to the tenant you want to delete, and then, at the top of the page, select the **Delete** button.
1. Follow the on-screen instructions to complete the process.

## Next steps
active-directory How To Mfa Additional Context https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-additional-context.md
description: Learn how to use additional context in MFA notifications
Previously updated : 08/08/2022 Last updated : 08/18/2022
# How to use additional context in Microsoft Authenticator app notifications (Preview) - Authentication Methods Policy
-This article covers how to improve the security of user sign-in by adding the application and location in Microsoft Authenticator app push notifications.
+This article covers how to improve the security of user sign-in by adding the application name and geographic location of the sign-in to Microsoft Authenticator push and passwordless notifications. The schema for the API to enable application name and geographic location is currently being updated. **While the API is updated over the next two weeks, you should use only the Azure AD portal to enable application name and geographic location.**
## Prerequisites
-Your organization will need to enable Authenticator app push notifications for some users or groups using the new Authentication Methods Policy API.
+Your organization will need to enable Microsoft Authenticator push notifications for some users or groups by using the Azure AD portal. The new Authentication Methods Policy API will soon be ready as another configuration option.
>[!NOTE]
>Additional context can be targeted to only a single group, which can be dynamic or nested. On-premises synchronized security groups and cloud-only security groups are supported for the Authentication Method Policy.

## Passwordless phone sign-in and multifactor authentication
-When a user receives a Passwordless phone sign-in or MFA push notification in the Authenticator app, they'll see the name of the application that requests the approval and the location based on the IP address where the sign-in originated from.
+When a user receives a passwordless phone sign-in or MFA push notification in the Authenticator app, they'll see the name of the application that requests the approval and the location based on the IP address where the sign-in originated from.
:::image type="content" border="false" source="./media/howto-authentication-passwordless-phone/location.png" alt-text="Screenshot of additional context in the MFA push notification.":::
The additional context can be combined with [number matching](how-to-mfa-number-
:::image type="content" border="false" source="./media/howto-authentication-passwordless-phone/location-with-number-match.png" alt-text="Screenshot of additional context with number matching in the MFA push notification.":::
-### Policy schema changes
+## Enable additional context
->[!NOTE]
->In Graph Explorer, ensure you've consented to the **Policy.Read.All** and **Policy.ReadWrite.AuthenticationMethod** permissions.
+To enable application name or geographic location, complete the following steps:
-Identify a single target group for the schema configuration. Then use the following API endpoint to change the displayAppInformationRequiredState property to **enabled**:
+1. In the Azure AD portal, click **Security** > **Authentication methods** > **Microsoft Authenticator**.
+1. On the **Basics** tab, click **Yes** and **All users** to enable the policy for everyone, and change **Authentication mode** to **Any**.
+
+ Only users who are enabled for Microsoft Authenticator here can be included in the policy to show the application name or geographic location of the sign-in, or excluded from it. Users who aren't enabled for Microsoft Authenticator can't see application name or geographic location.
-https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
+ :::image type="content" border="true" source="./media/how-to-mfa-additional-context/enable-settings-additional-context.png" alt-text="Screenshot of how to enable Microsoft Authenticator settings for Any authentication mode.":::
->[!NOTE]
->For Passwordless phone sign-in, the Authenticator app does not retrieve policy information just in time for each sign-in request. Instead, the Authenticator app does a best effort retrieval of the policy once every 7 days. We understand this limitation is less than ideal and are working to optimize the behavior. In the meantime, if you want to force a policy update to test using additional context with Passwordless phone sign-in, you can remove and re-add the account in the Authenticator app.
-
-#### MicrosoftAuthenticatorAuthenticationMethodConfiguration properties
-
-**PROPERTIES**
-
-| Property | Type | Description |
-|||-|
-| ID | String | The authentication method policy identifier. |
-| state | authenticationMethodState | Possible values are: **enabled**<br>**disabled** |
-
-**RELATIONSHIPS**
-
-| Relationship | Type | Description |
-|--||-|
-| includeTargets | [microsoftAuthenticatorAuthenticationMethodTarget](/graph/api/resources/passwordlessmicrosoftauthenticatorauthenticationmethodtarget) |
-| collection | A collection of users or groups who are enabled to use the authentication method. |
-
-#### MicrosoftAuthenticator includeTarget properties
-
-**PROPERTIES**
-
-| Property | Type | Description |
-|-||-|
-| authenticationMode | String | Possible values are:<br>**any**: Both passwordless phone sign-in and traditional second factor notifications are allowed.<br>**deviceBasedPush**: Only passwordless phone sign-in notifications are allowed.<br>**push**: Only traditional second factor push notifications are allowed. |
-| ID | String | Object ID of an Azure AD user or group. |
-| targetType | authenticationMethodTargetType | Possible values are: **user**, **group**.<br>You can only set one group or user for additional context. |
-| displayAppInformationRequiredState | advancedConfigState | Possible values are:<br>**enabled** explicitly enables the feature for the selected group.<br>**disabled** explicitly disables the feature for the selected group.<br>**default** allows Azure AD to manage whether the feature is enabled or not for the selected group. |
+1. On the **Configure** tab, for **Show application name in push and passwordless notifications (Preview)**, change **Status** to **Enabled**, choose who to include or exclude from the policy, and click **Save**.
->[!NOTE]
->Additional context can only be enabled for a single group.
-
-#### Example of how to enable additional context for all users
-
-Change the **displayAppInformationRequiredState** from **default** to **enabled**.
-
-The value of Authentication Mode can be either **any** or **push**, depending on whether or not you also want to enable passwordless phone sign-in. In these examples, we'll use **any**, but if you don't want to allow passwordless, use **push**.
-
-You need to PATCH the entire includeTarget to prevent overwriting any previous configuration. In that case, do a GET first, update only the relevant fields, and then PATCH. The following example only shows the update to the **displayAppInformationRequiredState**.
-
-```json
-//Retrieve your existing policy via a GET.
-//Leverage the Response body to create the Request body section. Then update the Request body similar to the Request body as shown below.
-//Change the Query to PATCH and Run query
-
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
- "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
- "id": "MicrosoftAuthenticator",
- "state": "enabled",
- "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
- "includeTargets": [
- {
- "targetType": "group",
- "id": "all_users",
- "authenticationMode": "any",
- "displayAppInformationRequiredState": "enabled",
- "numberMatchingRequiredState": "enabled"
- }
- ]
-}
-
-```
-
-To confirm this update has applied, run the GET request below using the endpoint below.
-GET - https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
-
-
-#### Example of how to enable additional context for a single group
-
-Change the **displayAppInformationRequiredState** value from **default** to **enabled.**
-Change the **id** from **all_users** to the ObjectID of the group from the Azure AD portal.
-
-You need to PATCH the entire includeTarget to prevent overwriting any previous configuration. We recommend that you do a GET first, and then update only the relevant fields and then PATCH. The example below only shows the update to the **displayAppInformationRequiredState**.
-
-```json
-//Copy paste the below in the Request body section as shown below.
-//Leverage the Response body to create the Request body section. Then update the Request body similar to the Request body as shown below.
-//Change query to PATCH and run query
-
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
- "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
- "id": "MicrosoftAuthenticator",
- "state": "enabled",
- "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
- "includeTargets": [
- {
- "targetType": "group",
- "id": "1ca44590-e896-4dbe-98ed-b140b1e7a53a",
- "authenticationMode": "any",
- "displayAppInformationRequiredState": "enabled",
- "numberMatchingRequiredState": "enabled"
- }
- ]
-}
-```
-
-To verify, RUN GET again and verify the ObjectID
-GET https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
-
-
-#### Example of error when enabling additional context for multiple groups
-
-The PATCH request will fail with 400 Bad Request and the error will contain the following message:
-
-`Persistance of policy failed with error: You cannot enable multiple targets for feature 'Require Display App Information'. Choose only one of the following includeTargets to enable: aede0efe-c1b4-40dc-8ae7-2c402f23e312,aede0efe-c1b4-40dc-8ae7-2c402f23e317.`
-
-### Test the end-user experience
-Add the test user account to the Authenticator app. The account **doesn't** need to be enabled for phone sign-in.
-
-See the end-user experience of an Authenticator multifactor authentication push notification with additional context by signing into aka.ms/MFAsetup.
-
-### Turn off additional context
-
-To turn off additional context, you'll need to PATCH remove **displayAppInformationRequiredState** from **enabled** to **disabled**/**default**.
-
-```json
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
- "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
- "id": "MicrosoftAuthenticator",
- "state": "enabled",
- "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
- "includeTargets": [
- {
- "targetType": "group",
- "id": "all_users",
- "authenticationMode": "any",
- "displayAppInformationRequiredState": "enabled",
- "numberMatchingRequiredState": "default"
- }
- ]
-}
-```
-
-## Enable additional context in the portal
-
-To enable additional context in the Azure AD portal, complete the following steps:
-
-1. Sign in to the [Azure portal](https://portal.azure.com) using an account with *global administrator* permissions.
-1. Search for and select **Azure Active Directory**, then choose **Security** from the menu on the left-hand side.
-1. Under the **Manage** menu header, select **Authentication methods** > **Policies**.
-1. From the list of available authentication methods, select **Microsoft Authenticator**.
-
- ![Screenshot that shows how to select the Microsoft Authenticator policy.](./media/how-to-mfa-additional-context/select-microsoft-authenticator-policy.png)
-
-1. Select the target users, select the three dots on the right, and choose **Configure**.
-
- ![Screenshot of configuring Microsoft authenticator additional context.](./media/how-to-mfa-additional-context/configure-microsoft-authenticator.png)
+ :::image type="content" border="true" source="./media/how-to-mfa-additional-context/enable-app-name.png" alt-text="Screenshot of how to enable application name.":::
+
+ Then do the same for **Show geographic location in push and passwordless notifications (Preview)**.
+
+ :::image type="content" border="true" source="./media/how-to-mfa-additional-context/enable-geolocation.png" alt-text="Screenshot of how to enable geographic location.":::
-1. Select the **Authentication mode**, and then for **Show additional context in notifications (Preview)**, select **Enable**, and then select **Done**.
+ You can configure application name and geographic location separately. For example, the following policy enables application name and geographic location for all users but excludes the Operations group from seeing geographic location.
- ![Screenshot of enabling additional context.](media/howto-authentication-passwordless-phone/enable-additional-context.png)
+ :::image type="content" border="true" source="./media/how-to-mfa-additional-context/exclude.png" alt-text="Screenshot of how to enable application name and geographic location separately.":::
## Known issues
-Additional context isn't supported for Network Policy Server (NPS).
+Additional context is not supported for Network Policy Server (NPS) or Active Directory Federation Services (AD FS).
## Next steps
-[Authentication methods in Azure Active Directory - Microsoft Authenticator app](concept-authentication-authenticator-app.md)
+[Authentication methods in Azure Active Directory - Microsoft Authenticator app](concept-authentication-authenticator-app.md)
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
# How to use number matching in multifactor authentication (MFA) notifications (Preview) - Authentication Methods Policy
-This article covers how to enable number matching in Microsoft Authenticator push notifications to improve user sign-in security.
+This article covers how to enable number matching in Microsoft Authenticator push notifications to improve user sign-in security. The schema for the API to enable number match is currently being updated. **While the API is updated over the next two weeks, you should use only the Azure AD portal to enable number match.**
>[!NOTE]
->Number matching is a key security upgrade to traditional second factor notifications in the Authenticator app that will be enabled by default for all tenants a few months after general availability (GA).<br>
+>Number matching is a key security upgrade to traditional second factor notifications in Microsoft Authenticator that will be enabled by default for all tenants a few months after general availability (GA).<br>
>We highly recommend enabling number matching in the near-term for improved sign-in security. ## Prerequisites
-Your organization will need to enable Authenticator (traditional second factor) push notifications for some users or groups using the new Authentication Methods Policy API. If your organization is using ADFS adapter or NPS extensions, please upgrade to the latest versions for a consistent experience.
+Your organization will need to enable Authenticator (traditional second factor) push notifications for some users or groups by using only the Azure AD portal. The new Authentication Methods Policy API will soon be ready as another configuration option. If your organization is using the AD FS adapter or NPS extensions, please upgrade to the latest versions for a consistent experience.
## Number matching
Number matching is available for the following scenarios. When enabled, all scen
>[!NOTE]
>For passwordless users, enabling or disabling number matching has no impact because it's already part of the passwordless experience.
+Number matching is not supported for Apple Watch notifications. Apple Watch users need to use their phone to approve notifications when number matching is enabled.
+
### Multifactor authentication
When a user responds to an MFA push notification using the Authenticator app, they'll be presented with a number. They need to type that number into the app to complete the approval.
To create the registry key that overrides push notifications:
Value = TRUE
1. Restart the NPS Service.
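Pulling those fragments together, the override looks roughly like the following (a sketch; the key path and value name are taken from the NPS extension's documented settings location and aren't shown in this excerpt):

```
Path : HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureMfa
Name : OVERRIDE_NUMBER_MATCHING_WITH_OTP
Type : String
Value: TRUE
```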
-### Policy schema changes
-
->[!NOTE]
->In Graph Explorer, ensure you've consented to the **Policy.Read.All** and **Policy.ReadWrite.AuthenticationMethod** permissions.
-
-Identify your single target group for the schema configuration. Then use the following API endpoint to change the numberMatchingRequiredState property to **enabled**:
-
-https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
--
-#### MicrosoftAuthenticatorAuthenticationMethodConfiguration properties
-
-**PROPERTIES**
+## Enable number matching
-| Property | Type | Description |
-|||-|
-| ID | String | The authentication method policy identifier. |
-| state | authenticationMethodState | Possible values are: **enabled**<br>**disabled** |
-
-**RELATIONSHIPS**
+To enable number matching, complete the following steps:
-| Relationship | Type | Description |
-|--||-|
-| includeTargets | [microsoftAuthenticatorAuthenticationMethodTarget](/graph/api/resources/passwordlessmicrosoftauthenticatorauthenticationmethodtarget) |
-| collection | A collection of users or groups who are enabled to use the authentication method. |
-
-#### MicrosoftAuthenticator includeTarget properties
-
-**PROPERTIES**
+1. In the Azure AD portal, click **Security** > **Authentication methods** > **Microsoft Authenticator**.
+1. On the **Basics** tab, click **Yes** and **All users** to enable the policy for everyone, and change **Authentication mode** to **Push**.
-| Property | Type | Description |
-|-||-|
-| authenticationMode | String | Possible values are:<br>**any**: Both passwordless phone sign-in and traditional second factor notifications are allowed.<br>**deviceBasedPush**: Only passwordless phone sign-in notifications are allowed.<br>**push**: Only traditional second factor push notifications are allowed. |
-| ID | String | Object ID of an Azure AD user or group. |
-| targetType | authenticationMethodTargetType | Possible values are: **user**, **group**.<br>Note: You'll be able to only set one group or user for number matching. |
-| numberMatchingRequiredState | advancedConfigState | Possible values are:<br>**enabled** explicitly enables the feature for the selected group.<br>**disabled** explicitly disables the feature for the selected group.<br>**default** allows Azure AD to manage whether the feature is enabled or not for the selected group. |
+ Only users who are enabled for Microsoft Authenticator here can be included in the policy to require number matching for sign-in, or excluded from it. Users who aren't enabled for Microsoft Authenticator can't see a number match.
->[!NOTE]
->Number matching can only be enabled for a single group.
-
-#### Example of how to enable number matching for all users
-
-You'll need to change the **numberMatchingRequiredState** from **default** to **enabled**.
-
-Note that the value of Authentication Mode can be either **any** or **push**, depending on whether or not you also want to enable passwordless phone sign-in. In these examples, we'll use **any**, but if you don't want to allow passwordless, use **push**.
+ :::image type="content" border="true" source="./media/how-to-mfa-number-match/enable-settings-number-match.png" alt-text="Screenshot of how to enable Microsoft Authenticator settings for Push authentication mode.":::
->[!NOTE]
->For passwordless users, enabling or disabling number matching has no impact because it's already part of the passwordless experience.
-
-You might need to patch the entire includeTarget to prevent overwriting any previous configuration. In that case, do a GET first, update only the relevant fields, and then PATCH. The following example only shows the update to the **numberMatchingRequiredState**.
-
-```json
-//Retrieve your existing policy via a GET.
-//Leverage the Response body to create the Request body section. Then update the Request body similar to the Request body as shown below.
-//Change the Query to PATCH and Run query
-
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
- "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
- "id": "MicrosoftAuthenticator",
- "state": "enabled",
- "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
- "includeTargets": [
- {
- "targetType": "group",
- "id": "all_users",
- "authenticationMode": "any",
- "displayAppInformationRequiredState": "enabled",
- "numberMatchingRequiredState": "enabled"
- }
- ]
-}
-
-```
-
-To confirm this update has applied, please run the GET request below using the endpoint below.
-GET - https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
-
-
-#### Example of how to enable number matching for a single group
-
-We'll need to change the **numberMatchingRequiredState** value from **default** to **enabled.**
-You'll need to change the **id** from **all_users** to the ObjectID of the group from the Azure AD portal.
-
-You need to PATCH the entire includeTarget to prevent overwriting any previous configuration. We recommend that you do a GET first, and then update only the relevant fields and then PATCH. The example below only shows the update to the **numberMatchingRequiredState**.
-
-```json
-//Copy paste the below in the Request body section as shown below.
-//Leverage the Response body to create the Request body section. Then update the Request body similar to the Request body as shown below.
-//Change query to PATCH and run query
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
- "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
- "id": "MicrosoftAuthenticator",
- "state": "enabled",
- "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
- "includeTargets": [
- {
- "targetType": "group",
- "id": "1ca44590-e896-4dbe-98ed-b140b1e7a53a",
- "authenticationMode": "any",
- "displayAppInformationRequiredState": "enabled",
- "numberMatchingRequiredState": "enabled"
- }
- ]
-}
-```
-
-To verify, RUN GET again and verify the ObjectID
-GET https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
-
-
-#### Example of error when enabling number matching for multiple groups
-
-The PATCH request will fail with 400 Bad Request and the error will contain the following message:
--
-`Persistance of policy failed with error: You cannot enable multiple targets for feature 'Require Number Matching'. Choose only one of the following includeTargets to enable: aede0efe-c1b4-40dc-8ae7-2c402f23e312,aede0efe-c1b4-40dc-8ae7-2c402f23e317.`
-
-### Test the end user experience
-Add the test user account to the Authenticator app. The account does **not** need to be enabled for phone sign-in.
-
-See the end user experience of an Authenticator MFA push notification with number matching by signing into aka.ms/MFAsetup.
-
-### Turn off number matching
-
-To turn number matching off, you'll need to PATCH remove **numberMatchingRequiredState** from **enabled** to **disabled**/**default**.
-
-```json
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
- "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
- "id": "MicrosoftAuthenticator",
- "state": "enabled",
- "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
- "includeTargets": [
- {
- "targetType": "group",
- "id": "all_users",
- "authenticationMode": "any",
- "displayAppInformationRequiredState": "enabled",
- "numberMatchingRequiredState": "default"
- }
- ]
-}
-```
-
-## Enable number matching in the portal
-
-To enable number matching in the Azure portal, complete the following steps:
-
-1. Sign-in to the [Azure portal](https://portal.azure.com) using an account with *global administrator* permissions.
-1. Search for and select **Azure Active Directory**, then choose **Security** from the menu on the left-hand side.
-1. Under the **Manage** menu header, select **Authentication methods** > **Policies**.
-1. From the list of available authentication methods, select **Microsoft Authenticator**.
-
- ![Screenshot that shows how to select the Microsoft Authenticator policy.](./media/how-to-mfa-number-match/select-microsoft-authenticator-policy.png)
-
-1. Select the target users, select the three dots on the right, and choose **Configure**.
-
- ![Screenshot of configuring number match.](./media/how-to-mfa-number-match/configure-microsoft-authenticator.png)
-
-1. Select the **Authentication mode**, and then for **Require number matching (Preview)**, select **Enable**, and then select **Done**.
-
- ![Screenshot of enabling number match configuration.](media/howto-authentication-passwordless-phone/enable-number-matching.png)
-
->[!NOTE]
->[Least privileged role in Azure Active Directory - Multifactor authentication](../roles/delegate-by-task.md#multi-factor-authentication)
+1. On the **Configure** tab, for **Require number matching for push notifications (Preview)**, change **Status** to **Enabled**, choose who to include or exclude from number matching, and click **Save**.
-Number matching isn't supported for Apple Watch notifications. Apple Watch need to use their phone to approve notifications when number matching is enabled.
+ :::image type="content" border="true" source="./media/how-to-mfa-number-match/number-match.png" alt-text="Screenshot of how to enable number matching.":::
## Next steps
active-directory Active Directory How To Integrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-how-to-integrate.md
Integration with the Microsoft identity platform comes with benefits that do not
### Advanced security features
-**Multi-factor authentication.** The Microsoft identity platform provides native multi-factor authentication. IT administrators can require multi-factor authentication to access your application, so that you do not have to code this support yourself. Learn more about [Multi-Factor Authentication](https://azure.microsoft.com/documentation/services/multi-factor-authentication/).
+**Multi-factor authentication.** The Microsoft identity platform provides native multi-factor authentication. IT administrators can require multi-factor authentication to access your application, so that you do not have to code this support yourself. Learn more about [Multi-Factor Authentication](/azure/multi-factor-authentication/).
**Anomalous sign in detection.** The Microsoft identity platform processes more than a billion sign-ins a day, while using machine learning algorithms to detect suspicious activity and notify IT administrators of possible problems. By supporting the Microsoft identity platform sign-in, your application gets the benefit of this protection. Learn more about [viewing Azure Active Directory access report](../reports-monitoring/overview-reports.md).
Integration with the Microsoft identity platform comes with benefits that do not
[Get started writing code](v2-overview.md#getting-started).
-[Sign users in using the Microsoft identity platform](./authentication-vs-authorization.md)
+[Sign users in using the Microsoft identity platform](./authentication-vs-authorization.md)
active-directory Howto Create Service Principal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-create-service-principal-portal.md
Previously updated : 10/11/2021 Last updated : 08/26/2022
# Use the portal to create an Azure AD application and service principal that can access resources
-This article shows you how to create a new Azure Active Directory (Azure AD) application and service principal that can be used with the role-based access control. When you have applications, hosted services, or automated tools that needs to access or modify resources, you can create an identity for the app. This identity is known as a service principal. Access to resources is restricted by the roles assigned to the service principal, giving you control over which resources can be accessed and at which level. For security reasons, it's always recommended to use service principals with automated tools rather than allowing them to log in with a user identity.
+This article shows you how to create a new Azure Active Directory (Azure AD) application and service principal that can be used with the role-based access control. When you have applications, hosted services, or automated tools that need to access or modify resources, you can create an identity for the app. This identity is known as a service principal. Access to resources is restricted by the roles assigned to the service principal, giving you control over which resources can be accessed and at which level. For security reasons, it's always recommended to use service principals with automated tools rather than allowing them to log in with a user identity.
This article shows you how to use the portal to create the service principal in the Azure portal. It focuses on a single-tenant application where the application is intended to run within only one organization. You typically use single-tenant applications for line-of-business applications that run within your organization. You can also [use Azure PowerShell to create a service principal](howto-authenticate-service-principal-powershell.md).
To check your subscription permissions:
1. Search for and select **Subscriptions**, or select **Subscriptions** on the **Home** page.
- ![Search](./media/howto-create-service-principal-portal/select-subscription.png)
+ :::image type="content" source="media/howto-create-service-principal-portal/select-subscription.png" alt-text="Screenshot showing how to search for and select subscriptions.":::
1. Select the subscription you want to create the service principal in.
- ![Select subscription for assignment](./media/howto-create-service-principal-portal/select-one-subscription.png)
+ :::image type="content" source="media/howto-create-service-principal-portal/select-one-subscription.png" alt-text="Select subscription for assignment.":::
If you don't see the subscription you're looking for, select **global subscriptions filter**. Make sure the subscription you want is selected for the portal.
1. Select **My permissions**. Then, select **Click here to view complete access details for this subscription**.
- ![Select the subscription you want to create the service principal in](./media/howto-create-service-principal-portal/view-details.png)
+ :::image type="content" source="media/howto-create-service-principal-portal/view-details.png" alt-text="Select the subscription you want to create the service principal in.":::
1. Select **Role assignments** to view your assigned roles, and determine if you have adequate permissions to assign a role to an AD app. If not, ask your subscription administrator to add you to the User Access Administrator role. In the following image, the user is assigned the Owner role, which means that user has adequate permissions.
Let's jump straight into creating the identity. If you run into a problem, check
1. Select **Azure Active Directory**.
1. Select **App registrations**.
1. Select **New registration**.
-1. Name the application. Select a supported account type, which determines who can use the application. Under **Redirect URI**, select **Web** for the type of application you want to create. Enter the URI where the access token is sent to. You can't create credentials for a [Native application](../app-proxy/application-proxy-configure-native-client-application.md). You can't use that type for an automated application. After setting the values, select **Register**.
+1. Name the application, for example "example-app". Select a supported account type, which determines who can use the application. Under **Redirect URI**, select **Web** for the type of application you want to create. Enter the URI where the access token is sent to. You can't create credentials for a [Native application](../app-proxy/application-proxy-configure-native-client-application.md). You can't use that type for an automated application. After setting the values, select **Register**.
- ![Type a name for your application](./media/howto-create-service-principal-portal/create-app.png)
+ :::image type="content" source="media/howto-create-service-principal-portal/create-app.png" alt-text="Type a name for your application.":::
You've created your Azure AD application and service principal.
You can set the scope at the level of the subscription, resource group, or resource.
1. In the Azure portal, select the level of scope you wish to assign the application to. For example, to assign a role at the subscription scope, search for and select **Subscriptions**, or select **Subscriptions** on the **Home** page.
- ![For example, assign a role at the subscription scope](./media/howto-create-service-principal-portal/select-subscription.png)
+ :::image type="content" source="media/howto-create-service-principal-portal/select-subscription.png" alt-text="For example, assign a role at the subscription scope.":::
1. Select the particular subscription to assign the application to.
- ![Select subscription for assignment](./media/howto-create-service-principal-portal/select-one-subscription.png)
+ :::image type="content" source="media/howto-create-service-principal-portal/select-one-subscription.png" alt-text="Select subscription for assignment.":::
If you don't see the subscription you're looking for, select **global subscriptions filter**. Make sure the subscription you want is selected for the portal.
1. Select **Access control (IAM)**.
-1. Select Select **Add** > **Add role assignment** to open the **Add role assignment** page.
-1. Select the role you wish to assign to the application. For example, to allow the application to execute actions like **reboot**, **start** and **stop** instances, select the **Contributor** role. Read more about the [available roles](../../role-based-access-control/built-in-roles.md) By default, Azure AD applications aren't displayed in the available options. To find your application, search for the name and select it.
+1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
+1. In the **Role** tab, select the role you wish to assign to the application in the list. For example, to allow the application to execute actions like **reboot**, **start** and **stop** instances, select the **Contributor** role. Read more about the [available roles](../../role-based-access-control/built-in-roles.md).
- Assign the Contributor role to the application at the subscription scope. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+ Select the **Next** button to move to the **Members** tab. Select **Assign access to** > **User, group, or service principal**, and then select **Select members**. By default, Azure AD applications aren't displayed in the available options. To find your application, search by name (for example, "example-app") and select it from the returned list. Click the **Select** button. Then click the **Review + assign** button.
+ :::image type="content" source="media/howto-create-service-principal-portal/add-role-assignment.png" alt-text="Screenshot showing role assignment.":::
+
Your service principal is set up. You can start using it to run your scripts or apps. To manage your service principal (permissions, user consented permissions, see which users have consented, review permissions, see sign in information, and more), go to **Enterprise applications**. The next section shows how to get values that are needed when signing in programmatically.
When programmatically signing in, pass the tenant ID with your authentication re
1. From **App registrations** in Azure AD, select your application.
1. Copy the Directory (tenant) ID and store it in your application code.
- ![Copy the directory (tenant ID) and store it in your app code](./media/howto-create-service-principal-portal/copy-tenant-id.png)
+ :::image type="content" source="media/howto-create-service-principal-portal/copy-tenant-id.png" alt-text="Copy the directory (tenant ID) and store it in your app code.":::
The directory (tenant) ID can also be found in the default directory overview page.
1. Copy the **Application ID** and store it in your application code.
- ![Copy the application (client) ID](./media/howto-create-service-principal-portal/copy-app-id.png)
+ :::image type="content" source="media/howto-create-service-principal-portal/copy-app-id.png" alt-text="Copy the application (client) ID.":::
## Authentication: Two options
To upload the certificate:
1. Select **Certificates & secrets**.
1. Select **Certificates** > **Upload certificate** and select the certificate (an existing certificate or the self-signed certificate you exported).
- ![Select Upload certificate and select the one you want to add](./media/howto-create-service-principal-portal/upload-cert.png)
+ :::image type="content" source="media/howto-create-service-principal-portal/upload-cert.png" alt-text="Select Upload certificate and select the one you want to add.":::
1. Select **Add**.
If you choose not to use a certificate, you can create a new application secret.
After saving the client secret, the value of the client secret is displayed. Copy this value because you won't be able to retrieve the key later. You will provide the key value with the application ID to sign in as the application. Store the key value where your application can retrieve it.
- ![Copy the secret value because you can't retrieve this later](./media/howto-create-service-principal-portal/copy-secret.png)
+ :::image type="content" source="media/howto-create-service-principal-portal/copy-secret.png" alt-text="Copy the secret value because you can't retrieve this later.":::
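With the tenant ID, application (client) ID, and client secret in hand, a minimal sketch of signing in programmatically (assuming Python with the `azure-identity` and `azure-mgmt-resource` packages, and placeholder values):

```python
from azure.identity import ClientSecretCredential
from azure.mgmt.resource import ResourceManagementClient

# Values copied from the app registration; placeholders shown here.
credential = ClientSecretCredential(
    tenant_id="<directory-tenant-id>",
    client_id="<application-client-id>",
    client_secret="<client-secret-value>",
)

# The credential authenticates as the service principal, so access is limited
# to whatever roles were assigned to it (for example, Contributor).
client = ResourceManagementClient(credential, "<subscription-id>")
for group in client.resource_groups.list():
    print(group.name)
```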
## Configure access policies on resources
Keep in mind, you might need to configure additional permissions on resources that your application needs to access. For example, you must also [update a key vault's access policies](../../key-vault/general/security-features.md#privileged-access) to give your application access to keys, secrets, or certificates.
active-directory Groups Dynamic Rule Member Of https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-rule-member-of.md
# Group membership in a dynamic group (preview) in Azure Active Directory
-This feature preview in Azure Active Directory (Azure AD), part of Microsoft Entra, enables admins to create dynamic groups that populate by adding members of other groups using the memberOf attribute. Apps that couldn't read group-based membership previously in Azure AD can now read the entire membership of these new memberOf groups. Not only can these groups be used for apps, they can also be used for licensing assignments. The following diagram illustrates how you could create Dynamic-Group-A with members of Security-Group-X and Security-Group-Y. Members of the groups inside of Security-Group-X and Security-Group-Y don't become members of Dynamic-Group-A.
+This feature preview in Azure Active Directory (Azure AD), part of Microsoft Entra, enables admins to create dynamic groups and administrative units that populate by adding members of other groups using the memberOf attribute. Apps that couldn't read group-based membership previously in Azure AD can now read the entire membership of these new memberOf groups. Not only can these groups be used for apps, they can also be used for licensing assignments. The following diagram illustrates how you could create Dynamic-Group-A with members of Security-Group-X and Security-Group-Y. Members of the groups inside of Security-Group-X and Security-Group-Y don't become members of Dynamic-Group-A.
:::image type="content" source="./media/groups-dynamic-rule-member-of/member-of-diagram.png" alt-text="Diagram showing how the memberOf attribute works.":::
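For example, a membership rule for Dynamic-Group-A could look like this (a sketch; the object IDs are placeholders for Security-Group-X and Security-Group-Y):

```
user.memberof -any (group.objectId -in ['<objectId-of-Security-Group-X>', '<objectId-of-Security-Group-Y>'])
```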
Only administrators in the Global Administrator, Intune Administrator, or User A
- MemberOf can't be used with other rules. For example, a rule that states dynamic group A should contain members of group B and also should contain only users located in Redmond will fail.
- Dynamic group rule builder and validate feature can't be used for memberOf at this time.
- MemberOf can't be used with other operators. For example, you can't create a rule that states “Members Of group A can't be in Dynamic group B.”
+- The objects specified in the rule can't be administrative units.
## Getting started
active-directory Plan Hybrid Identity Design Considerations Data Protection Strategy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-data-protection-strategy.md
Once authenticated, the user principal name (UPN) is read from the authentication token.
Moving data from your on-premises datacenter into Azure Storage over an Internet connection may not always be feasible due to data volume, bandwidth availability, or other considerations. The [Azure Storage Import/Export Service](../../import-export/storage-import-export-service.md) provides a hardware-based option for placing/retrieving large volumes of data in blob storage. It allows you to send [BitLocker-encrypted](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn306081(v=ws.11)#BKMK_BL2012R2) hard disk drives directly to an Azure datacenter where cloud operators upload the contents to your storage account, or they can download your Azure data to your drives to return to you. Only encrypted disks are accepted for this process (using a BitLocker key generated by the service itself during the job setup). The BitLocker key is provided to Azure separately, thus providing out of band key sharing.
-Since data in transit can take place in different scenarios, is also relevant to know that Microsoft Azure uses [virtual networking](https://azure.microsoft.com/documentation/services/virtual-network/) to isolate tenantsΓÇÖ traffic from one another, employing measures such as host- and guest-level firewalls, IP packet filtering, port blocking, and HTTPS endpoints. However, most of AzureΓÇÖs internal communications, including infrastructure-to-infrastructure and infrastructure-to-customer (on-premises), are also encrypted. Another important scenario is the communications within Azure datacenters; Microsoft manages networks to assure that no VM can impersonate or eavesdrop on the IP address of another. TLS/SSL is used when accessing Azure Storage or SQL Databases, or when connecting to Cloud Services. In this case, the customer administrator is responsible for obtaining a TLS/SSL certificate and deploying it to their tenant infrastructure. Data traffic moving between Virtual Machines in the same deployment or between tenants in a single deployment via Microsoft Azure Virtual Network can be protected through encrypted communication protocols such as HTTPS, SSL/TLS, or others.
+Since data in transit can occur in different scenarios, it's also relevant to know that Microsoft Azure uses [virtual networking](/azure/virtual-network/) to isolate tenants' traffic from one another, employing measures such as host- and guest-level firewalls, IP packet filtering, port blocking, and HTTPS endpoints. However, most of Azure's internal communications, including infrastructure-to-infrastructure and infrastructure-to-customer (on-premises), are also encrypted. Another important scenario is the communications within Azure datacenters; Microsoft manages networks to ensure that no VM can impersonate or eavesdrop on the IP address of another. TLS/SSL is used when accessing Azure Storage or SQL Databases, or when connecting to Cloud Services. In this case, the customer administrator is responsible for obtaining a TLS/SSL certificate and deploying it to their tenant infrastructure. Data traffic moving between Virtual Machines in the same deployment or between tenants in a single deployment via Microsoft Azure Virtual Network can be protected through encrypted communication protocols such as HTTPS, SSL/TLS, or others.
Depending on how you answered the questions in [Determine data protection requirements](plan-hybrid-identity-design-considerations-dataprotection-requirements.md), you should be able to determine how you want to protect your data and how the hybrid identity solution can assist you with that process. The following table shows the options supported by Azure that are available for each data protection scenario.
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-version-history.md
This article helps you keep track of the versions that have been released and un
You can upgrade your Azure AD Connect server from all supported versions with the latest versions:
+You can download the latest version of Azure AD Connect 2.0 from the [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=47594). See the [release notes for the latest V2.0 release](reference-connect-version-history.md#20280).
The following table lists related topics:
Required permissions | For permissions required to apply an update, see [Azure A
## Retiring Azure AD Connect 1.x versions > [!IMPORTANT]
-> *On August 31, 2022, all 1.x versions of Azure AD Connect will be retired because they include SQL Server 2012 components that will no longer be supported.* Upgrade to the most recent version of Azure AD Connect (2.x version) by that date or [evaluate and switch to Azure AD cloud sync](../cloud-sync/what-is-cloud-sync.md).
+> *As of August 31, 2022, all 1.x versions of Azure AD Connect are retired because they include SQL Server 2012 components that will no longer be supported.* Upgrade to the most recent version of Azure AD Connect (2.x version) by that date or [evaluate and switch to Azure AD cloud sync](../cloud-sync/what-is-cloud-sync.md).
+> Azure AD Connect V1.x will stop working on December 31st due to the decommissioning of the ADAL library service on that date.
## Retiring Azure AD Connect 2.x versions > [!IMPORTANT]
Required permissions | For permissions required to apply an update, see [Azure A
> > The following versions will retire on 15 March 2023: >
+> - 2.0.91.0
> - 2.0.89.0
> - 2.0.88.0
> - 2.0.28.0
Required permissions | For permissions required to apply an update, see [Azure A
> > If you are not already using the latest release version of Azure AD Connect Sync, you should upgrade your Azure AD Connect Sync software before that date. >
-> This policy does not change the retirement of all 1.x versions of Azure AD Connect Sync on 31 August 2022, which is due to the retirement of the SQL Server 2012 and Azure AD Authentication Library (ADAL) components.
If you run a retired version of Azure AD Connect, it might unexpectedly stop working. You also might not have the latest security fixes, performance improvements, troubleshooting and diagnostic tools, and service enhancements. If you require support, we might not be able to provide you with the level of service your organization needs.
active-directory How To Assign Managed Identity Via Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-assign-managed-identity-via-azure-policy.md
For example, if the policy in this document is updating the managed identities o
## Next steps -- [Deploy Azure Monitor Agent](../../azure-monitor/agents/azure-monitor-agent-manage.md#using-azure-policy)
+- [Deploy Azure Monitor Agent](../../azure-monitor/agents/azure-monitor-agent-manage.md#use-azure-policy)
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
Users with this role can manage alerts and have global read-only access on secur
| [Office 365 Security & Compliance Center](https://support.office.com/article/About-Office-365-admin-roles-da585eea-f576-4f55-a1e0-87090b6aaa9d) | All permissions of the Security Reader role<br>View, investigate, and respond to security alerts | | [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/prepare-deployment) | All permissions of the Security Reader role<br>View, investigate, and respond to security alerts | | [Intune](/intune/role-based-access-control) | All permissions of the Security Reader role |
-| [Cloud App Security](/cloud-app-security/manage-admins) | All permissions of the Security Reader role |
+| [Microsoft Defender for Cloud Apps](/cloud-app-security/manage-admins) | All permissions of the Security Reader role<br>View, investigate, and respond to security alerts |
| [Microsoft 365 service health](/microsoft-365/enterprise/view-service-health) | View the health of Microsoft 365 services | > [!div class="mx-tableFixed"]
Identity Protection Center | Read all security reports and settings information
[Office 365 Security & Compliance Center](https://support.office.com/article/About-Office-365-admin-roles-da585eea-f576-4f55-a1e0-87090b6aaa9d) | View security policies<br>View and investigate security threats<br>View reports [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/prepare-deployment) | View and investigate alerts. When you turn on role-based access control in Microsoft Defender for Endpoint, users with read-only permissions such as the Azure AD Security Reader role lose access until they are assigned to a Microsoft Defender for Endpoint role. [Intune](/intune/role-based-access-control) | Views user, device, enrollment, configuration, and application information. Cannot make changes to Intune.
-[Cloud App Security](/cloud-app-security/manage-admins) | Has read permissions and can manage alerts
+[Microsoft Defender for Cloud Apps](/cloud-app-security/manage-admins) | Has read permissions.
[Microsoft 365 service health](/office365/enterprise/view-service-health) | View the health of Microsoft 365 services > [!div class="mx-tableFixed"]
active-directory Amazon Web Service Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/amazon-web-service-tutorial.md
To configure the integration of AWS Single-Account Access into Azure AD, you nee
1. In the **Add from the gallery** section, type **AWS Single-Account Access** in the search box. 1. Select **AWS Single-Account Access** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about O365 wizards [here](https://docs.microsoft.com/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
## Configure and test Azure AD SSO for AWS Single-Account Access
active-directory Atlassian Cloud Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/atlassian-cloud-tutorial.md
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * Atlassian Cloud single sign-on (SSO) enabled subscription.
-* To enable Security Assertion Markup Language (SAML) single sign-on for Atlassian Cloud products, you need to set up Atlassian Access. Learn more about [Atlassian Access]( https://www.atlassian.com/enterprise/cloud/identity-manager).
+* To enable Security Assertion Markup Language (SAML) single sign-on for Atlassian Cloud products, you need to set up Atlassian Access. Learn more about [Atlassian Access](https://www.atlassian.com/enterprise/cloud/identity-manager).
> [!NOTE] > This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
To configure the integration of Atlassian Cloud into Azure AD, you need to add A
1. In the **Add from the gallery** section, type **Atlassian Cloud** in the search box. 1. Select **Atlassian Cloud** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about O365 wizards [here](https://docs.microsoft.com/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
## Configure and test Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Select a Single sign-on method** page, select **SAML**.
- ![SAML in azure](./media/atlassian-cloud-tutorial/azure.png)
+ ![SAML in Azure](./media/atlassian-cloud-tutorial/azure.png)
1. On the **Set up Single Sign-On with SAML** page, scroll down to **Set Up Atlassian Cloud**.
active-directory Aws Single Sign On Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/aws-single-sign-on-tutorial.md
To configure the integration of AWS IAM Identity Center into Azure AD, you need
1. In the **Add from the gallery** section, type **AWS IAM Identity Center** in the search box. 1. Select **AWS IAM Identity Center** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about O365 wizards [here](https://docs.microsoft.com/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
## Configure and test Azure AD SSO for AWS IAM Identity Center
You can also use Microsoft My Apps to test the application in any mode. When you
## Next steps
-Once you configure AWS IAM Identity Center you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure AWS IAM Identity Center you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Cisco Anyconnect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cisco-anyconnect.md
To configure the integration of Cisco AnyConnect into Azure AD, you need to add
1. In the **Add from the gallery** section, type **Cisco AnyConnect** in the search box. 1. Select **Cisco AnyConnect** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about O365 wizards [here](https://docs.microsoft.com/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
## Configure and test Azure AD SSO for Cisco AnyConnect
active-directory Docusign Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/docusign-tutorial.md
To configure the integration of DocuSign into Azure AD, you must add DocuSign fr
1. In the **Add from the gallery** section, type **DocuSign** in the search box. 1. Select **DocuSign** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about O365 wizards [here](https://docs.microsoft.com/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
## Configure and test Azure AD SSO for DocuSign
active-directory Fortigate Ssl Vpn Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fortigate-ssl-vpn-tutorial.md
To configure the integration of FortiGate SSL VPN into Azure AD, you need to add
1. In the **Add from the gallery** section, enter **FortiGate SSL VPN** in the search box. 1. Select **FortiGate SSL VPN** in the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about O365 wizards [here](https://docs.microsoft.com/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
## Configure and test Azure AD SSO for FortiGate SSL VPN
active-directory Google Apps Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/google-apps-tutorial.md
To configure the integration of Google Cloud / G Suite Connector by Microsoft in
1. In the **Add from the gallery** section, type **Google Cloud / G Suite Connector by Microsoft** in the search box. 1. Select **Google Cloud / G Suite Connector by Microsoft** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about O365 wizards [here](https://docs.microsoft.com/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
## Configure and test Azure AD single sign-on for Google Cloud / G Suite Connector by Microsoft
active-directory Saml Toolkit Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/saml-toolkit-tutorial.md
To configure the integration of Azure AD SAML Toolkit into Azure AD, you need to
1. In the **Add from the gallery** section, type **Azure AD SAML Toolkit** in the search box. 1. Select **Azure AD SAML Toolkit** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about O365 wizards [here](https://docs.microsoft.com/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
## Configure and test Azure AD SSO for Azure AD SAML Toolkit
active-directory Servicenow Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/servicenow-tutorial.md
To configure the integration of ServiceNow into Azure AD, you need to add Servic
1. In the **Add from the gallery** section, enter **ServiceNow** in the search box. 1. Select **ServiceNow** from results panel, and then add the app. Wait a few seconds while the app is added to your tenant.
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about O365 wizards [here](https://docs.microsoft.com/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
## Configure and test Azure AD SSO for ServiceNow
active-directory Slack Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/slack-tutorial.md
To configure the integration of Slack into Azure AD, you need to add Slack from
1. In the **Add from the gallery** section, type **Slack** in the search box. 1. Select **Slack** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about O365 wizards [here](https://docs.microsoft.com/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
## Configure and test Azure AD SSO for Slack
api-management Api Management Howto Developer Portal Customize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-developer-portal-customize.md
Learn more about the developer portal:
- [Azure API Management developer portal overview](api-management-howto-developer-portal.md) - [Migrate to the new developer portal](developer-portal-deprecated-migration.md) from the deprecated legacy portal.
+- Learn more about [customizing and extending](developer-portal-extend-custom-functionality.md) the functionality of the developer portal.
api-management Api Management Howto Developer Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-developer-portal.md
Migration to the new developer portal is described in the [dedicated documentati
Your API Management service includes a built-in, always up-to-date, **managed** developer portal. You can access it from the Azure portal interface.
-Customize and style the managed portal through the built-in, drag-and-drop visual editor:
+[Customize and style](api-management-howto-developer-portal-customize.md) the managed portal through the built-in, drag-and-drop visual editor:
* Use the visual editor to modify pages, media, layouts, menus, styles, or website settings. * Take advantage of built-in widgets to add text, images, buttons, and other objects that the portal supports out-of-the-box.
-* [Add custom HTML](developer-portal-faq.md#how-do-i-add-custom-html-to-my-developer-portal) - for example, add HTML for a form or to embed a video player. The custom code is rendered in an inline frame (iframe).
-
-See [this tutorial](api-management-howto-developer-portal-customize.md) for example customizations.
- > [!NOTE] > The managed developer portal receives and applies updates automatically. Changes that you've saved but not published to the developer portal remain in that state during an update.
-## <a name="managed-vs-self-hosted"></a> Extensibility
-
-In some cases you might need functionality beyond the customization and styling options supported in the managed developer portal. If you need to implement custom logic, which isn't supported out-of-the-box, you can modify the portal's codebase, available on [GitHub](https://github.com/Azure/api-management-developer-portal). For example, you could create a new widget to integrate with a third-party support system. When you implement new functionality, you can choose one of the following options:
--- **Self-host** the resulting portal outside of your API Management service. When you self-host the portal, you become its maintainer and you are responsible for its upgrades. Azure Support's assistance is limited only to the [basic setup of self-hosted portals](developer-portal-self-host.md).-- Open a pull request for the API Management team to merge new functionality to the **managed** portal's codebase.-
-For extensibility details and instructions, refer to the [GitHub repository](https://github.com/Azure/api-management-developer-portal) and the tutorial to [implement a widget](developer-portal-implement-widgets.md). The tutorial to [customize the managed portal](api-management-howto-developer-portal-customize.md) walks you through the portal's administrative panel, which is common for **managed** and **self-hosted** versions.
+## <a name="managed-vs-self-hosted"></a> Options to extend portal functionality
+In some cases you might need functionality beyond the customization and styling options provided in the managed developer portal. If you need to implement custom logic that isn't supported out of the box, you have [several options](developer-portal-extend-custom-functionality.md):
+* [Add custom HTML](developer-portal-extend-custom-functionality.md#use-custom-html-code-widget) directly through a developer portal widget designed for small customizations - for example, add HTML for a form or to embed a video player. The custom code is rendered in an inline frame (iframe).
+* [Create and upload a custom widget](developer-portal-extend-custom-functionality.md#create-and-upload-custom-widget) to develop and add more complex custom portal features.
+* [Self-host the portal](developer-portal-self-host.md), only if you need to make modifications to the core of the developer portal [codebase](https://github.com/Azure/api-management-developer-portal). This option requires advanced configuration. Azure Support's assistance is limited only to the basic setup of self-hosted portals.
+> [!NOTE]
+> Because the API Management developer portal codebase is maintained on [GitHub](https://github.com/Azure/api-management-developer-portal), you can open issues and make pull requests for the API Management team to merge new functionality at any time.
+>
## Next steps Learn more about the developer portal: - [Access and customize the managed developer portal](api-management-howto-developer-portal-customize.md)
+- [Extend functionality of the managed developer portal](developer-portal-extend-custom-functionality.md)
- [Set up self-hosted version of the portal](developer-portal-self-host.md)-- [Implement your own widget](developer-portal-implement-widgets.md) Browse other resources:
api-management Developer Portal Extend Custom Functionality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-extend-custom-functionality.md
+
+ Title: Add custom functionality to the Azure API Management developer portal
+
+description: How to customize the managed API Management developer portal with custom functionality such as custom widgets.
++ Last updated : 08/29/2022++++
+# Extend the developer portal with custom features
+
+The API Management [developer portal](api-management-howto-developer-portal.md) features a visual editor and built-in widgets so that you can customize and style the portal's appearance. However, you may need to customize the developer portal further with custom functionality. For example, you might want to integrate your developer portal with a support system that involves adding a custom interface. This article explains ways to add custom functionality such as custom widgets to your API Management developer portal.
+
+The following table summarizes three options, with links to more detail.
++
+|Method |Description |
+|||
+|[Custom HTML code widget](#use-custom-html-code-widget) | - Lightweight solution for API publishers to add custom logic for basic use cases<br/><br/>- Copy and paste custom HTML code into a form, and the developer portal renders it in an iframe |
+|[Create and upload custom widget](#create-and-upload-custom-widget) | - Developer solution for more advanced widget use cases<br/><br/>- Requires local implementation in React, Vue, or plain TypeScript<br/><br/>- Widget scaffold and tools provided to help developers create a widget and upload it to the developer portal<br/><br/>- Supports workflows for source control, versioning, and code reuse |
+|[Self-host developer portal](developer-portal-self-host.md) | - Legacy extensibility option for customers who need to customize source code of the entire portal core<br/><br/> - Gives complete flexibility for customizing portal experience<br/><br/>- Requires advanced configuration<br/><br/>- Customer responsible for managing complete code lifecycle: fork code base, develop, deploy, host, patch, and upgrade |
++
+## Use Custom HTML code widget
+
+The managed developer portal includes a **Custom HTML code** widget where you can insert HTML code for small portal customizations. For example, use custom HTML to embed a video or to add a form. The portal renders the custom widget in an inline frame (iframe).
+
+1. In the administrative interface for the developer portal, go to the page or section where you want to insert the widget.
+1. Select the grey "plus" (**+**) icon that appears when you hover the pointer over the page.
+1. In the **Add widget** window, select **Custom HTML code**.
+
+ :::image type="content" source="media/developer-portal-extend-custom-functionality/add-custom-html-code-widget.png" alt-text="Screenshot that shows how to add a widget for custom HTML code in the developer portal.":::
+1. Select the "pencil" icon to customize the widget.
+1. Enter a **Width** and **Height** (in pixels) for the widget.
+1. To inherit styles from the developer portal (recommended), select **Apply developer portal styling**.
+ > [!NOTE]
+ > If this setting isn't selected, the embedded elements will be plain HTML controls, without the styles of the developer portal.
+
+ :::image type="content" source="media/developer-portal-extend-custom-functionality/configure-html-custom-code.png" alt-text="Screenshot that shows how to configure HTML custom code in the developer portal.":::
+1. Replace the sample **HTML code** with your custom content.
+1. When configuration is complete, close the window.
+1. Save your changes, and [republish the portal](api-management-howto-developer-portal-customize.md#publish).
+
+> [!NOTE]
+> Microsoft does not support the HTML code you add in the Custom HTML Code widget.
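+
+For example, here's a minimal sketch of HTML you might paste into the widget to embed a simple feedback form (the endpoint URL and field names are hypothetical placeholders):
+
+```html
+<!-- Hypothetical feedback form; replace the action URL with your own endpoint -->
+<form action="https://example.com/feedback" method="post">
+  <label for="comment">Send us feedback:</label>
+  <textarea id="comment" name="comment" rows="4"></textarea>
+  <button type="submit">Send</button>
+</form>
+```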
+
+## Create and upload custom widget
+
+### Prerequisites
+
+* Install [Node.js runtime](https://nodejs.org/en/) locally
+* Basic knowledge of programming and web development
+
+### Create widget
+
+1. In the administrative interface for the developer portal, select **Custom widgets** > **Create new custom widget**.
+1. Enter a widget name and choose a **Technology**. For more information, see [Widget templates](#widget-templates), later in this article.
+1. Select **Create widget**.
+1. Open a terminal, navigate to the location where you want to save the widget code, and run the following command to download the code scaffold:
+
+ ```
+ npx @azure/api-management-custom-widgets-scaffolder
+ ```
+1. Navigate to the newly created folder containing the widget's code scaffold.
+
+ ```
+ cd <name-of-widget>
+ ```
+
+1. Open the folder in your code editor of choice, such as VS Code.
+
+1. Install the dependencies and start the project:
+
+ ```
+ npm install
+ npm start
+ ```
+
+ Your browser should open a new tab with your developer portal connected to your widget in development mode.
+
+ > [!NOTE]
+ > If the tab doesn't open, do the following:
+ > 1. Make sure the development server started. To do that, check output on the console where you started the server in the previous step. It should display the port the server is running on (for example, `http://127.0.0.1:3001`).
+ > 1. Go to your API Management service in the Azure portal and open your developer portal with the administrative interface.
+ > 1. Append `/?MS_APIM_CW_localhost_port=3001` to the URL. Change the port number if your server runs on a different port.
+
+1. Implement the code of the widget and test it locally. The code of the widget is located in the `src` folder, in the following subfolders:
+
+ * **`app`** - Code for the widget component that visitors to the published developer portal see and interact with
+ * **`editor`** - Code for the widget component that you use in the administrative interface of the developer portal to edit widget settings
+
+ The `values.ts` file contains the default values and types of the widget's custom properties you can enable for editing.
+
+ :::image type="content" source="media/developer-portal-extend-custom-functionality/widget-custom-properties.png" alt-text="Screenshot of custom properties page in developer portal.":::
+
+ Custom properties let you adjust values in the custom widget's instance from the administrative user interface of the developer portal, without changing the code or redeploying the custom widget. This object needs to be passed to some of the widgets' helper functions.
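+
+As an illustration, a `values.ts` might look like the following sketch (the `title` and `itemsPerPage` properties are hypothetical examples, not part of the scaffold):
+
+```typescript
+// Hypothetical src/values.ts: types and defaults for editable custom properties
+export interface Values {
+  title: string
+  itemsPerPage: number
+}
+
+export const valuesDefault: Values = {
+  title: "My widget",
+  itemsPerPage: 10,
+}
+```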
+
+### Deploy the custom widget to the developer portal
+
+1. Specify the following values in the `deploy.js` file located in the root of your project:
+
+ * `resourceId` - Resource ID of your API Management service, in the following format: `subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ApiManagement/service/<api-management service-name>`
+
+ * `managementApiEndpoint` - Azure Management API endpoint (depends on your environment, typically `management.azure.com`)
+
+ * `apiVersion` - Optional, use to override the default management API version
+
+1. Run the following command:
+
+ ```
+ npm run deploy
+ ```
+
+ If prompted, sign in to your Azure account.
++
+The custom widget is now deployed to your developer portal. Using the portal's administrative interface, you can add it on pages in the developer portal and set values for any custom properties configured in the widget.
+
+### Publish the developer portal
+
+After you configure the widget in the administrative interface, [republish the portal](api-management-howto-developer-portal-customize.md#publish) to make the widget available in production.
+
+> [!NOTE]
+> * If you deploy updated widget code at a later date, the widget used in production doesn't update until you republish the developer portal.
+> * The widget's compiled code is associated with a specific portal *revision*. If you make a previous portal revision current, the custom widget associated with that revision is used.
+
+### Widget templates
+
+We provide templates for the following technologies you can use for the widget:
+
+* **TypeScript** (pure implementation without any framework)
+* **React**
+* **Vue**
+
+All templates are based on the TypeScript programming language.
+
+The React template contains prepared custom hooks in the `hooks.ts` file and established providers for sharing context through the component tree with dedicated `useSecrets`, `useValues`, and `useEditorValues` hooks.
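+
+For example, a runtime component in the React template might read editor-configured values through the `useValues` hook, along the lines of this sketch (which assumes the hypothetical `title` property from the `values.ts` sketch earlier in this article):
+
+```typescript
+// Minimal sketch of a React runtime component using the template's useValues hook
+import React from "react"
+import {useValues} from "./hooks"
+
+export const App = () => {
+  const values = useValues()
+  return <h1>{values.title}</h1>
+}
+```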
+
+### Use the `@azure/api-management-custom-widgets-tools` package
+
+This [npm package](https://www.npmjs.com/package/@azure/api-management-custom-widgets-tools) contains the following functions to help you develop your custom widget, including support for communication between the developer portal and your widget:
++
+|Function |Description |
+|||
+|[getValues](#azureapi-management-custom-widgets-toolsgetvalues) | Returns a JSON object containing values set in the widget editor combined with default values |
+|[getEditorValues](#azureapi-management-custom-widgets-toolsgeteditorvalues) | Returns a JSON object containing only values set in the widget editor |
+|[buildOnChange](#azureapi-management-custom-widgets-toolsbuildonchange) | Accepts a TypeScript type and returns a function to update the widget values. The returned function takes as parameter a JSON object with updated values and doesn't return anything.<br/><br/>Used internally in widget editor |
+|[askForSecrets](#azureapi-management-custom-widgets-toolsaskforsecrets) | Returns a JavaScript promise, which after resolution returns a JSON object of data needed to communicate with backend |
+|[deployNodeJs](#azureapi-management-custom-widgets-toolsdeploynodejs) | Deploys widget to blob storage |
+|[getWidgetData](#azureapi-management-custom-widgets-toolsgetwidgetdata) | Returns all data passed to your custom widget from the developer portal<br/><br/>Used internally in templates |
++
+#### `@azure/api-management-custom-widgets-tools/getValues`
+
+Function that returns a JSON object containing the values you've set in the widget editor combined with default values, passed as an argument.
+
+```JavaScript
+import {getValues} from "@azure/api-management-custom-widgets-tools/getValues"
+import {valuesDefault} from "./values"
+const values = getValues(valuesDefault)
+```
+
+It's intended to be used in the runtime (`app`) part of your widget.
+
+#### `@azure/api-management-custom-widgets-tools/getEditorValues`
+
+Function that works the same way as `getValues`, but returns only values you've set in the editor.
+
+It's intended to be used in the editor of your widget but also works in runtime.
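+
+A minimal sketch, assuming the same call shape as `getValues`:
+
+```JavaScript
+import {getEditorValues} from "@azure/api-management-custom-widgets-tools/getEditorValues"
+import {valuesDefault} from "./values"
+
+// Returns only the values explicitly set in the widget editor
+const editorValues = getEditorValues(valuesDefault)
+```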
+
+#### `@azure/api-management-custom-widgets-tools/buildOnChange`
+
+> [!NOTE]
+> This function is intended to be used only in the widget editor.
+
+Accepts a TypeScript type and returns a function to update the widget values. The returned function takes as parameter a JSON object with updated values and doesn't return anything.
+
+```JavaScript
+import {buildOnChange} from "@azure/api-management-custom-widgets-tools/buildOnChange"
+import {Values} from "./values"
+const onChange = buildOnChange<Values>()
+onChange({fieldKey: 'newValue'})
+```
+
+#### `@azure/api-management-custom-widgets-tools/askForSecrets`
+
+This function returns a JavaScript promise, which after resolution returns a JSON object of data needed to communicate with the backend. `token` is needed for authentication. `userId` is needed to query user-specific resources. Those values might be undefined when the portal is viewed by an anonymous user. The `Secrets` object also contains `managementApiUrl`, which is the URL of your portal's backend, and `apiVersion`, which is the API version currently used by the developer portal.
+
+> [!CAUTION]
+> Manage and use the token carefully. Anyone who has it can access data in your API Management service.
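+
+The following sketch shows one way you might consume the resolved `Secrets` object; the request shape is illustrative, and the exact backend routes depend on your scenario:
+
+```JavaScript
+import {askForSecrets} from "@azure/api-management-custom-widgets-tools/askForSecrets"
+
+async function loadUserSpecificData() {
+  const secrets = await askForSecrets()
+
+  // token and userId can be undefined when an anonymous user views the portal
+  if (!secrets.token || !secrets.userId) return undefined
+
+  // Illustrative request against the portal's backend
+  const response = await fetch(
+    `${secrets.managementApiUrl}/users/${secrets.userId}?api-version=${secrets.apiVersion}`,
+    {headers: {Authorization: secrets.token}}
+  )
+  return response.json()
+}
+```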
++
+#### `@azure/api-management-custom-widgets-tools/deployNodeJs`
+
+This function deploys your widget to your blob storage. In all templates, it's preconfigured in the `deploy.js` file.
+
+It accepts three arguments by default:
+
+* `serviceInformation` - Information about your Azure service:
+
+ * `resourceId` - Resource ID of your API Management service, in the following format: `subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ApiManagement/service/<api-management service-name>`
+
+ * `managementApiEndpoint` - Azure management API endpoint (depends on your environment, typically `management.azure.com`)
+
+* ID of your widget - Name of your widget in "PC-friendly" format (Latin alphanumeric lowercase characters and dashes; `Contoso widget` becomes `contoso-widget`). You can find it in the `package.json` under the `name` key.
+
+* `fallbackConfigPath` - Path for the local `config.msapim.json` file, for example, `./static/config.msapim.json`
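+
+Putting these together, a `deploy.js` might call the function along these lines (a sketch with placeholder values; the file generated by the scaffold may differ):
+
+```JavaScript
+import {deployNodeJs} from "@azure/api-management-custom-widgets-tools/deployNodeJs"
+
+const serviceInformation = {
+  resourceId: "subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ApiManagement/service/<api-management-service-name>",
+  managementApiEndpoint: "management.azure.com",
+}
+
+// Arguments: service information, widget ID ("PC-friendly" name), fallback config path
+deployNodeJs(serviceInformation, "contoso-widget", "./static/config.msapim.json")
+```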
+
+#### `@azure/api-management-custom-widgets-tools/getWidgetData`
+
+> [!NOTE]
+> This function is used internally in templates. In most implementations you shouldn't need it otherwise.
+
+This function returns all data passed to your custom widget from the developer portal. It contains other data that might be useful in debugging or in more advanced scenarios. This API is expected to change with potential breaking changes. It returns a JSON object that contains the following keys:
+
+* `values` - All the values you've set in the editor, the same object that is returned by `getEditorValues`
+
+* `environment` - Current runtime environment for the widget
+
+* `origin` - Location origin of the developer portal
+
+* `instanceId` - ID of this instance of the widget
+
+### Add or remove custom properties
+
+Custom properties let you adjust values in the custom widget's code from the administrative user interface of the developer portal, without changing the code or redeploying the custom widget. By default, input fields for four custom properties are defined. You can add or remove other custom properties as needed.
+
+To add a custom property:
+
+1. In the file `src/values.ts`, add to the `Values` type the name of the property and type of the data it will save.
+1. In the same file, add a default value for it.
+1. Navigate to the `editor.html` or `editor/index` file (exact location depends on the framework you've chosen) and duplicate an existing input or add one yourself.
+1. Make sure the input field reports the changed value to the `onChange` function, which you can get from [`buildOnChange`](#azureapi-management-custom-widgets-toolsbuildonchange).
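+
+For example, with the React template, wiring a hypothetical `title` property to an editor input might look like this sketch:
+
+```typescript
+// Hypothetical editor input (React template) that reports changes via buildOnChange
+import React from "react"
+import {buildOnChange} from "@azure/api-management-custom-widgets-tools/buildOnChange"
+import {Values} from "./values"
+
+const applyChange = buildOnChange<Values>()
+
+export const TitleInput = ({value}: {value: string}) => (
+  <input
+    value={value}
+    onChange={(e: React.ChangeEvent<HTMLInputElement>) => applyChange({title: e.target.value})}
+  />
+)
+```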
+
+### (Optional) Use another framework
+
+To implement your widget using another JavaScript UI framework and libraries, you need to set up the project yourself with the following guidelines:
+
+* In most cases, we recommend that you start from the TypeScript template.
+* Install dependencies as in any other npm project.
+* If your framework of choice isn't compatible with [Vite build tool](https://vitejs.dev/), configure it so that it outputs compiled files to the `./dist` folder. Optionally, redefine where the compiled files are located by providing a relative path as the fourth argument for the [`deployNodeJs`](#azureapi-management-custom-widgets-toolsdeploynodejs) function.
+* For local development, the `config.msapim.json` file must be accessible at the URL `localhost:<port>/config.msapim.json` when the server is running.
+++
+## Next steps
+
+Learn more about the developer portal:
+
+- [Azure API Management developer portal overview](api-management-howto-developer-portal.md)
+- [Frequently asked questions](developer-portal-faq.md)
+- [Scaffolder of a custom widget for developer portal of Azure API Management service](https://www.npmjs.com/package/@azure/api-management-custom-widgets-scaffolder)
+- [Tools for working with custom widgets of developer portal of Azure API Management service](https://www.npmjs.com/package/@azure/api-management-custom-widgets-tools)
api-management Developer Portal Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-faq.md
You have the following options:
-* For certain situations, you can [add custom HTML](#how-do-i-add-custom-html-to-my-developer-portal) to add functionality to the portal.
+* For small customizations, use a built-in widget to [add custom HTML](developer-portal-extend-custom-functionality.md#use-custom-html-code-widget).
+
+* For larger customizations, [create and upload](developer-portal-extend-custom-functionality.md#create-and-upload-custom-widget) a custom widget to the managed developer portal.
+
+* [Self-host the developer portal](developer-portal-self-host.md), only if you need to make modifications to the core of the developer portal codebase.
* Open a feature request in the [GitHub repository](https://github.com/Azure/api-management-developer-portal).
-* [Implement the missing functionality yourself](developer-portal-implement-widgets.md).
+Learn more about [customizing and extending](developer-portal-extend-custom-functionality.md) the functionality of the developer portal.
-Learn more about developer portal [extensibility](api-management-howto-developer-portal.md#managed-vs-self-hosted).
## Can I have multiple developer portals in one API Management service?
You can generate *user-specific tokens* (including admin tokens) using the [Get
> [!NOTE] > The token must be URL-encoded.
-## How do I add custom HTML to my developer portal?
-
-The managed developer portal includes a **Custom HTML code** widget that enables you to insert HTML code for small portal customizations. For example, use custom HTML to embed a video or to add a form. The portal renders the custom widget in an inline frame (iframe).
-
-1. In the administrative interface for the developer portal, go to the page or section where you want to insert the widget.
-1. Select the grey "plus" (**+**) icon that appears when you hover the pointer over the page.
-1. In the **Add widget** window, select **Custom HTML code**.
-
- :::image type="content" source="media/developer-portal-faq/add-custom-html-code-widget.png" alt-text="Add widget for custom HTML code":::
-1. Select the "pencil" icon to customize the widget.
-1. Enter a **Width** and **Height** (in pixels) for the widget.
-1. To inherit styles from the developer portal (recommended), select **Apply developer portal styling**.
- > [!NOTE]
- > If this setting isn't selected, the embedded elements will be plain HTML controls, without the styles of the developer portal.
-
- :::image type="content" source="media/developer-portal-faq/configure-html-custom-code.png" alt-text="Configure HTML custom code":::
-1. Replace the sample **HTML code** with your custom content.
-1. When configuration is complete, close the window.
-1. Save your changes, and [republish the portal](api-management-howto-developer-portal-customize.md#publish).
-
-> [!NOTE]
-> Microsoft does not support the HTML code you add in the Custom HTML Code widget.
## Next steps Learn more about the developer portal: - [Access and customize the managed developer portal](api-management-howto-developer-portal-customize.md)
+- [Extend](developer-portal-extend-custom-functionality.md) the functionality of the developer portal.
- [Set up self-hosted version of the portal](developer-portal-self-host.md)-- [Implement your own widget](developer-portal-implement-widgets.md) Browse other resources:
api-management Developer Portal Implement Widgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-implement-widgets.md
-
Title: Implement widgets in the developer portal-
-description: Learn how to implement widgets that consume data from external APIs and display it on the API Management developer portal.
-- Previously updated : 04/15/2021----
-# Implement widgets in the developer portal
-
-In this tutorial, you implement a widget that consumes data from an external API and displays it on the API Management developer portal.
-
-The widget will retrieve session descriptions from the sample [Conference API](https://conferenceapi.azurewebsites.net/?format=json). The session identifier will be set through a designated widget editor.
-
-To help you in the development process, refer to the completed widget located in the `examples` folder of the API Management developer portal [GitHub repository](https://github.com/Azure/api-management-developer-portal/): `/examples/widgets/conference-session`.
--
-## Prerequisites
-
-* Set up a [local environment](developer-portal-self-host.md#step-1-set-up-local-environment) for the latest release of the developer portal.
-
-* You should understand the [Paperbits widget anatomy](https://paperbits.io/wiki/widget-anatomy).
--
-## Copy the scaffold
-
-Use a `widget` scaffold from the `/scaffolds` folder as a starting point to build the new widget.
-
-1. Copy the folder `/scaffolds/widget` to `/community/widgets`.
-1. Rename the folder to `conference-session`.
-
-## Rename exported module classes
-
-Rename the exported module classes by replacing the `Widget` prefix with `ConferenceSession` and change the binding name to avoid name collision, in these files:
--- `widget.design.module.ts`--- `widget.publish.module.ts`--- `widget.runtime.module.ts`
-
-For example, in the `widget.design.module.ts` file, change `WidgetDesignModule` to `ConferenceSessionDesignModule`:
-
-```typescript
-export class WidgetDesignModule implements IInjectorModule {
- public register(injector: IInjector): void {
- injector.bind("widget", WidgetViewModel);
- injector.bind("widgetEditor", WidgetEditorViewModel);
-```
-to
-
-```typescript
-export class ConferenceSessionDesignModule implements IInjectorModule {
- public register(injector: IInjector): void {
- injector.bind("conferenceSession", WidgetViewModel);
- injector.bind("conferenceSessionEditor", WidgetEditorViewModel);
-```
-
-
-## Register the widget
-
-Register the widget's modules in the portal's root modules by adding the following lines in the respective files:
-
-1. `src/apim.design.module.ts` - a module that registers design-time dependencies.
-
- ```typescript
- import { ConferenceSessionDesignModule } from "../community/widgets/conference-session/widget.design.module";
-
- ...
- injector.bindModule(new ConferenceSessionDesignModule());
- ```
-1. `src/apim.publish.module.ts` - a module that registers publish-time dependencies.
-
- ```typescript
- import { ConferenceSessionPublishModule } from "../community/widgets/conference-session/widget.publish.module";
-
- ...
-
- injector.bindModule(new ConferenceSessionPublishModule());
- ```
-
-1. `src/apim.runtime.module.ts` - runtime dependencies.
-
- ```typescript
- import { ConferenceSessionRuntimeModule } from "../community/widgets/conference-session/widget.runtime.module";
-
- ...
-
- injector.bindModule(new ConferenceSessionRuntimeModule());
- ```
-
-## Place the widget in the portal
-
-Now you're ready to plug in the duplicated scaffold and use it in developer portal.
-
-1. Run the `npm start` command.
-
-1. When the application loads, place the new widget on a page. You can find it under the name `Your widget` in the `Community` category in the widget selector.
-
- :::image type="content" source="media/developer-portal-implement-widgets/widget-selector.png" alt-text="Screenshot of widget selector":::
-
-1. Save the page by pressing **Ctrl**+**S** (or **⌘**+**S** on macOS).
-
- > [!NOTE]
- > In design-time, you can still interact with the website by holding the **Ctrl** (or **⌘**) key.
-
-## Add custom properties
-
-For the widget to fetch session descriptions, it needs to be aware of the session identifier. Add the `Session ID` property to the respective interfaces and classes:
-
-1. `widgetContract.ts` - data contract (data layer) defining how the widget configuration is persisted.
-
- ```typescript
- export interface WidgetContract extends Contract {
- sessionNumber: string;
- }
- ```
-
-1. `widgetModel.ts` - model (business layer) - a primary representation of the widget in the system. It's updated by editors and rendered by the presentation layer.
-
- ```typescript
- export class WidgetModel {
- public sessionNumber: string;
- }
- ```
-
-1. `ko/widgetViewModel.ts` - viewmodel (presentation layer) - a UI framework-specific object that developer portal renders with the HTML template.
-
- > [!NOTE]
- > You don't need to change anything in this file.
-
-## Configure binders
-
-Enable the flow of the `sessionNumber` from the data source to the widget presentation. Edit the `ModelBinder` and `ViewModelBinder` entities:
-
-1. `widgetModelBinder.ts` helps to prepare the model using data described in the contract.
-
- ```typescript
- export class WidgetModelBinder implements IModelBinder<WidgetModel> {
- public async contractToModel(contract: WidgetContract): Promise<WidgetModel> {
- model.sessionNumber = contract.sessionNumber || "107"; // 107 is the default session id
- ...
- }
-
- public modelToContract(model: WidgetModel): Contract {
- const contract: WidgetContract = {
- sessionNumber: model.sessionNumber
- ...
- };
- ...
- }
- }
- ```
-
-1. `ko/widgetViewModelBinder.ts` knows how developer portal needs to present the model (as a viewmodel) in a specific UI framework.
-
- ```typescript
- ...
- public async updateViewModel(model: WidgetModel, viewModel: WidgetViewModel): Promise<void> {
- viewModel.runtimeConfig(JSON.stringify({
- sessionNumber: model.sessionNumber
- }));
- }
- }
- ...
- ```
-
-## Adjust design-time widget template
-
-The components of each scope run independently. They have separate dependency injection containers, their own configuration, lifecycle, etc. They may even be powered by different UI frameworks (in this example it is Knockout JS).
-
-From the design-time perspective, any runtime component is just an HTML tag with certain attributes and/or content. Configuration if necessary is passed with plain markup. In simple cases, like in this example, the parameter is passed in the attribute. If the configuration is more complex, you could use an identifier of the required setting(s) fetched by a designated configuration provider (for example, `ISettingsProvider`).
-
-1. Update the `ko/widgetView.html` file:
-
- ```html
- <widget-runtime data-bind="attr: { params: runtimeConfig }"></widget-runtime>
- ```
-
- When developer portal runs the `attr` binding in *design-time* or *publish-time*, the resulting HTML is:
-
- ```html
- <widget-runtime params="{ sessionNumber: 107 }"></widget-runtime>
- ```
-
- Then, in runtime, `widget-runtime` component will read `sessionNumber` and use it in the initialization code (see below).
-
-1. Update the `widgetHandlers.ts` file to assign the session ID on creation:
-
- ```typescript
- ...
- createModel: async () => {
- var model = new WidgetModel();
- model.sessionNumber = "107";
- return model;
- }
- ...
- ```
-
-## Revise runtime view model
-
-Runtime components are the code running in the website itself. For example, in the API Management developer portal, they are all the scripts behind dynamic components (for example, *API details*, *API console*), handling operations such as code sample generation, sending requests, etc.
-
-Your runtime component's view model needs to have the following methods and properties:
--- The `sessionNumber` property (marked with `Param` decorator) used as a component input parameter passed from outside (the markup generated in design-time; see the previous step).-- The `sessionDescription` property bound to the widget template (see `widget-runtime.html` later in this article).-- The `initialize` method (with `OnMounted` decorator) invoked after the widget is created and all its parameters are assigned. It's a good place to read the `sessionNumber` and invoke the API using the `HttpClient`. The `HttpClient` is a dependency injected by the IoC (Inversion of Control) container.--- First, developer portal creates the widget and assigns all its parameters. Then it invokes the `initialize` method.-
- ```typescript
- ...
- import * as ko from "knockout";
- import { Component, RuntimeComponent, OnMounted, OnDestroyed, Param } from "@paperbits/common/ko/decorators";
- import { HttpClient, HttpRequest } from "@paperbits/common/http";
- ...
-
- export class WidgetRuntime {
- public readonly sessionDescription: ko.Observable<string>;
-
- constructor(private readonly httpClient: HttpClient) {
- ...
- this.sessionNumber = ko.observable();
- this.sessionDescription = ko.observable();
- ...
- }
-
- @Param()
- public readonly sessionNumber: ko.Observable<string>;
-
- @OnMounted()
- public async initialize(): Promise<void> {
- ...
- const sessionNumber = this.sessionNumber();
-
- const request: HttpRequest = {
- url: `https://conferenceapi.azurewebsites.net/session/${sessionNumber}`,
- method: "GET"
- };
-
- const response = await this.httpClient.send<string>(request);
- const sessionDescription = response.toText();
-
- this.sessionDescription(sessionDescription);
- ...
- }
- ...
- }
- ```
-
-## Tweak the widget template
-
-Update your widget to display the session description.
-
-Use a paragraph tag and a `markdown` (or `text`) binding in the `ko/runtime/widget-runtime.html` file to render the description:
-
-```html
-<p data-bind="markdown: sessionDescription"></p>
-```
-
-## Add the widget editor
-
-The widget is now configured to fetch the description of the session `107`. You specified `107` in the code as the default session. To check that you did everything right, run `npm start` and confirm that developer portal shows the description on the page.
-
-Now, carry out these steps to allow the user to set up the session ID through a widget editor:
-
-1. Update the `ko/widgetEditorViewModel.ts` file:
-
- ```typescript
- export class WidgetEditor implements WidgetEditor<WidgetModel> {
- public readonly sessionNumber: ko.Observable<string>;
-
- constructor() {
- this.sessionNumber = ko.observable();
- }
-
- @Param()
- public model: WidgetModel;
-
- @Event()
- public onChange: (model: WidgetModel) => void;
-
- @OnMounted()
- public async initialize(): Promise<void> {
- this.sessionNumber(this.model.sessionNumber);
- this.sessionNumber.subscribe(this.applyChanges);
- }
-
- private applyChanges(): void {
- this.model.sessionNumber = this.sessionNumber();
- this.onChange(this.model);
- }
- }
- ```
-
- The editor view model uses the same approach that you've seen previously, but there is a new property `onChange`, decorated with `@Event()`. It wires the callback to notify the listeners (in this case - a content editor) of changes to the model.
-
-1. Update the `ko/widgetEditorView.html` file:
-
- ```html
- <input type="text" class="form-control" data-bind="textInput: sessionNumber" />
- ```
-
-1. Run `npm start` again. You should be able to change `sessionNumber` in the widget editor. Change the ID to `108`, save the changes, and refresh the browser's tab. If you're experiencing problems, you may need to add the widget onto the page again.
-
- :::image type="content" source="media/developer-portal-implement-widgets/widget-editor.png" alt-text="Screenshot of widget editor":::
-
-## Rename the widget
-
-Change the widget name in the `constants.ts` file:
-
-```typescript
-...
-export const widgetName = "conference-session";
-export const widgetDisplayName = "Conference session";
-...
-```
-
-> [!NOTE]
-> If you're contributing the widget to the repository, the `widgetName` needs to be the same as its folder name and needs to be derived from the display name (lowercase and spaces replaced with dashes). The category should remain `Community`.
-
-## Next steps
--
-Learn more about the developer portal:
-
-- [Azure API Management developer portal overview](api-management-howto-developer-portal.md)
-
-- [Contribute widgets](developer-portal-widget-contribution-guidelines.md) - we welcome and encourage community contributions.
-
-- See [Use community widgets](developer-portal-use-community-widgets.md) to learn how to use widgets contributed by the community.
api-management Developer Portal Self Host https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-self-host.md
# Self-host the API Management developer portal
-This tutorial describes how to self-host the [API Management developer portal](api-management-howto-developer-portal.md). Self-hosting gives you flexibility to extend the developer portal with custom logic and widgets that dynamically customize pages on runtime. You can self-host multiple portals for your API Management instance, with different features. When you self-host a portal, you become its maintainer and you're responsible for its upgrades.
+This tutorial describes how to self-host the [API Management developer portal](api-management-howto-developer-portal.md). Self-hosting is one of several options to [extend the functionality](developer-portal-extend-custom-functionality.md) of the developer portal. For example, you can self-host multiple portals for your API Management instance, with different features. When you self-host a portal, you become its maintainer and you're responsible for its upgrades.
-The following steps show how to set up your local development environment, carry out changes in the developer portal, and publish and deploy them to an Azure storage account.
+> [!IMPORTANT]
+> Consider self-hosting the developer portal only when you need to modify the core of the developer portal's codebase. This option requires advanced configuration, including:
+> * Deployment to a hosting platform, optionally fronted by a solution such as CDN for increased availability and performance
+> * Maintaining and managing hosting infrastructure
+> * Manual updates, including for security patches, which may require you to resolve code conflicts when upgrading the codebase
If you have already uploaded or modified media files in the managed portal, see [Move from managed to self-hosted](#move-from-managed-to-self-hosted-developer-portal), later in this article.
api-management Developer Portal Use Community Widgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-use-community-widgets.md
description: Learn about community widgets for the API Management developer portal and how to inject and use them in your code. Previously updated : 03/25/2021 Last updated : 08/18/2022 # Use community widgets in the developer portal
-All developers place their community-contributed widgets in the `/community/widgets/` folder of the API Management developer portal [GitHub repository](https://github.com/Azure/api-management-developer-portal). Each has been accepted by the developer portal team. You can use the widgets by injecting them into your [self-hosted version](developer-portal-self-host.md) of the portal. The managed version of the developer portal doesn't currently support community widgets.
+All developers place their community-contributed widgets in the `/community/widgets/` folder of the API Management developer portal [GitHub repository](https://github.com/Azure/api-management-developer-portal). Each has been accepted by the developer portal team. You can use the widgets by injecting them into your managed developer portal or a [self-hosted version](developer-portal-self-host.md) of the portal.
> [!NOTE]
> The developer portal team thoroughly inspects contributed widgets and their dependencies. However, the team can't guarantee it's safe to load the widgets. Use your own judgment when deciding to use a widget contributed by the community. Refer to our [widget contribution guidelines](developer-portal-widget-contribution-guidelines.md#contribution-guidelines) to learn about our preventive measures.
-## Inject and use external widgets
+## Inject and use external widget - managed portal
+
+For guidance to create and use a development environment to scaffold and upload a custom widget, see [Create and upload custom widget](developer-portal-extend-custom-functionality.md#create-and-upload-custom-widget).
+
+## Inject and use external widget - self-hosted portal
1. Set up a [local environment](developer-portal-self-host.md#step-1-set-up-local-environment) for the latest release of the developer portal.
api-management Developer Portal Widget Contribution Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-widget-contribution-guidelines.md
description: Learn about recommended guidelines to follow when you contribute a widget to the API Management developer portal repository. Previously updated : 03/25/2021 Last updated : 08/18/2022
If you'd like to contribute a widget to the API Management developer portal [Git
1. Open a pull request to include your widget in the official repository.
-Your widget will inherit the repository's license. It will be available for [opt-in installation](developer-portal-use-community-widgets.md) in the self-hosted version of the portal. The developer portal team may decide to also include it in the managed version of the portal.
+Your widget will inherit the repository's license. It will be available for [opt-in installation](developer-portal-use-community-widgets.md) in either the managed developer portal or a [self-hosted version](developer-portal-self-host.md) of the portal. The developer portal team may decide to also include it in the managed version of the portal.
-Refer to the [widget implementation](developer-portal-implement-widgets.md) tutorial for an example of how to develop your own widget.
+For an example of how to develop your own widget and upload it to your developer portal, see [Create and upload custom widget](developer-portal-extend-custom-functionality.md#create-and-upload-custom-widget).
## Contribution guidelines
This guidance is intended to ensure the safety and privacy of our customers and
- For more information about contributions, see the API Management developer portal [GitHub repository](https://github.com/Azure/api-management-developer-portal/).
-- See [Implement widgets](developer-portal-implement-widgets.md) to learn how to develop your own widget, step by step.
+- See [Extend the developer portal with custom features](developer-portal-extend-custom-functionality.md) to learn about options to add custom functionality to the developer portal.
- See [Use community widgets](developer-portal-use-community-widgets.md) to learn how to use widgets contributed by the community.
app-service App Service Configuration References https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-configuration-references.md
+
+ Title: Use App Configuration references (Preview)
+description: Learn how to set up Azure App Service and Azure Functions to use Azure App Configuration references. Make App Configuration key-values available to your application code without changing it.
+Last updated : 06/21/2022
+# Use App Configuration references for App Service and Azure Functions (preview)
+
+This topic shows you how to work with configuration data in your App Service or Azure Functions application without requiring any code changes. [Azure App Configuration](../azure-app-configuration/overview.md) is a service to centrally manage application configuration. Additionally, it's an effective audit tool for your configuration values over time or releases.
+
+## Granting your app access to App Configuration
+
+To get started with using App Configuration references in App Service, you'll first need an App Configuration store, and your app must have permission to access the configuration key-values in that store.
+
+1. Create an App Configuration store by following the [App Configuration quickstart](../azure-app-configuration/quickstart-dotnet-core-app.md#create-an-app-configuration-store).
+
+1. Create a [managed identity](overview-managed-identity.md) for your application.
+
+ App Configuration references will use the app's system assigned identity by default, but you can [specify a user-assigned identity](#access-app-configuration-store-with-a-user-assigned-identity).
+
+1. Enable the newly created identity to have the right set of access permissions on the App Configuration store. Update the [role assignments for your store](../azure-app-configuration/howto-integrate-azure-managed-service-identity.md#grant-access-to-app-configuration). You'll be assigning the `App Configuration Data Reader` role to this identity, scoped over the resource, as shown in the sketch after this step.
+
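+For reference, here's a minimal Azure CLI sketch of that role assignment. The names `MyAppConfigStore`, `MyAppName`, and `MyResourceGroupName` are placeholders, not values defined in this article:
+
+```azurecli
+# Principal ID of the app's system-assigned identity (assumed to be enabled already).
+principalId=$(az webapp identity show -g MyResourceGroupName -n MyAppName --query principalId -o tsv)
+
+# Resource ID of the App Configuration store.
+storeId=$(az appconfig show -g MyResourceGroupName -n MyAppConfigStore --query id -o tsv)
+
+# Grant read access to configuration data, scoped to this store.
+az role assignment create --assignee "$principalId" --role "App Configuration Data Reader" --scope "$storeId"
+```
+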
+> [!NOTE]
+> App Configuration references do not yet support network-restricted configuration stores.
+
+### Access App Configuration Store with a user-assigned identity
+
+Some apps might need to reference configuration at creation time, when a system-assigned identity wouldn't yet be available. In these cases, a user-assigned identity can be created and given access to the App Configuration store in advance. Follow these steps to [create a user-assigned identity for the App Configuration store](../azure-app-configuration/overview-managed-identity.md#adding-a-user-assigned-identity).
+
+Once you have granted permissions to the user-assigned identity, follow these steps:
+
+1. [Assign the identity](./overview-managed-identity.md#add-a-user-assigned-identity) to your application if you haven't already.
+
+1. Configure the app to use this identity for App Configuration reference operations by setting the `keyVaultReferenceIdentity` property to the resource ID of the user-assigned identity. Though the property has keyVault in the name, the identity will apply to App Configuration references as well.
+
+ ```azurecli-interactive
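+ # Look up the user-assigned identity and app resource IDs, then PATCH the app's keyVaultReferenceIdentity property.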
+ userAssignedIdentityResourceId=$(az identity show -g MyResourceGroupName -n MyUserAssignedIdentityName --query id -o tsv)
+ appResourceId=$(az webapp show -g MyResourceGroupName -n MyAppName --query id -o tsv)
+ az rest --method PATCH --uri "${appResourceId}?api-version=2021-01-01" --body "{'properties':{'keyVaultReferenceIdentity':'${userAssignedIdentityResourceId}'}}"
+ ```
+
+This configuration will apply to all references from this App.
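+
+To double-check the assignment, here's a hedged sketch that reads the property back through the same ARM endpoint used above:
+
+```azurecli
+# Read back the property set by the PATCH call above.
+az rest --method GET --uri "${appResourceId}?api-version=2021-01-01" --query properties.keyVaultReferenceIdentity
+```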
+
+## Reference syntax
+
+An App Configuration reference is of the form `@Microsoft.AppConfiguration({referenceString})`, where `{referenceString}` is composed of the parts described below:
+
+> [!div class="mx-tdBreakAll"]
+> | Reference string parts | Description |
+> |--|--|
+> | Endpoint=_endpoint_; | **Endpoint** is a required part of the reference string. The value for **Endpoint** should be the URL of your App Configuration resource.|
+> | Key=_keyName_; | **Key** is a required part of the reference string. The value for **Key** should be the name of the key that you want to assign to the app setting.|
+> | Label=_label_ | The **Label** part is optional in the reference string. **Label** should be the value of the label for the key specified in **Key**.|
+
+For example, a complete reference with `Label` would look like the following:
+
+```
+@Microsoft.AppConfiguration(Endpoint=https://myAppConfigStore.azconfig.io; Key=myAppConfigKey; Label=myKeysLabel)
+```
+
+Alternatively without any `Label`:
+
+```
+@Microsoft.AppConfiguration(Endpoint=https://myAppConfigStore.azconfig.io; Key=myAppConfigKey)
+```
+
+Any configuration change to the app that results in a site restart causes an immediate refetch of all referenced key-values from the App Configuration store.
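+
+For instance, a hedged way to trigger such a refetch while testing is to restart the app (names are placeholders):
+
+```azurecli
+az webapp restart -g MyResourceGroupName -n MyAppName
+```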
+
+## Source Application Settings from App Config
+
+App Configuration references can be used as values for [Application Settings](configure-common.md#configure-app-settings), allowing you to keep configuration data in App Configuration instead of the site config. Application Settings and App Configuration key-values are both securely encrypted at rest. If you need centralized configuration management capabilities, configuration data should go into App Configuration.
+
+To use an App Configuration reference for an [app setting](configure-common.md#configure-app-settings), set the reference as the value of the setting. Your app can reference the Configuration value through its key as usual. No code changes are required.
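+
+For example, here's a sketch that sets an app setting to an App Configuration reference with the Azure CLI; the app, store, and key names are placeholders:
+
+```azurecli
+az webapp config appsettings set -g MyResourceGroupName -n MyAppName --settings 'MySetting=@Microsoft.AppConfiguration(Endpoint=https://myAppConfigStore.azconfig.io; Key=myAppConfigKey)'
+```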
+
+> [!TIP]
+> Most application settings using App Configuration references should be marked as slot settings, as you should have separate stores or labels for each environment.
+
+> [!NOTE]
+> Azure App Configuration also supports its own format for storing [Key Vault references](../azure-app-configuration/use-key-vault-references-dotnet-core.md). If the value of an App Configuration reference is a Key Vault reference in the App Configuration store, the secret value currently isn't retrieved from Key Vault. To use secrets from Key Vault in App Service or Functions, see [Key Vault references in App Service](app-service-key-vault-references.md).
+
+### Considerations for Azure Files mounting
+
+Apps can use the `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` application setting to mount Azure Files as the file system. This setting has additional validation checks to ensure that the app can be properly started. The platform relies on having a content share within Azure Files, and it assumes a default name unless one is specified via the `WEBSITE_CONTENTSHARE` setting. For any requests that modify these settings, the platform will attempt to validate if this content share exists, and it will attempt to create it if not. If it can't locate or create the content share, the request is blocked.
+
+If you use App Configuration references for this setting, this validation check will fail by default, as the connection itself can't be resolved while processing the incoming request. To avoid this issue, you can skip the validation by setting `WEBSITE_SKIP_CONTENTSHARE_VALIDATION` to "1". This setting will bypass all checks, and the content share won't be created for you. You should ensure it's created in advance.
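+
+A minimal sketch of setting that flag with the Azure CLI (app and group names are assumptions):
+
+```azurecli
+az webapp config appsettings set -g MyResourceGroupName -n MyAppName --settings WEBSITE_SKIP_CONTENTSHARE_VALIDATION=1
+```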
+
+> [!CAUTION]
+> If you skip validation and either the connection string or content share are invalid, the app will be unable to start properly and will only serve HTTP 500 errors.
+
+As part of creating the site, it's also possible that attempted mounting of the content share could fail due to managed identity permissions not being propagated or the virtual network integration not being set up. You can defer setting up Azure Files until later in the deployment template to accommodate the required setup. See [Azure Resource Manager deployment](#azure-resource-manager-deployment) to learn more. App Service will use a default file system until Azure Files is set up. Files aren't copied over, so make sure that no deployment attempts occur during the interim period before Azure Files is mounted.
+
+### Azure Resource Manager deployment
+
+When automating resource deployments through Azure Resource Manager templates, you may need to sequence your dependencies in a particular order to make this feature work. Of note, you'll need to define your application settings as their own resource, rather than using a `siteConfig` property in the site definition. This is because the site needs to be defined first so that the system-assigned identity is created with it and can be used in the access policy.
+
+Below is an example pseudo-template for a function app with App Configuration references:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "roleNameGuid": {
+ "type": "string",
+ "defaultValue": "[newGuid()]",
+ "metadata": {
+ "description": "A new GUID used to identify the role assignment"
+ }
+ }
+ },
+ "variables": {
+ "functionAppName": "DemoMBFunc",
+ "appConfigStoreName": "DemoMBAppConfig",
+ "resourcesRegion": "West US2",
+ "appConfigSku": "standard",
+ "FontNameKey": "FontName",
+ "FontColorKey": "FontColor",
+ "myLabel": "Test",
+ "App Configuration Data Reader": "[concat('/subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Authorization/roleDefinitions/', '516239f1-63e1-4d78-a4de-a74fb236a071')]"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Web/sites",
+ "name": "[variables('functionAppName')]",
+ "apiVersion": "2021-03-01",
+ "location": "[variables('resourcesRegion')]",
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ //...
+ "resources": [
+ {
+ "type": "config",
+ "name": "appsettings",
+ "apiVersion": "2021-03-01",
+ //...
+ "dependsOn": [
+ "[resourceId('Microsoft.Web/sites', variables('functionAppName'))]",
+ "[resourceId('Microsoft.AppConfiguration/configurationStores', variables('appConfigStoreName'))]"
+ ],
+ "properties": {
+ "WEBSITE_FONTNAME": "[concat('@Microsoft.AppConfiguration(Endpoint=', reference(resourceId('Microsoft.AppConfiguration/configurationStores', variables('appConfigStoreName'))).endpoint,'; Key=',variables('FontNameKey'),'; Label=',variables('myLabel'), ')')]",
+ "WEBSITE_FONTCOLOR": "[concat('@Microsoft.AppConfiguration(Endpoint=', reference(resourceId('Microsoft.AppConfiguration/configurationStores', variables('appConfigStoreName'))).endpoint,'; Key=',variables('FontColorKey'),'; Label=',variables('myLabel'), ')')]",
+ "WEBSITE_ENABLE_SYNC_UPDATE_SITE": "true"
+ //...
+ }
+ },
+ {
+ "type": "sourcecontrols",
+ "name": "web",
+ "apiVersion": "2021-03-01",
+ //...
+ "dependsOn": [
+ "[resourceId('Microsoft.Web/sites', variables('functionAppName'))]",
+ "[resourceId('Microsoft.Web/sites/config', variables('functionAppName'), 'appsettings')]"
+ ]
+ }
+ ]
+ },
+ {
+ "type": "Microsoft.AppConfiguration/configurationStores",
+ "name": "[variables('appConfigStoreName')]",
+ "apiVersion": "2019-10-01",
+ "location": "[variables('resourcesRegion')]",
+ "sku": {
+ "name": "[variables('appConfigSku')]"
+ },
+ //...
+ "dependsOn": [
+ "[resourceId('Microsoft.Web/sites', variables('functionAppName'))]"
+ ],
+ "properties": {
+ },
+ "resources": [
+ {
+ "type": "keyValues",
+ "name": "[variables('FontNameKey')]",
+ "apiVersion": "2021-10-01-preview",
+ //...
+ "dependsOn": [
+ "[resourceId('Microsoft.AppConfiguration/configurationStores', variables('appConfigStoreName'))]"
+
+ ],
+ "properties": {
+ "value": "Calibri",
+ "contentType": "application/json"
+ }
+ },
+ {
+ "type": "keyValues",
+ "name": "[variables('FontColorKey')]",
+ "apiVersion": "2021-10-01-preview",
+ //...
+ "dependsOn": [
+ "[resourceId('Microsoft.AppConfiguration/configurationStores', variables('appConfigStoreName'))]"
+
+ ],
+ "properties": {
+ "value": "Blue",
+ "contentType": "application/json"
+ }
+ }
+ ]
+ },
+ {
+ "scope": "[resourceId('Microsoft.AppConfiguration/configurationStores', variables('appConfigStoreName'))]",
+ "type": "Microsoft.Authorization/roleAssignments",
+ "apiVersion": "2020-04-01-preview",
+ "name": "[parameters('roleNameGuid')]",
+ "properties": {
+ "roleDefinitionId": "[variables('App Configuration Data Reader')]",
+ "principalId": "[reference(resourceId('Microsoft.Web/sites/', variables('functionAppName')), '2020-12-01', 'Full').identity.principalId]",
+ "principalType": "ServicePrincipal"
+ }
+ }
+ ]
+}
+```
+
+> [!NOTE]
+> In this example, the source control deployment depends on the application settings. This is normally unsafe behavior, as the app setting update behaves asynchronously. However, because we have included the `WEBSITE_ENABLE_SYNC_UPDATE_SITE` application setting, the update is synchronous. This means that the source control deployment will only begin once the application settings have been fully updated. For more app settings, see [Environment variables and app settings in Azure App Service](reference-app-settings.md).
+
+## Troubleshooting App Configuration References
+
+If a reference isn't resolved properly, the unresolved reference string is used as the value instead. For application settings, this means an environment variable is created whose value is the literal `@Microsoft.AppConfiguration(...)` syntax. This may cause an error, because the application was expecting an actual configuration value.
+
+Most commonly, this error could be due to a misconfiguration of the [App Configuration access policy](#granting-your-app-access-to-app-configuration). However, it could also be due to a syntax error in the reference or the Configuration key-value not existing in the store.
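+
+One quick check, sketched here with placeholder names, is to confirm that the key-value exists and that your reference string matches its key and label exactly:
+
+```azurecli
+az appconfig kv show -n MyAppConfigStore --key myAppConfigKey --label myKeysLabel
+```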
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Reference Key vault secrets from App Service](./app-service-key-vault-references.md)
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java.md
Community support for Java 7 will terminate on July 29th, 2022 and [Java 7 will
If a supported Java runtime will be retired, Azure developers using the affected runtime will be given a deprecation notice at least six months before the runtime is retired.
-- [Reasons to move to Java 11](/java/openjdk/reasons-to-move-to-java-11?bc=%2fazure%2fdeveloper%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fdeveloper%2fjava%2ffundamentals%2ftoc.json)
-- [Java 7 migration guide](/java/openjdk/transition-from-java-7-to-java-8?bc=%2fazure%2fdeveloper%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fdeveloper%2fjava%2ffundamentals%2ftoc.json)
+- [Reasons to move to Java 11](/java/openjdk/reasons-to-move-to-java-11?bc=/azure/developer/breadcrumb/toc.json&toc=/azure/developer/java/fundamentals/toc.json)
+- [Java 7 migration guide](/java/openjdk/transition-from-java-7-to-java-8?bc=/azure/developer/breadcrumb/toc.json&toc=/azure/developer/java/fundamentals/toc.json)
### Local development
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview.md
Azure App Service is a fully managed platform as a service (PaaS) offering for d
* **API and mobile features** - App Service provides turn-key CORS support for RESTful API scenarios, and simplifies mobile app scenarios by enabling authentication, offline data sync, push notifications, and more.
* **Serverless code** - Run a code snippet or script on-demand without having to explicitly provision or manage infrastructure, and pay only for the compute time your code actually uses (see [Azure Functions](../azure-functions/index.yml)).
-Besides App Service, Azure offers other services that can be used for hosting websites and web applications. For most scenarios, App Service is the best choice. For microservice architecture, consider [Azure Spring Apps](../spring-apps/index.yml) or [Service Fabric](https://azure.microsoft.com/documentation/services/service-fabric). If you need more control over the VMs on which your code runs, consider [Azure Virtual Machines](https://azure.microsoft.com/documentation/services/virtual-machines/). For more information about how to choose between these Azure services, see [Azure App Service, Virtual Machines, Service Fabric, and Cloud Services comparison](/azure/architecture/guide/technology-choices/compute-decision-tree).
+Besides App Service, Azure offers other services that can be used for hosting websites and web applications. For most scenarios, App Service is the best choice. For microservice architecture, consider [Azure Spring Apps](../spring-apps/index.yml) or [Service Fabric](/azure/service-fabric). If you need more control over the VMs on which your code runs, consider [Azure Virtual Machines](/azure/virtual-machines/). For more information about how to choose between these Azure services, see [Azure App Service, Virtual Machines, Service Fabric, and Cloud Services comparison](/azure/architecture/guide/technology-choices/compute-decision-tree).
## App Service on Linux
app-service Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/17/2022 Last updated : 08/29/2022
compliant with the specific standard.
## Release notes
+### August 2022
+- **App Service apps should only be accessible over HTTPS**
+ - Update scope of policy to remove slots
+ - Creation of "App Service app slots should only be accessible over HTTPS" to monitor slots
+ - Add "Deny" effect
+ - Creation of "Configure App Service apps to only be accessible over HTTPS" for enforcement of policy
+- **App Service app slots should only be accessible over HTTPS**
+ - New policy created
+- **Configure App Service apps to only be accessible over HTTPS**
+ - New policy created
+- **Configure App Service app slots to only be accessible over HTTPS**
+ - New policy created
+
### July 2022
- Deprecation of the following policies:
application-gateway Application Gateway Ilb Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-ilb-arm.md
If you want to configure SSL offload, see [Configure an application gateway for
If you want more information about load balancing options in general, see:
-* [Azure Load Balancer](https://azure.microsoft.com/documentation/services/load-balancer/)
-* [Azure Traffic Manager](https://azure.microsoft.com/documentation/services/traffic-manager/)
+* [Azure Load Balancer](/azure/load-balancer/)
+* [Azure Traffic Manager](/azure/traffic-manager/)
applied-ai-services Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/label-tool.md
keywords: document processing
>[!TIP]
>
-> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio ](https://formrecognizer.appliedai.azure.com/studio).
+> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
> * The v3.0 Studio supports any model trained with v2.1 labeled data.
> * You can refer to the [API migration guide](v3-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
> * *See* our [**REST API**](quickstarts/get-started-v3-sdk-rest-api.md) or [**C#**](quickstarts/get-started-v3-sdk-rest-api.md), [**Java**](quickstarts/get-started-v3-sdk-rest-api.md), [**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md), or [Python](quickstarts/get-started-v3-sdk-rest-api.md) SDK quickstarts to get started with the V3.0.
automanage Tutorial Create Assignment Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/tutorial-create-assignment-python.md
In this tutorial, you'll create a resource group and a virtual machine. You'll t
## Prerequisites

- [Python](https://www.python.org/downloads/)
-- [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli-windows?tabs=azure-cli) or [Azure PowerShell](https://docs.microsoft.com/powershell/azure/install-az-ps)
+- [Azure CLI](/cli/azure/install-azure-cli-windows?tabs=azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps)
## Create resources
automation Change Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/change-tracking.md
If you don't see your machine in query results, it hasn't recently checked in. T
If your machine shows up in the query results, verify the scope configuration. See [Targeting monitoring solutions in Azure Monitor](../../azure-monitor/insights/solution-targeting.md).
-For more troubleshooting of this issue, see [Issue: You are not seeing any Linux data](../../azure-monitor/agents/agent-linux-troubleshoot.md#issue-you-are-not-seeing-any-linux-data).
+For more troubleshooting of this issue, see [Issue: You are not seeing any Linux data](../../azure-monitor/agents/agent-linux-troubleshoot.md#issue-you-arent-seeing-any-linux-data).
##### Log Analytics agent for Linux not configured correctly
azure-app-configuration Concept App Configuration Event https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-app-configuration-event.md
Title: Reacting to Azure App Configuration key-value events
description: Use Azure Event Grid to subscribe to App Configuration events, which allow applications to react to changes in key-values without the need for complicated code. -+ Previously updated : 02/20/2020 Last updated : 08/30/2022
# Reacting to Azure App Configuration events
-Azure App Configuration events enable applications to react to changes in key-values. This is done without the need for complicated code or expensive and inefficient polling services. Instead, events are pushed through [Azure Event Grid](https://azure.microsoft.com/services/event-grid/) to subscribers such as [Azure Functions](https://azure.microsoft.com/services/functions/), [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/), or even to your own custom http listener. Critically, you only pay for what you use.
+Azure App Configuration events enable applications to react to changes in key-values. This is done without the need for complicated code or expensive and inefficient polling services. Instead, events are pushed through [Azure Event Grid](https://azure.microsoft.com/services/event-grid/) to subscribers, such as [Azure Functions](https://azure.microsoft.com/services/functions/), [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/), or even to your own custom HTTP listener. Critically, you only pay for what you use.
-Azure App Configuration events are sent to the Azure Event Grid, which provides reliable delivery services to your applications through rich retry policies and dead-letter delivery. To learn more, see [Event Grid message delivery and retry](../event-grid/delivery-and-retry.md).
+Azure App Configuration events are sent to the Azure Event Grid, which provides reliable delivery services to your applications through rich retry policies and dead-letter delivery. For more information, see [Event Grid message delivery and retry](../event-grid/delivery-and-retry.md).
Common App Configuration event scenarios include refreshing application configuration, triggering deployments, or any configuration-oriented workflow. When changes are infrequent, but your scenario requires immediate responsiveness, event-based architecture can be especially efficient.
-Take a look at [Use Event Grid for data change notifications](./howto-app-configuration-event.md) for a quick example.
+Take a look at [Use Event Grid for data change notifications](./howto-app-configuration-event.md) for a quick example.
-![Event Grid Model](./media/event-grid-functional-model.png)
## Available Azure App Configuration events
-Event grid uses [event subscriptions](../event-grid/concepts.md#event-subscriptions) to route event messages to subscribers. Azure App Configuration event subscriptions can include two types of events:
+
+Event Grid uses [event subscriptions](../event-grid/concepts.md#event-subscriptions) to route event messages to subscribers. Azure App Configuration event subscriptions can include two types of events:
> |Event Name|Description|
> |-|--|
-> |`Microsoft.AppConfiguration.KeyValueModified`|Fired when a key-value is created or replaced|
-> |`Microsoft.AppConfiguration.KeyValueDeleted`|Fired when a key-value is deleted|
+> |`Microsoft.AppConfiguration.KeyValueModified`|Fired when a key-value is created or replaced.|
+> |`Microsoft.AppConfiguration.KeyValueDeleted`|Fired when a key-value is deleted.|
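+
+As a hedged example, the following sketch subscribes a webhook endpoint to these events; the store name, subscription name, and endpoint URL are placeholders:
+
+```azurecli
+# Route App Configuration events to a webhook (assumed names).
+storeId=$(az appconfig show -g MyResourceGroupName -n MyAppConfigStore --query id -o tsv)
+az eventgrid event-subscription create --name MyAppConfigEvents --source-resource-id "$storeId" --endpoint "https://contoso.example.com/api/updates"
+```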
## Event schema
-Azure App Configuration events contain all the information you need to respond to changes in your data. You can identify an App Configuration event because the eventType property starts with "Microsoft.AppConfiguration". Additional information about the usage of Event Grid event properties is documented in [Event Grid event schema](../event-grid/event-schema.md).
+
+Azure App Configuration events contain all the information you need to respond to changes in your data. You can identify an App Configuration event because the `eventType` property starts with `Microsoft.AppConfiguration`. Additional information about the usage of Event Grid event properties is documented in the [Event Grid event schema](../event-grid/event-schema.md).
> |Property|Type|Description|
> |-|-|--|
-> |topic|string|Full Azure Resource Manager id of the App Configuration that emits the event.|
-> |subject|string|The URI of the key-value that is the subject of the event.|
-> |eventTime|string|The date/time that the event was generated, in ISO 8601 format.|
-> |eventType|string|"Microsoft.AppConfiguration.KeyValueModified" or "Microsoft.AppConfiguration.KeyValueDeleted".|
+> |topic|string|Full Azure Resource Manager ID of the App Configuration that emits the event.|
+> |subject|string|The URI of the key-value that's the subject of the event.|
+> |eventTime|string|The date/time that the event was generated in ISO 8601 format.|
+> |eventType|string|`Microsoft.AppConfiguration.KeyValueModified` or `Microsoft.AppConfiguration.KeyValueDeleted`.|
> |Id|string|A unique identifier of this event.|
> |dataVersion|string|The schema version of the data object.|
> |metadataVersion|string|The schema version of top-level properties.|
-> |data|object|Collection of Azure App Configuration specific event data|
+> |data|object|Collection of Azure App Configuration specific event data.|
> |data.key|string|The key of the key-value that was modified or deleted.|
> |data.label|string|The label, if any, of the key-value that was modified or deleted.|
-> |data.etag|string|For `KeyValueModified` the etag of the new key-value. For `KeyValueDeleted` the etag of the key-value that was deleted.|
+> |data.etag|string|For `KeyValueModified`, the etag of the new key-value. For `KeyValueDeleted`, the etag of the key-value that was deleted.|
+
+Here's an example of a `KeyValueModified` event:
-Here is an example of a KeyValueModified event:
```json
[{
  "id": "84e17ea4-66db-4b54-8050-df8f7763f87b",
Here is an example of a KeyValueModified event:
For more information, see [Azure App Configuration events schema](../event-grid/event-schema-app-configuration.md).

## Practices for consuming events
+
Applications that handle App Configuration events should follow these recommended practices:

> [!div class="checklist"]
-> * Multiple subscriptions can be configured to route events to the same event handler, so do not assume events are from a particular source. Instead, check the topic of the message to ensure the App Configuration instance sending the event.
-> * Check the eventType and do not assume that all events you receive will be the types you expect.
-> * Use the etag fields to understand if your information about objects is still up-to-date.
+> * Multiple subscriptions can be configured to route events to the same event handler, so don't assume events are from a particular source. Instead, check the topic of the message to ensure that the App Configuration instance is sending the event.
+> * Check the `eventType`, and don't assume that all events you receive will be the types you expect.
+> * Use the `etag` fields to understand if your information about objects is still up-to-date.
> * Use the sequencer fields to understand the order of events on any particular object.
> * Use the subject field to access the key-value that was modified.
-
## Next steps
-Learn more about Event Grid and give Azure App Configuration events a try:
+To learn more about Event Grid and to give Azure App Configuration events a try, see:
+
+> [!div class="nextstepaction"]
+> [About Event Grid](../event-grid/overview.md)
-- [About Event Grid](../event-grid/overview.md)
-- [How to use Event Grid for data change notifications](./howto-app-configuration-event.md)
+> [!div class="nextstepaction"]
+> [How to use Event Grid for data change notifications](./howto-app-configuration-event.md)
azure-app-configuration Concept Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-customer-managed-keys.md
Title: Use customer-managed keys to encrypt your configuration data
description: Encrypt your configuration data using customer-managed keys Previously updated : 07/28/2020 Last updated : 08/30/2022+ # Use customer-managed keys to encrypt your App Configuration data
-Azure App Configuration [encrypts sensitive information at rest](../security/fundamentals/encryption-atrest.md). The use of customer-managed keys provides enhanced data protection by allowing you to manage your encryption keys. When managed key encryption is used, all sensitive information in App Configuration is encrypted with a user-provided Azure Key Vault key. This provides the ability to rotate the encryption key on demand. It also provides the ability to revoke Azure App Configuration's access to sensitive information by revoking the App Configuration instance's access to the key.
-## Overview
-Azure App Configuration encrypts sensitive information at rest using a 256-bit AES encryption key provided by Microsoft. Every App Configuration instance has its own encryption key managed by the service and used to encrypt sensitive information. Sensitive information includes the values found in key-value pairs. When customer-managed key capability is enabled, App Configuration uses a managed identity assigned to the App Configuration instance to authenticate with Azure Active Directory. The managed identity then calls Azure Key Vault and wraps the App Configuration instance's encryption key. The wrapped encryption key is then stored and the unwrapped encryption key is cached within App Configuration for one hour. App Configuration refreshes the unwrapped version of the App Configuration instance's encryption key hourly. This ensures availability under normal operating conditions.
+Azure App Configuration [encrypts sensitive information at rest](../security/fundamentals/encryption-atrest.md). The use of customer-managed keys provides enhanced data protection by allowing you to manage your encryption keys. When managed key encryption is used, all sensitive information in App Configuration is encrypted with a user-provided Azure Key Vault key. This provides the ability to rotate the encryption key on demand. It also provides the ability to revoke Azure App Configuration's access to sensitive information by revoking the App Configuration instance's access to the key.
->[!IMPORTANT]
-> If the identity assigned to the App Configuration instance is no longer authorized to unwrap the instance's encryption key, or if the managed key is permanently deleted, then it will no longer be possible to decrypt sensitive information stored in the App Configuration instance. Using Azure Key Vault's [soft delete](../key-vault/general/soft-delete-overview.md) function mitigates the chance of accidentally deleting your encryption key.
+## Overview
-When users enable the customer managed key capability on their Azure App Configuration instance, they control the service's ability to access their sensitive information. The managed key serves as a root encryption key. A user can revoke their App Configuration instance's access to their managed key by changing their key vault access policy. When this access is revoked, App Configuration will lose the ability to decrypt user data within one hour. At this point, the App Configuration instance will forbid all access attempts. This situation is recoverable by granting the service access to the managed key once again. Within one hour, App Configuration will be able to decrypt user data and operate under normal conditions.
+Azure App Configuration encrypts sensitive information at rest by using a 256-bit AES encryption key provided by Microsoft. Every App Configuration instance has its own encryption key managed by the service and used to encrypt sensitive information. Sensitive information includes the values found in key-value pairs. When the customer-managed key capability is enabled, App Configuration uses a managed identity assigned to the App Configuration instance to authenticate with Azure Active Directory. The managed identity then calls Azure Key Vault and wraps the App Configuration instance's encryption key. The wrapped encryption key is then stored, and the unwrapped encryption key is cached within App Configuration for one hour. Every hour, App Configuration refreshes the unwrapped version of the App Configuration instance's encryption key. This process ensures availability under normal operating conditions.
->[!NOTE]
->All Azure App Configuration data is stored for up to 24 hours in an isolated backup. This includes the unwrapped encryption key. This data is not immediately available to the service or service team. In the event of an emergency restore, Azure App Configuration will re-revoke itself from the managed key data.
+> [!IMPORTANT]
+> If the identity assigned to the App Configuration instance is no longer authorized to unwrap the instance's encryption key, or if the managed key is permanently deleted, then it will no longer be possible to decrypt sensitive information stored in the App Configuration instance. By using Azure Key Vault's [soft delete](../key-vault/general/soft-delete-overview.md) function, you mitigate the chance of accidentally deleting your encryption key.
+
+When users enable the customer managed key capability on their Azure App Configuration instance, they control the service's ability to access their sensitive information. The managed key serves as a root encryption key. Users can revoke their App Configuration instance's access to their managed key by changing their key vault access policy. When this access is revoked, App Configuration will lose the ability to decrypt user data within one hour. At this point, the App Configuration instance will forbid all access attempts. This situation is recoverable by granting the service access to the managed key once again. Within one hour, App Configuration will be able to decrypt user data and operate under normal conditions.
+
+> [!NOTE]
+> All Azure App Configuration data is stored for up to 24 hours in an isolated backup. This includes the unwrapped encryption key. This data isn't immediately available to the service or service team. In the event of an emergency restore, Azure App Configuration will revoke itself again from the managed key data.
## Requirements
+
The following components are required to successfully enable the customer-managed key capability for Azure App Configuration:
-- Standard tier Azure App Configuration instance
-- Azure Key Vault with soft-delete and purge-protection features enabled
-- An RSA or RSA-HSM key within the Key Vault
- - The key must not be expired, it must be enabled, and it must have both wrap and unwrap capabilities enabled
-Once these resources are configured, two steps remain to allow Azure App Configuration to use the Key Vault key:
-1. Assign a managed identity to the Azure App Configuration instance
-2. Grant the identity `GET`, `WRAP`, and `UNWRAP` permissions in the target Key Vault's access policy.
+- Standard tier Azure App Configuration instance.
+- Azure Key Vault with soft-delete and purge-protection features enabled.
+- An RSA or RSA-HSM key within the Key Vault.
+ - The key must not be expired, it must be enabled, and it must have both wrap and unwrap capabilities enabled.
+
+After these resources are configured, use the following steps so that Azure App Configuration can use the Key Vault key:
+
+1. Assign a managed identity to the Azure App Configuration instance.
+1. Grant the identity `GET`, `WRAP`, and `UNWRAP` permissions in the target Key Vault's access policy.
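+
+As a quick sanity check before granting Key Vault permissions, you can view the assigned identity; this sketch reuses the article's example names:
+
+```azurecli
+az appconfig identity show -g contoso-resource-group -n contoso-app-config
+```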
## Enable customer-managed key encryption for your Azure App Configuration instance
-To begin, you will need a properly configured Azure App Configuration instance. If you do not yet have an App Configuration instance available, follow one of these quickstarts to set one up:
+
+To begin, you'll need a properly configured Azure App Configuration instance. If you don't yet have an App Configuration instance available, follow one of these quickstarts to set one up:
+ - [Create an ASP.NET Core app with Azure App Configuration](quickstart-aspnet-core-app.md) - [Create a .NET Core app with Azure App Configuration](quickstart-dotnet-core-app.md) - [Create a .NET Framework app with Azure App Configuration](quickstart-dotnet-app.md) - [Create a Java Spring app with Azure App Configuration](quickstart-java-spring-app.md)
+- [Create a JavaScript app with Azure App Configuration](quickstart-javascript.md)
+- [Create a Python app with Azure App Configuration](quickstart-python.md)
->[!TIP]
-> The Azure Cloud Shell is a free interactive shell that you can use to run the command line instructions in this article. It has common Azure tools preinstalled, including the .NET Core SDK. If you are logged in to your Azure subscription, launch your [Azure Cloud Shell](https://shell.azure.com) from shell.azure.com. You can learn more about Azure Cloud Shell by [reading our documentation](../cloud-shell/overview.md)
+> [!TIP]
+> The Azure Cloud Shell is a free interactive shell that you can use to run the command line instructions in this article. It has common Azure tools preinstalled, including the .NET Core SDK. If you are logged in to your Azure subscription, launch your [Azure Cloud Shell](https://shell.azure.com) from shell.azure.com. You can learn more about Azure Cloud Shell by [reading our documentation](../cloud-shell/overview.md).
### Create and configure an Azure Key Vault
-1. Create an Azure Key Vault using the Azure CLI. Note that both `vault-name` and `resource-group-name` are user-provided and must be unique. We use `contoso-vault` and `contoso-resource-group` in these examples.
+
+1. Create an Azure Key Vault by using the Azure CLI. Both `vault-name` and `resource-group-name` are user-provided and must be unique. We use `contoso-vault` and `contoso-resource-group` in these examples.
```azurecli az keyvault create --name contoso-vault --resource-group contoso-resource-group ```
-
+ 1. Enable soft-delete and purge-protection for the Key Vault. Substitute the names of the Key Vault (`contoso-vault`) and Resource Group (`contoso-resource-group`) created in step 1. ```azurecli az keyvault update --name contoso-vault --resource-group contoso-resource-group --enable-purge-protection --enable-soft-delete ```
-
+ 1. Create a Key Vault key. Provide a unique `key-name` for this key, and substitute the names of the Key Vault (`contoso-vault`) created in step 1. Specify whether you prefer `RSA` or `RSA-HSM` encryption. ```azurecli az keyvault key create --name key-name --kty {RSA or RSA-HSM} --vault-name contoso-vault ```
-
- The output from this command shows the key ID ("kid") for the generated key. Make a note of the key ID to use later in this exercise. The key ID has the form: `https://{my key vault}.vault.azure.net/keys/{key-name}/{Key version}`. The key ID has three important components:
+
+ The output from this command shows the key ID ("kid") for the generated key. Make a note of the key ID to use later in this exercise. The key ID has the form: `https://{my key vault}.vault.azure.net/keys/{key-name}/{Key version}`. The key ID has three important components:
1. Key Vault URI: `https://{my key vault}.vault.azure.net 1. Key Vault key name: {Key Name} 1. Key Vault key version: {Key version}
-1. Create a system assigned managed identity using the Azure CLI, substituting the name of your App Configuration instance and resource group used in the previous steps. The managed identity will be used to access the managed key. We use `contoso-app-config` to illustrate the name of an App Configuration instance:
-
+1. Create a system-assigned managed identity by using the Azure CLI, substituting the name of your App Configuration instance and resource group used in the previous steps. The managed identity will be used to access the managed key. We use `contoso-app-config` to illustrate the name of an App Configuration instance:
+ ```azurecli az appconfig identity assign --name contoso-app-config --resource-group contoso-resource-group --identities [system] ```
-
- The output of this command includes the principal ID ("principalId") and tenant ID ("tenandId") of the system assigned identity. These IDs will be used to grant the identity access to the managed key.
+
+ The output of this command includes the principal ID (`"principalId"`) and tenant ID (`"tenandId"`) of the system-assigned identity. These IDs will be used to grant the identity access to the managed key.
```json {
To begin, you will need a properly configured Azure App Configuration instance.
} ```
-1. The managed identity of the Azure App Configuration instance needs access to the key to perform key validation, encryption, and decryption. The specific set of actions to which it needs access includes: `GET`, `WRAP`, and `UNWRAP` for keys. Granting the access requires the principal ID of the App Configuration instance's managed identity. This value was obtained in the previous step. It is shown below as `contoso-principalId`. Grant permission to the managed key using the command line:
+1. The managed identity of the Azure App Configuration instance needs access to the key to perform key validation, encryption, and decryption. The specific set of actions to which it needs access includes: `GET`, `WRAP`, and `UNWRAP` for keys. Granting access requires the principal ID of the App Configuration instance's managed identity. This value was obtained in the previous step. It's shown below as `contoso-principalId`. Grant permission to the managed key by using the command line:
```azurecli az keyvault set-policy -n contoso-vault --object-id contoso-principalId --key-permissions get wrapKey unwrapKey ```
-1. Once the Azure App Configuration instance can access the managed key, we can enable the customer-managed key capability in the service using the Azure CLI. Recall the following properties recorded during the key creation steps: `key name` `key vault URI`.
+1. After the Azure App Configuration instance can access the managed key, we can enable the customer-managed key capability in the service by using the Azure CLI. Recall the following properties recorded during the key creation steps: `key name` `key vault URI`.
```azurecli az appconfig update -g contoso-resource-group -n contoso-app-config --encryption-key-name key-name --encryption-key-version key-version --encryption-key-vault key-vault-Uri
To begin, you will need a properly configured Azure App Configuration instance.
Your Azure App Configuration instance is now configured to use a customer-managed key stored in Azure Key Vault. ## Next Steps
-In this article, you configured your Azure App Configuration instance to use a customer-managed key for encryption. Learn how to [integrate your service with Azure Managed Identities](howto-integrate-azure-managed-service-identity.md).
+
+In this article, you configured your Azure App Configuration instance to use a customer-managed key for encryption. To learn more about how to integrate your app service with Azure managed identities, continue to the next step.
+
+> [!div class="nextstepaction"]
+> [Integrate your service with Azure Managed Identities](./howto-integrate-azure-managed-service-identity.md)
azure-app-configuration Howto Leverage Json Content Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-leverage-json-content-type.md
ms.assetid:
ms.devlang: azurecli Previously updated : 08/03/2020 Last updated : 08/24/2022+ #Customer intent: I want to store JSON key-values in App Configuration store without losing the data type of each setting.
-# Leverage content-type to store JSON key-values in App Configuration
-
-Data is stored in App Configuration as key-values, where values are treated as the string type by default. However, you can specify a custom type by leveraging the content-type property associated with each key-value, so that you can preserve the original type of your data or have your application behave differently based on content-type.
+# Use content type to store JSON key-values in App Configuration
+Data is stored in App Configuration as key-values, where values are treated as the string type by default. However, you can specify a custom type by using the content type property associated with each key-value. This process preserves the original type of your data or makes your application behave differently based on content type.
## Overview
-In App Configuration, you can use the JSON media type as the content-type of your key-values to avail benefits like:
+In App Configuration, you can use the JSON media type as the content type of your key-values to avail the following benefits:
+ - **Simpler data management**: Managing key-values, like arrays, will become a lot easier in the Azure portal. - **Enhanced data export**: Primitive types, arrays, and JSON objects will be preserved during data export.-- **Native support with App Configuration provider**: Key-values with JSON content-type will work fine when consumed by App Configuration provider libraries in your applications.
+- **Native support with App Configuration provider**: Key-values with JSON content type will work fine when consumed by App Configuration provider libraries in your applications.
-#### Valid JSON content-type
+### Valid JSON content type
-Media types, as defined [here](https://www.iana.org/assignments/media-types/media-types.xhtml), can be assigned to the content-type associated with each key-value.
-A media type consists of a type and a subtype. If the type is `application` and the subtype (or suffix) is `json`, the media type will be considered a valid JSON content-type.
-Some examples of valid JSON content-types are:
+Media types, as defined by [IANA](https://www.iana.org/assignments/media-types/media-types.xhtml), can be assigned to the content type associated with each key-value.
+A media type consists of a type and a subtype. If the type is `application` and the subtype (or suffix) is `json`, the media type will be considered a valid JSON content type.
+Some examples of valid JSON content types are:
-- application/json
-- application/activity+json
-- application/vnd.foobar+json;charset=utf-8
+- `application/json`
+- `application/activity+json`
+- `application/vnd.foobar+json;charset=utf-8`
-#### Valid JSON values
+### Valid JSON values
-When a key-value has JSON content-type, its value must be in valid JSON format for clients to process it correctly. Otherwise, clients may fail or fall back and treat it as string format.
+When a key-value has a JSON content type, its value must be in valid JSON format for clients to process it correctly. Otherwise, clients might fail or fall back and treat it as string format.
Some examples of valid JSON values are:

-- "John Doe"
-- 723
-- false
-- null
-- "2020-01-01T12:34:56.789Z"
-- [1, 2, 3, 4]
-- {"ObjectSetting":{"Targeting":{"Default":true,"Level":"Information"}}}
+- `"John Doe"`
+- `723`
+- `false`
+- `null`
+- `"2020-01-01T12:34:56.789Z"`
+- `[1, 2, 3, 4]`
+- `{"ObjectSetting":{"Targeting":{"Default":true,"Level":"Information"}}}`
> [!NOTE]
-> For the rest of this article, any key-value in App Configuration that has a valid JSON content-type and a valid JSON value will be referred to as **JSON key-value**.
+> For the rest of this article, any key-value in App Configuration that has a valid JSON content type and a valid JSON value will be referred to as **JSON key-value**.
In this tutorial, you'll learn how to:

> [!div class="checklist"]
In this tutorial, you'll learn how to:
> * Export JSON key-values to a JSON file.
> * Consume JSON key-values in your applications.

[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]

[!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)]
In this tutorial, you'll learn how to:
[!INCLUDE [azure-app-configuration-create](../../includes/azure-app-configuration-create.md)]

## Create JSON key-values in App Configuration
-JSON key-values can be created using Azure portal, Azure CLI or by importing from a JSON file. In this section, you will find instructions on creating the same JSON key-values using all three methods.
+JSON key-values can be created using the Azure portal, the Azure CLI, or by importing from a JSON file. In this section, you'll find instructions on creating the same JSON key-values using all three methods.
### Create JSON key-values using Azure portal
az appconfig kv set -n $appConfigName --content-type application/json --key Sett
```

> [!IMPORTANT]
-> If you are using Azure CLI or Azure Cloud Shell to create JSON key-values, the value provided must be an escaped JSON string.
+> If you're using Azure CLI or Azure Cloud Shell to create JSON key-values, the value provided must be an escaped JSON string.
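For instance, a minimal sketch of that escaping from a bash shell; the key name `Settings:IPRanges` is illustrative, not part of the original walkthrough:

```azurecli
# Backslash-escaped quotes keep the value a single, valid JSON array when it reaches the service.
az appconfig kv set -n $appConfigName --content-type application/json --key Settings:IPRanges --value "[\"10.0.0.0/24\", \"10.0.1.0/24\"]" --yes
```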
### Import JSON key-values from a file
-Create a JSON file called `Import.json` with the following content and import as key-values into App Configuration:
+Create a JSON file called `Import.json` with the following content and import it as key-values into App Configuration:
```json
{
Create a JSON file called `Import.json` with the following content and import as
az appconfig kv import -s file --format json --path "~/Import.json" --content-type "application/json" --separator : --depth 2
```
-> [!Note]
-> The `--depth` argument is used for flattening hierarchical data from a file into key-value pairs. In this tutorial, depth is specified for demonstrating that you can also store JSON objects as values in App Configuration. If depth is not specified, JSON objects will be flattened to the deepest level by default.
+> [!NOTE]
+> The `--depth` argument is used for flattening hierarchical data from a file into key-value pairs. In this tutorial, depth is specified to demonstrate that you can also store JSON objects as values in App Configuration. If depth isn't specified, JSON objects will be flattened to the deepest level by default.
The JSON key-values you created should look like this in App Configuration:
-![Config store containing JSON key-values](./media/create-json-settings.png)
## Export JSON key-values to a file
-One of the major benefits of using JSON key-values is the ability to preserve the original data type of your values while exporting. If a key-value in App Configuration doesn't have JSON content-type, its value will be treated as string.
+One of the major benefits of using JSON key-values is the ability to preserve the original data type of your values while exporting. If a key-value in App Configuration doesn't have JSON content type, its value will be treated as a string.
-Consider these key-values without JSON content-type:
+Consider these key-values without JSON content type:
| Key | Value | Content Type |
|---|---|---|
When you export these key-values to a JSON file, the values will be exported as
}
```
-However, when you export JSON key-values to a file, all values will preserve their original data type. To verify this, export key-values from your App Configuration to a JSON file. You'll see that the exported file has the same contents as the `Import.json` file you previously imported.
+However, when you export JSON key-values to a file, all values will preserve their original data type. To verify this process, export key-values from your App Configuration to a JSON file. You'll see that the exported file has the same contents as the `Import.json` file you previously imported.
```azurecli-interactive
az appconfig kv export -d file --format json --path "~/Export.json" --separator :
```

> [!NOTE]
-> If your App Configuration store has some key-values without JSON content-type, they will also be exported to the same file in string format.
-
+> If your App Configuration store has some key-values without JSON content type, they will also be exported to the same file in string format.
## Consuming JSON key-values in applications
-The easiest way to consume JSON key-values in your application is through App Configuration provider libraries. With the provider libraries, you don't need to implement special handling of JSON key-values in your application. They will be parsed and converted to match the native configuration of your application.
+The easiest way to consume JSON key-values in your application is through App Configuration provider libraries. With the provider libraries, you don't need to implement special handling of JSON key-values in your application. They'll be parsed and converted to match the native configuration of your application.
For example, if you have the following key-value in App Configuration:
Your .NET application configuration will have the following key-values:
| Settings:FontSize | 24 |
| Settings:UseDefaultRouting | false |
-You may access the new keys directly or you may choose to [bind configuration values to instances of .NET objects](/aspnet/core/fundamentals/configuration/#bind-hierarchical-configuration-data-using-the-options-pattern).
-
+You might access the new keys directly or you might choose to [bind configuration values to instances of .NET objects](/aspnet/core/fundamentals/configuration/#bind-hierarchical-configuration-data-using-the-options-pattern).
-> [!Important]
-> Native support for JSON key-values is available in .NET configuration provider version 4.0.0 (or later). See [*Next steps*](#next-steps) section for more details.
-
-If you are using the SDK or REST API to read key-values from App Configuration, based on the content-type, your application is responsible for parsing the value of a JSON key-value.
+> [!IMPORTANT]
+> Native support for JSON key-values is available in .NET configuration provider version 4.0.0 (or later). For more information, see the [Next steps](#next-steps) section.
+If you're using the SDK or REST API to read key-values from App Configuration, your application is responsible for parsing the value of a JSON key-value based on its content type.
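To see the content type a client would base that decision on, one hedged way is to read a single key-value with the CLI; the key name here is illustrative:

```azurecli
# The output includes a contentType field alongside the value, which tells a client whether to JSON-parse it.
az appconfig kv show -n $appConfigName --key Settings:IPRanges
```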
## Clean up resources
If you are using the SDK or REST API to read key-values from App Configuration,
Now that you know how to work with JSON key-values in your App Configuration store, create an application for consuming these key-values:
-* [ASP.NET Core quickstart](./quickstart-aspnet-core-app.md)
- * Prerequisite: [Microsoft.Azure.AppConfiguration.AspNetCore](https://www.nuget.org/packages/Microsoft.Azure.AppConfiguration.AspNetCore) package v4.0.0 or later.
+> [!div class="nextstepaction"]
+> [ASP.NET Core quickstart](./quickstart-aspnet-core-app.md)
-* [.NET Core quickstart](./quickstart-dotnet-core-app.md)
- * Prerequisite: [Microsoft.Extensions.Configuration.AzureAppConfiguration](https://www.nuget.org/packages/Microsoft.Extensions.Configuration.AzureAppConfiguration) package v4.0.0 or later.
+> [!div class="nextstepaction"]
+> [.NET Core quickstart](./quickstart-dotnet-core-app.md)
azure-app-configuration Integrate Ci Cd Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/integrate-ci-cd-pipeline.md
- Previously updated : 04/19/2020-+ Last updated : 08/30/2022+ # Customer intent: I want to use Azure App Configuration data in my CI/CD pipeline.
If you have an Azure DevOps Pipeline, you can fetch key-values from App Configur
## Deploy App Configuration data with your application
-Your application may fail to run if it depends on Azure App Configuration and cannot reach it. Enhance the resiliency of your application by packaging configuration data into a file that's deployed with the application and loaded locally during application startup. This approach guarantees that your application has default setting values on startup. These values are overwritten by any newer changes in an App Configuration store when it's available.
+Your application might fail to run if it depends on Azure App Configuration and can't reach it. Enhance the resiliency of your application by packaging configuration data into a file that's deployed with the application and loaded locally during application startup. This approach guarantees that your application has default setting values on startup. These values are overwritten by any newer changes in an App Configuration store when it's available.
Using the [Export](./howto-import-export-data.md#export-data) function of Azure App Configuration, you can automate the process of retrieving current configuration data as a single file. You can then embed this file in a build or deployment step in your continuous integration and continuous deployment (CI/CD) pipeline.
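As a rough sketch of that export step on its own (the output path and the connection-string variable are placeholders), the command such a pipeline step runs looks like this:

```azurecli
# Export the store's current key-values to a JSON file that is packaged with the build output.
az appconfig kv export -d file --format json --path ./azureappconfig.json --separator : --connection-string "$ConnectionString" --yes
```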
You can use any code editor to do the steps in this tutorial. [Visual Studio Cod
If you build locally, download and install the [Azure CLI](/cli/azure/install-azure-cli) if you haven't already.
-To do a cloud build, with Azure DevOps for example, make sure the [Azure CLI](/cli/azure/install-azure-cli) is installed in your build system.
### Export an App Configuration store

1. Open your *.csproj* file, and add the following script:
To do a cloud build, with Azure DevOps for example, make sure the [Azure CLI](/c
<Exec WorkingDirectory="$(MSBuildProjectDirectory)" Condition="$(ConnectionString) != ''" Command="az appconfig kv export -d file --path $(OutDir)\azureappconfig.json --format json --separator : --connection-string $(ConnectionString)" />
</Target>
```
-1. Open *Program.cs*, and update the `CreateWebHostBuilder` method to use the exported JSON file by calling the `config.AddJsonFile()` method. Add the `System.Reflection` namespace as well.
+
+1. Open *Program.cs*, and update the `CreateWebHostBuilder` method to use the exported JSON file by calling the `config.AddJsonFile()` method. Add the `System.Reflection` namespace as well.
```csharp
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
To do a cloud build, with Azure DevOps for example, make sure the [Azure CLI](/c
### Build and run the app locally
-1. Set an environment variable named **ConnectionString**, and set it to the access key to your App Configuration store.
- If you use the Windows command prompt, run the following command and restart the command prompt to allow the change to take effect:
+1. Set an environment variable named *ConnectionString*, and set it to the connection string of your App Configuration store.
+ #### [Windows command prompt](#tab/windowscommandprompt)
+
+ To build and run the app locally using the Windows command prompt, run the following command and restart the command prompt to allow the change to take effect:
+
```console
- setx ConnectionString "connection-string-of-your-app-configuration-store"
+ setx ConnectionString "connection-string-of-your-app-configuration-store"
```-
+
+ ### [PowerShell](#tab/powershell)
+
If you use Windows PowerShell, run the following command:-
+
```powershell
- $Env:ConnectionString = "connection-string-of-your-app-configuration-store"
+ $Env:ConnectionString = "connection-string-of-your-app-configuration-store"
```-
- If you use macOS or Linux, run the following command:
-
+
+ ### [macOS](#tab/unix)
+
+ If you use macOS, run the following command:
+
+ ```console
+ export ConnectionString='connection-string-of-your-app-configuration-store'
+ ```
+
+ ### [Linux](#tab/linux)
+
+ If you use Linux, run the following command:
+
```console
- export ConnectionString='connection-string-of-your-app-configuration-store'
+ export ConnectionString='connection-string-of-your-app-configuration-store'
```
+
+
-2. To build the app by using the .NET Core CLI, run the following command in the command shell:
+1. To build the app by using the .NET Core CLI, run the following command in the command shell:
```console
dotnet build
```
-3. After the build successfully completes, run the following command to run the web app locally:
+1. After the build completes successfully, run the following command to run the web app locally:
```console
dotnet run
```
-4. Open a browser window and go to `http://localhost:5000`, which is the default URL for the web app hosted locally.
+1. Open a browser window and go to `http://localhost:5000`, which is the default URL for the web app hosted locally.
- ![Quickstart app launch local](./media/quickstarts/aspnet-core-app-launch-local.png)
+ :::image type="content" source="./media/quickstarts/aspnet-core-app-launch-local.png" alt-text="Screenshot that shows Quickstart app launch local page.":::
## Next steps
azure-arc Managed Instance Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-disaster-recovery.md
The following image shows a properly configured distributed availability group:
2. Provision the managed instance in the secondary site and configure it as a disaster recovery instance. At this point, the system databases are not part of the contained availability group.

```azurecli
- az sql mi-arc create --name <secondaryinstance> --tier bc --replicas 3 ΓÇôlicense-type DisasterRecovery --k8s-namespace <namespace> --use-k8s
+ az sql mi-arc create --name <secondaryinstance> --tier bc --replicas 3 --license-type DisasterRecovery --k8s-namespace <namespace> --use-k8s
```

3. Copy the mirroring certificates from each site to a location that's accessible to both the geo-primary and geo-secondary instances.
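Before moving on to the certificates, a hedged way to confirm the secondary instance from step 2 is up, reusing the same placeholders:

```azurecli
# Show the status of the disaster-recovery instance created in the previous step.
az sql mi-arc show --name <secondaryinstance> --k8s-namespace <namespace> --use-k8s
```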
azure-arc Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/cluster-connect.md
Title: "Use the cluster connect to securely connect to Azure Arc-enabled Kubernetes clusters" Previously updated : 07/22/2022 Last updated : 08/30/2022 description: "Use cluster connect to securely connect to Azure Arc-enabled Kubernetes clusters"
A conceptual overview of this feature is available in [Cluster connect - Azure A
|`*.servicebus.windows.net` | 443 |
|`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com` | 443 |
+ > [!NOTE]
+ > To translate the `*.servicebus.windows.net` wildcard into specific endpoints, use the command `GET https://guestnotificationservice.azure.com/urls/allowlist?api-version=2020-01-01&location=<location>`. Within this command, the region must be specified for the `<location>` placeholder.
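For example, a sketch of issuing that GET with `az rest`; the region value is an example, and `--skip-authorization-header` avoids attaching an ARM token to a non-ARM endpoint:

```azurecli
# Resolve the *.servicebus.windows.net wildcard into the concrete endpoints for one region.
az rest --method get --url "https://guestnotificationservice.azure.com/urls/allowlist?api-version=2020-01-01&location=eastus" --skip-authorization-header
```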
- Replace the placeholders and run the following command to set the environment variables used in this document:

```azurecli
A conceptual overview of this feature is available in [Cluster connect - Azure A
|`*.servicebus.windows.net` | 443 |
|`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com` | 443 |
+ > [!NOTE]
+ > To translate the `*.servicebus.windows.net` wildcard into specific endpoints, use the command `GET https://guestnotificationservice.azure.com/urls/allowlist?api-version=2020-01-01&location=<location>`. Within this command, the region must be specified for the `<location>` placeholder.
- Replace the placeholders and run the following command to set the environment variables used in this document:

```azurepowershell
azure-arc Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/private-link.md
Title: Private connectivity for Arc enabled Kubernetes clusters using private link (preview) Previously updated : 04/08/2021 Last updated : 08/28/2021 description: With Azure Arc, you can use a Private Link Scope model to allow multiple Kubernetes clusters to use a single private endpoint.
The rest of this document assumes you have already set up your ExpressRoute circ
## Network configuration
-Azure Arc-enabled Kubernetes integrates with several Azure services to bring cloud management and governance to your hybrid Kubernetes clusters. Most of these services already offer private endpoints, but you need to configure your firewall and routing rules to allow access to Azure Active Directory and Azure Resource Manager over the internet until these services offer private endpoints. You also need to allow access to Microsoft Container Registry (and Azure Front Door.First Party as a precursor for Microsoft Container Registry) to pull images & Helm charts to enable services like Azure Monitor, as well as for initial setup of Azure Arc agents on the Kubernetes clusters.
+Azure Arc-enabled Kubernetes integrates with several Azure services to bring cloud management and governance to your hybrid Kubernetes clusters. Most of these services already offer private endpoints, but you need to configure your firewall and routing rules to allow access to Azure Active Directory and Azure Resource Manager over the internet until these services offer private endpoints. You also need to allow access to Microsoft Container Registry (and AzureFrontDoor.FirstParty as a precursor for Microsoft Container Registry) to pull images & Helm charts to enable services like Azure Monitor, as well as for initial setup of Azure Arc agents on the Kubernetes clusters.
There are two ways you can achieve this:
-* If your network is configured to route all internet-bound traffic through the Azure VPN or ExpressRoute circuit, you can configure the network security group (NSG) associated with your subnet in Azure to allow outbound TCP 443 (HTTPS) access to Azure AD, Azure Resource Manager, Azure Frontdoor and Microsoft Container Registry using [service tags] (/azure/virtual-network/service-tags-overview). The NSG rules should look like the following:
+* If your network is configured to route all internet-bound traffic through the Azure VPN or ExpressRoute circuit, you can configure the network security group (NSG) associated with your subnet in Azure to allow outbound TCP 443 (HTTPS) access to Azure AD, Azure Resource Manager, Azure Front Door and Microsoft Container Registry using [service tags](/azure/virtual-network/service-tags-overview). The NSG rules should look like the following:
| Setting | Azure AD rule | Azure Resource Manager rule | AzureFrontDoorFirstParty rule | Microsoft Container Registry rule |
|-|-|-|-|-|
| Source | Virtual Network | Virtual Network | Virtual Network | Virtual Network |
| Source Port ranges | * | * | * | * |
| Destination | Service Tag | Service Tag | Service Tag | Service Tag |
- | Destination service tag | AzureActiveDirectory | AzureResourceManager | FrontDoor.FirstParty | MicrosoftContainerRegistry
+ | Destination service tag | AzureActiveDirectory | AzureResourceManager | AzureFrontDoor.FirstParty | MicrosoftContainerRegistry
| Destination port ranges | 443 | 443 | 443 | 443 |
| Protocol | TCP | TCP | TCP | TCP |
| Action | Allow | Allow | Allow (Both inbound and outbound) | Allow |
| Priority | 150 (must be lower than any rules that block internet access) | 151 (must be lower than any rules that block internet access) | 152 (must be lower than any rules that block internet access) | 153 (must be lower than any rules that block internet access) |
| Name | AllowAADOutboundAccess | AllowAzOutboundAccess | AllowAzureFrontDoorFirstPartyAccess | AllowMCROutboundAccess |
-* Configure the firewall on your local network to allow outbound TCP 443 (HTTPS) access to Azure AD, Azure Resource Manager, and Microsoft Container Registry, and inbound & outbound access to Azure FrontDoor.FirstParty using the downloadable service tag files. The JSON file contains all the public IP address ranges used by Azure AD, Azure Resource Manager, Azure FrontDoor.FirstParty, and Microsoft Container Registry and is updated monthly to reflect any changes. Azure Active Directory's service tag is AzureActiveDirectory, Azure Resource Manager's service tag is AzureResourceManager, Microsoft Container Registry's service tag is MicrosoftContainerRegistry, and Azure Front Door's service tag is FrontDoor.FirstParty. Consult with your network administrator and network firewall vendor to learn how to configure your firewall rules.
+* Configure the firewall on your local network to allow outbound TCP 443 (HTTPS) access to Azure AD, Azure Resource Manager, and Microsoft Container Registry, and inbound & outbound access to AzureFrontDoor.FirstParty using the downloadable service tag files. The JSON file contains all the public IP address ranges used by Azure AD, Azure Resource Manager, AzureFrontDoor.FirstParty, and Microsoft Container Registry and is updated monthly to reflect any changes. Azure Active Directory's service tag is AzureActiveDirectory, Azure Resource Manager's service tag is AzureResourceManager, Microsoft Container Registry's service tag is MicrosoftContainerRegistry, and Azure Front Door's service tag is AzureFrontDoor.FirstParty. Consult with your network administrator and network firewall vendor to learn how to configure your firewall rules.
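If you'd rather script this than work from the downloadable files, a hedged sketch with the CLI (the region is an example) pulls the current ranges for the four tags named above:

```azurecli
# List the service tags for a region and filter to the tags referenced in the rules above.
az network list-service-tags --location eastus --query "values[?name=='AzureActiveDirectory' || name=='AzureResourceManager' || name=='MicrosoftContainerRegistry' || name=='AzureFrontDoor.FirstParty']"
```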
## Create an Azure Arc Private Link Scope
The Private Endpoint on your virtual network allows it to reach Azure Arc-enable
1. On the **Configuration** page, perform the following:
   1. Choose the virtual network and subnet from which you want to connect to Azure Arc-enabled Kubernetes clusters.
   1. For **Integrate with private DNS zone**, select **Yes**. A new Private DNS Zone will be created. The actual DNS zones may be different from what is shown in the screenshot below.
-
+ :::image type="content" source="media/private-link/create-private-endpoint-2.png" alt-text="Screenshot of the Configuration step to create a private endpoint in the Azure portal.":::

> [!NOTE]
The Private Endpoint on your virtual network allows it to reach Azure Arc-enable
1. Select **Review + create**.
1. Let validation pass.
1. Select **Create**.
-
+ :::image type="content" source="media/private-link/create-private-endpoint-2.png" alt-text="Screenshot of the Configuration step to create a private endpoint in the Azure portal.":::

> [!NOTE]
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
Title: "Quickstart: Connect an existing Kubernetes cluster to Azure Arc" description: In this quickstart, you learn how to connect an Azure Arc-enabled Kubernetes cluster. Previously updated : 08/25/2022 Last updated : 08/30/2022 ms.devlang: azurecli
For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
| `https://mcr.microsoft.com`, `https://*.data.mcr.microsoft.com` | Required to pull container images for Azure Arc agents. |
| `https://gbl.his.arc.azure.com` (for Azure Cloud), `https://gbl.his.arc.azure.us` (for Azure US Government) | Required to get the regional endpoint for pulling system-assigned Managed Identity certificates. |
| `https://*.his.arc.azure.com` (for Azure Cloud), `https://usgv.his.arc.azure.us` (for Azure US Government) | Required to pull system-assigned Managed Identity certificates. |
-|`*.servicebus.windows.net`, `guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com`, `sts.windows.net`, `https://k8sconnectcsp.azureedge.net` | For [Cluster Connect](cluster-connect.md) and for [Custom Location](custom-locations.md) based scenarios. |
|`https://k8connecthelm.azureedge.net` | `az connectedk8s connect` uses Helm 3 to deploy Azure Arc agents on the Kubernetes cluster. This endpoint is needed for Helm client download to facilitate deployment of the agent helm chart. |
+|`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com`, `sts.windows.net`, `https://k8sconnectcsp.azureedge.net` | For [Cluster Connect](cluster-connect.md) and for [Custom Location](custom-locations.md) based scenarios. |
+|`*.servicebus.windows.net` | For [Cluster Connect](cluster-connect.md) and for [Custom Location](custom-locations.md) based scenarios. |
> [!NOTE]
-> To translate the `*.servicebus.windows.net` wildcard into specific endpoints, use the command `\GET /urls/allowlist?api-version=2020-01-01&location=<location>`. Within this command, the region must be specified for the `<location>` placeholder.
+> To translate the `*.servicebus.windows.net` wildcard into specific endpoints, use the command `GET https://guestnotificationservice.azure.com/urls/allowlist?api-version=2020-01-01&location=<location>`. Within this command, the region must be specified for the `<location>` placeholder.
## Create a resource group
azure-arc Onboard Windows Admin Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-windows-admin-center.md
You can enable Azure Arc-enabled servers for one or more Windows machines in you
* Azure Arc-enabled servers - Review the [prerequisites](prerequisites.md) and verify that your subscription, your Azure account, and resources meet the requirements.
-* Windows Admin Center - Review the requirements to [prepare your environment](/windows-server/manage/windows-admin-center/deploy/prepare-environment) to deploy and [configure Azure integration ](/windows-server/manage/windows-admin-center/azure/azure-integration).
+* Windows Admin Center - Review the requirements to [prepare your environment](/windows-server/manage/windows-admin-center/deploy/prepare-environment) to deploy and [configure Azure integration](/windows-server/manage/windows-admin-center/azure/azure-integration).
* An Azure subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
azure-fluid-relay Container Deletion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/container-deletion.md
-+ description: Learn how to delete individual containers using az-cli Title: Delete Fluid containers-+ Last updated 09/28/2021
azure-functions Functions Infrastructure As Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-infrastructure-as-code.md
Title: Automate function app resource deployment to Azure
-description: Learn how to build an Azure Resource Manager template that deploys your function app.
+description: Learn how to build a Bicep file or an Azure Resource Manager template that deploys your function app.
ms.assetid: d20743e3-aab6-442c-a836-9bcea09bfd32 Previously updated : 08/18/2022 Last updated : 08/30/2022 # Automate resource deployment for your function app in Azure Functions
-You can use an Azure Resource Manager template to deploy a function app. This article outlines the required resources and parameters for doing so. You might need to deploy other resources, depending on the [triggers and bindings](functions-triggers-bindings.md) in your function app.
+You can use a Bicep file or an Azure Resource Manager template to deploy a function app. This article outlines the required resources and parameters for doing so. You might need to deploy other resources, depending on the [triggers and bindings](functions-triggers-bindings.md) in your function app. For more information about creating Bicep files, see [Understand the structure and syntax of Bicep files](../azure-resource-manager/bicep/file.md). For more information about creating templates, see [Authoring Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md).
-For more information about creating templates, see [Authoring Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md).
-
-For sample templates, see:
+For sample Bicep files and ARM templates, see:
- [ARM templates for function app deployment](https://github.com/Azure-Samples/function-app-arm-templates)
- [Function app on Consumption plan]
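Whichever format you choose, deployment works the same way; a minimal sketch with the Azure CLI, where the resource group, file, and parameter names are placeholders:

```azurecli
# Deploy a Bicep file (or an ARM template JSON file) into an existing resource group.
az deployment group create --resource-group my-functions-rg --template-file main.bicep --parameters functionAppName=myfuncapp
```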
For sample templates, see:
An Azure Functions deployment typically consists of these resources:
+# [Bicep](#tab/bicep)
+
+| Resource | Requirement | Syntax and properties reference |
+||-|--|
+| A function app | Required | [Microsoft.Web/sites](/azure/templates/microsoft.web/sites?pivots=deployment-language-bicep) |
+| A [storage account](../storage/index.yml) | Required | [Microsoft.Storage/storageAccounts](/azure/templates/microsoft.storage/storageaccounts?pivots=deployment-language-bicep) |
+| An [Application Insights](../azure-monitor/app/app-insights-overview.md) component | Optional | [Microsoft.Insights/components](/azure/templates/microsoft.insights/components?pivots=deployment-language-bicep) |
+| A [hosting plan](./functions-scale.md) | Optional<sup>1</sup> | [Microsoft.Web/serverfarms](/azure/templates/microsoft.web/serverfarms?pivots=deployment-language-bicep) |
+
+# [JSON](#tab/json)
| Resource | Requirement | Syntax and properties reference |
||-|--|
-| A function app | Required | [Microsoft.Web/sites](/azure/templates/microsoft.web/sites) |
-| An [Azure Storage](../storage/index.yml) account | Required | [Microsoft.Storage/storageAccounts](/azure/templates/microsoft.storage/storageaccounts) |
-| An [Application Insights](../azure-monitor/app/app-insights-overview.md) component | Optional | [Microsoft.Insights/components](/azure/templates/microsoft.insights/components) |
-| A [hosting plan](./functions-scale.md) | Optional<sup>1</sup> | [Microsoft.Web/serverfarms](/azure/templates/microsoft.web/serverfarms) |
+| A function app | Required | [Microsoft.Web/sites](/azure/templates/microsoft.web/sites?pivots=deployment-language-arm-template) |
+| A [storage account](../storage/index.yml) | Required | [Microsoft.Storage/storageAccounts](/azure/templates/microsoft.storage/storageaccounts?pivots=deployment-language-arm-template) |
+| An [Application Insights](../azure-monitor/app/app-insights-overview.md) component | Optional | [Microsoft.Insights/components](/azure/templates/microsoft.insights/components?pivots=deployment-language-arm-template) |
+| A [hosting plan](./functions-scale.md) | Optional<sup>1</sup> | [Microsoft.Web/serverfarms](/azure/templates/microsoft.web/serverfarms?pivots=deployment-language-arm-template) |
<sup>1</sup>A hosting plan is only required when you choose to run your function app on a [Premium plan](./functions-premium-plan.md) or on an [App Service plan](../app-service/overview-hosting-plans.md).
An Azure Functions deployment typically consists of these resources:
<a name="storage"></a>

### Storage account
-An Azure storage account is required for a function app. You need a general purpose account that supports blobs, tables, queues, and files. For more information, see [Azure Functions storage account requirements](storage-considerations.md#storage-account-requirements).
+A storage account is required for a function app. You need a general purpose account that supports blobs, tables, queues, and files. For more information, see [Azure Functions storage account requirements](storage-considerations.md#storage-account-requirements).
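For orientation, a hedged CLI equivalent of what the template resources below declare, with placeholder names and region:

```azurecli
# Create a general-purpose v2 storage account, which supports blobs, tables, queues, and files.
az storage account create --name mystorageacct --resource-group my-functions-rg --location eastus --sku Standard_LRS --kind StorageV2
```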
-```json
-{
- "type": "Microsoft.Storage/storageAccounts",
- "name": "[variables('storageAccountName')]",
- "apiVersion": "2019-06-01",
- "location": "[resourceGroup().location]",
- "kind": "StorageV2",
- "sku": {
- "name": "[parameters('storageAccountType')]"
+# [Bicep](#tab/bicep)
+
+```bicep
+resource storageAccount 'Microsoft.Storage/storageAccounts@2021-09-01' = {
+ name: storageAccountName
+ location: location
+ kind: 'StorageV2'
+ sku: {
+ name: storageAccountType
  }
}
```
+# [JSON](#tab/json)
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-09-01",
+ "name": "[parameters('storageAccountName')]",
+ "location": "[parameters('location')]",
+ "kind": "StorageV2",
+ "sku": {
+ "name": "[parameters('storageAccountType')]"
+ }
+ }
+]
+```
You must also specify the `AzureWebJobsStorage` property as an app setting in the site configuration. If the function app doesn't use Application Insights for monitoring, it should also specify `AzureWebJobsDashboard` as an app setting.
-The Azure Functions runtime uses the `AzureWebJobsStorage` connection string to create internal queues. When Application Insights is not enabled, the runtime uses the `AzureWebJobsDashboard` connection string to log to Azure Table storage and power the **Monitor** tab in the portal.
+The Azure Functions runtime uses the `AzureWebJobsStorage` connection string to create internal queues. When Application Insights isn't enabled, the runtime uses the `AzureWebJobsDashboard` connection string to log to Azure Table storage and power the **Monitor** tab in the portal.
These properties are specified in the `appSettings` collection in the `siteConfig` object:
+# [Bicep](#tab/bicep)
+
+```bicep
+resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
+ ...
+ properties: {
+ ...
+ siteConfig: {
+ ...
+ appSettings: [
+ {
+ name: 'AzureWebJobsDashboard'
+ value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ }
+ {
+ name: 'AzureWebJobsStorage'
+ value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ }
+ ...
+ ]
+ }
+ }
+}
+```
+
+# [JSON](#tab/json)
+ ```json
-"appSettings": [
- {
- "name": "AzureWebJobsStorage",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-01').keys[0].value)]"
- },
+"resources": [
{
- "name": "AzureWebJobsDashboard",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-01').keys[0].value)]"
+ "type": "Microsoft.Web/sites",
+ ...
+ "properties": {
+ ...
+ "siteConfig": {
+ ...
+ "appSettings": [
+ {
+ "name": "AzureWebJobsDashboard",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}', parameters('storageAccountName'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
+ },
+ {
+ "name": "AzureWebJobsStorage",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}', parameters('storageAccountName'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
+ },
+ ...
+ ]
+ }
+ }
}
]
```

### Application Insights
-Application Insights is recommended for monitoring your function apps. The Application Insights resource is defined with the type **Microsoft.Insights/components** and the kind **web**:
+Application Insights is recommended for monitoring your function apps. The Application Insights resource is defined with the type `Microsoft.Insights/components` and the kind **web**:
+
+# [Bicep](#tab/bicep)
+
+```bicep
+resource applicationInsights 'Microsoft.Insights/components@2020-02-02' = {
+ name: applicationInsightsName
+ location: appInsightsLocation
+ kind: 'web'
+ properties: {
+ Application_Type: 'web'
+ Request_Source: 'IbizaWebAppExtensionCreate'
+ }
+}
+```
+
+# [JSON](#tab/json)
```json
-{
- "apiVersion": "2015-05-01",
- "name": "[variables('appInsightsName')]",
- "type": "Microsoft.Insights/components",
- "kind": "web",
- "location": "[resourceGroup().location]",
- "tags": {
- "[concat('hidden-link:', resourceGroup().id, '/providers/Microsoft.Web/sites/', variables('functionAppName'))]": "Resource"
- },
- "properties": {
- "Application_Type": "web",
- "ApplicationId": "[variables('appInsightsName')]"
+"resources": [
+ {
+ "type": "Microsoft.Insights/components",
+ "apiVersion": "2020-02-02",
+ "name": "[parameters('applicationInsightsName')]",
+ "location": "[parameters('appInsightsLocation')]",
+ "kind": "web",
+ "properties": {
+ "Application_Type": "web",
+ "Request_Source": "IbizaWebAppExtensionCreate"
+ }
}
-},
+]
```

In addition, the instrumentation key needs to be provided to the function app using the `APPINSIGHTS_INSTRUMENTATIONKEY` application setting. This property is specified in the `appSettings` collection in the `siteConfig` object:
+# [Bicep](#tab/bicep)
+
+```bicep
+resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
+ ...
+ properties: {
+ ...
+ siteConfig: {
+ ...
+ appSettings: [
+ {
+ name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
+ value: appInsights.properties.InstrumentationKey
+ }
+ ...
+ ]
+ }
+ }
+}
+```
+
+# [JSON](#tab/json)
+ ```json
-"appSettings": [
+"resources": [
{
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference(resourceId('microsoft.insights/components/', variables('appInsightsName')), '2015-05-01').InstrumentationKey]"
+ "type": "Microsoft.Web/sites",
+ ...
+ "properties": {
+ ...
+ "siteConfig": {
+ ...
+ "appSettings": [
+ {
+ "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
+ "value": "[reference(resourceId('Microsoft.Insights/components', parameters('applicationInsightsName')), '2020-02-02').InstrumentationKey]"
+ },
+ ...
+ ]
+ }
+ }
}
]
```

### Hosting plan
-The definition of the hosting plan varies, and can be one of the following:
+The definition of the hosting plan varies, and can be one of the following plans:
- [Consumption plan](#consumption) (default) - [Premium plan](#premium)
The definition of the hosting plan varies, and can be one of the following:
The function app resource is defined by using a resource of type **Microsoft.Web/sites** and kind **functionapp**:
-```json
-{
- "apiVersion": "2015-08-01",
- "type": "Microsoft.Web/sites",
- "name": "[variables('functionAppName')]",
- "location": "[resourceGroup().location]",
- "kind": "functionapp",
- "dependsOn": [
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
- "[resourceId('Microsoft.Insights/components', variables('appInsightsName'))]"
+# [Bicep](#tab/bicep)
+
+```bicep
+resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
+ name: functionAppName
+ location: location
+ kind: 'functionapp'
+ identity:{
+ type:'SystemAssigned'
+ }
+ properties: {
+ serverFarmId: hostingPlan.id
+ clientAffinityEnabled: false
+ siteConfig: {
+ alwaysOn: true
+ }
+ httpsOnly: true
+ }
+ dependsOn: [
+ storageAccount
] } ```
+# [JSON](#tab/json)
+
+```json
+"resources:": [
+ {
+ "type": "Microsoft.Web/sites",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('functionAppName')]",
+ "location": "[parameters('location')]",
+ "kind": "functionapp",
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "properties": {
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "clientAffinityEnabled": false,
+ "siteConfig": {
+ "alwaysOn": true
+ },
+ "httpsOnly": true
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]",
+ "[resourceId('Microsoft.Insights/components', parameters('applicationInsightsName'))]"
+ ]
+ }
+]
+```
+++ > [!IMPORTANT]
-> If you are explicitly defining a hosting plan, an additional item would be needed in the dependsOn array: `"[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]"`
+> If you're explicitly defining a hosting plan, an additional item is needed in the `dependsOn` array: `"[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]"`
A function app must include these application settings:
A function app must include these application settings:
These properties are specified in the `appSettings` collection in the `siteConfig` property:
+# [Bicep](#tab/bicep)
+
+```bicep
+resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
+ ...
+ properties: {
+ ...
+ siteConfig: {
+ ...
+ appSettings: [
+ {
+ name: 'AzureWebJobsStorage'
+        value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ }
+ {
+ name: 'FUNCTIONS_WORKER_RUNTIME'
+ value: 'node'
+ }
+ {
+ name: 'WEBSITE_NODE_DEFAULT_VERSION'
+ value: '~14'
+ }
+ {
+ name: 'FUNCTIONS_EXTENSION_VERSION'
+ value: '~4'
+ }
+ ...
+ ]
+ }
+ }
+}
+
+```
+
+# [JSON](#tab/json)
+ ```json
-"properties": {
- "siteConfig": {
- "appSettings": [
- {
- "name": "AzureWebJobsStorage",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-01').keys[0].value)]"
- },
- {
- "name": "FUNCTIONS_WORKER_RUNTIME",
- "value": "node"
- },
- {
- "name": "WEBSITE_NODE_DEFAULT_VERSION",
- "value": "~14"
- },
- {
- "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~4"
+"resources": [
+ {
+ "type": "Microsoft.Web/sites",
+ ...
+ "properties": {
+ ...
+ "siteConfig": {
+ ...
+ "appSettings": [
+ {
+ "name": "AzureWebJobsStorage",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}', parameters('storageAccountName'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
+ },
+ {
+ "name": "FUNCTIONS_WORKER_RUNTIME",
+ "value": "node"
+ },
+ {
+ "name": "WEBSITE_NODE_DEFAULT_VERSION",
+ "value": "~14"
+ },
+ {
+ "name": "FUNCTIONS_EXTENSION_VERSION",
+ "value": "~4"
+ },
+ ...
+ ]
}
- ]
+ }
}
-}
+]
```

<a name="consumption"></a>

## Deploy on Consumption plan
-The Consumption plan automatically allocates compute power when your code is running, scales out as necessary to handle load, and then scales in when code is not running. You don't have to pay for idle VMs, and you don't have to reserve capacity in advance. To learn more, see [Azure Functions scale and hosting](consumption-plan.md).
+The Consumption plan automatically allocates compute power when your code is running, scales out as necessary to handle load, and then scales in when code isn't running. You don't have to pay for idle VMs, and you don't have to reserve capacity in advance. To learn more, see [Azure Functions scale and hosting](consumption-plan.md).
-For a sample Azure Resource Manager template, see [Function app on Consumption plan].
+For a sample Bicep file/Azure Resource Manager template, see [Function app on Consumption plan].
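For contrast, a hedged one-liner that lets the CLI create the Consumption plan implicitly instead of declaring it in a template; all names here are placeholders:

```azurecli
# Create a function app on a Consumption plan without defining the plan resource yourself.
az functionapp create --name myfuncapp --resource-group my-functions-rg --storage-account mystorageacct --consumption-plan-location eastus --functions-version 4 --runtime node
```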
### Create a Consumption plan
A Consumption plan doesn't need to be defined. When not defined, a plan is autom
The Consumption plan is a special type of `serverfarm` resource. You can specify it by using the `Dynamic` value for the `computeMode` and `sku` properties, as follows:
-# [Windows](#tab/windows)
+#### Windows
-```json
-{
- "type": "Microsoft.Web/serverfarms",
- "apiVersion": "2021-02-01",
- "name": "[variables('hostingPlanName')]",
- "location": "[parameters('location')]",
- "sku": {
- "name": "Y1",
- "tier": "Dynamic",
- "size": "Y1",
- "family": "Y",
- "capacity":0
- },
- "properties": {
- "name":"[variables('hostingPlanName')]",
- "computeMode": "Dynamic"
+# [Bicep](#tab/bicep)
+
+```bicep
+resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
+ name: hostingPlanName
+ location: location
+ sku: {
+ name: 'Y1'
+ tier: 'Dynamic'
+ size: 'Y1'
+ family: 'Y'
+ capacity: 0
+ }
+ properties: {
+ computeMode: 'Dynamic'
  }
}
```
-# [Linux](#tab/linux)
+# [JSON](#tab/json)
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('hostingPlanName')]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Y1",
+ "tier": "Dynamic",
+ "size": "Y1",
+ "family": "Y",
+ "capacity": 0
+ },
+ "properties": {
+ "computeMode": "Dynamic"
+ }
+ }
+]
+```
+++
+#### Linux
To run your app on Linux, you must also set the property `"reserved": true` for the `serverfarms` resource:
-```json
-{
- "type": "Microsoft.Web/serverfarms",
- "apiVersion": "2021-02-01",
- "name": "[variables('hostingPlanName')]",
- "location": "[parameters('location')]",
- "sku": {
- "name": "Y1",
- "tier": "Dynamic",
- "size": "Y1",
- "family": "Y",
- "capacity":0
- },
- "properties": {
- "name":"[variables('hostingPlanName')]",
- "computeMode": "Dynamic",
- "reserved": true
+# [Bicep](#tab/bicep)
+
+```bicep
+resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
+ name: hostingPlanName
+ location: location
+ sku: {
+ name: 'Y1'
+ tier: 'Dynamic'
+ size: 'Y1'
+ family: 'Y'
+ capacity: 0
+ }
+ properties: {
+ computeMode: 'Dynamic'
+ reserved: true
  }
}
```
+# [JSON](#tab/json)
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('hostingPlanName')]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Y1",
+ "tier": "Dynamic",
+ "size": "Y1",
+ "family": "Y",
+ "capacity":0
+ },
+ "properties": {
+ "computeMode": "Dynamic",
+ "reserved": true
+ }
+ }
+]
+```
+ ### Create a function app
When you explicitly define your Consumption plan, you must set the `serverFarmId
The settings required by a function app running in Consumption plan differ between Windows and Linux.
-# [Windows](#tab/windows)
+#### Windows
On Windows, a Consumption plan requires two other settings in the site configuration: [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](functions-app-settings.md#website_contentazurefileconnectionstring) and [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare). These properties configure the storage account where the function app code and configuration are stored.
-For a sample Azure Resource Manager template, see [Azure Function App Hosted on Windows Consumption Plan](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-windows-consumption).
+For a sample Bicep file/Azure Resource Manager template, see [Azure Function App Hosted on Windows Consumption Plan](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-windows-consumption).
-```json
-{
- "type": "Microsoft.Web/sites",
- "apiVersion": "2021-02-01",
- "name": "[parameters('functionAppName')]",
- "location": "[parameters('location')]",
- "kind": "functionapp",
- "dependsOn": [
- "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
- "[resourceId('Microsoft.Insights/components', variables('applicationInsightsName'))]"
- ],
- "properties": {
- "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "siteConfig": {
- "appSettings": [
- {
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference(resourceId('microsoft.insights/components', variables('applicationInsightsName')), '2015-05-01').InstrumentationKey]"
- },
- {
- "name": "AzureWebJobsStorage",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';EndpointSuffix=', environment().suffixes.storage, ';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2019-06-01').keys[0].value)]"
- },
- {
- "name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';EndpointSuffix=', environment().suffixes.storage, ';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2019-06-01').keys[0].value)]"
- },
- {
- "name": "WEBSITE_CONTENTSHARE",
- "value": "[toLower(parameters('functionAppName'))]"
- },
- {
- "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~4"
- },
- {
- "name": "FUNCTIONS_WORKER_RUNTIME",
- "value": "node"
- },
- {
- "name": "WEBSITE_NODE_DEFAULT_VERSION",
- "value": "~14"
+# [Bicep](#tab/bicep)
+
+```bicep
+resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
+ name: functionAppName
+ location: location
+ kind: 'functionapp'
+ properties: {
+ serverFarmId: hostingPlan.id
+ siteConfig: {
+ appSettings: [
+ {
+ name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
+ value: applicationInsights.properties.InstrumentationKey
+ }
+ {
+ name: 'AzureWebJobsStorage'
+ value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};EndpointSuffix=${environment().suffixes.storage};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ }
+ {
+ name: 'WEBSITE_CONTENTAZUREFILECONNECTIONSTRING'
+ value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};EndpointSuffix=${environment().suffixes.storage};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ }
+ {
+ name: 'WEBSITE_CONTENTSHARE'
+ value: toLower(functionAppName)
+ }
+ {
+ name: 'FUNCTIONS_EXTENSION_VERSION'
+ value: '~4'
+ }
+ {
+ name: 'FUNCTIONS_WORKER_RUNTIME'
+ value: 'node'
+ }
+ {
+ name: 'WEBSITE_NODE_DEFAULT_VERSION'
+ value: '~14'
} ] }
For a sample Azure Resource Manager template, see [Azure Function App Hosted on
} ```
+# [JSON](#tab/json)
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Web/sites",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('functionAppName')]",
+ "location": "[parameters('location')]",
+ "kind": "functionapp",
+ "dependsOn": [
+ "[resourceId('Microsoft.Insights/components', parameters('applicationInsightsName'))]",
+ "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
+ ],
+ "properties": {
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "siteConfig": {
+ "appSettings": [
+ {
+ "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
+ "value": "[reference(resourceId('Microsoft.Insights/components', parameters('applicationInsightsName')), '2020-02-02').InstrumentationKey]"
+ },
+ {
+ "name": "AzureWebJobsStorage",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};EndpointSuffix={1};AccountKey={2}', parameters('storageAccountName'), environment().suffixes.storage, listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
+ },
+ {
+ "name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};EndpointSuffix={1};AccountKey={2}', parameters('storageAccountName'), environment().suffixes.storage, listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
+ },
+ {
+ "name": "WEBSITE_CONTENTSHARE",
+ "value": "[toLower(parameters('functionAppName'))]"
+ },
+ {
+ "name": "FUNCTIONS_EXTENSION_VERSION",
+ "value": "~4"
+ },
+ {
+ "name": "FUNCTIONS_WORKER_RUNTIME",
+ "value": "node"
+ },
+ {
+ "name": "WEBSITE_NODE_DEFAULT_VERSION",
+ "value": "~14"
+ }
+ ]
+ }
+ }
+ }
+]
+```
+++ > [!IMPORTANT]
-> Do not need to set the [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare) setting in a deployment slot. This setting is generated for you when the app is created in the deployment slot.
+> Don't set the [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare) setting in a new deployment slot. This setting is generated for you when the app is created in the deployment slot.
-# [Linux](#tab/linux)
+#### Linux
-The function app must have set `"kind": "functionapp,linux"`, and it must have set property `"reserved": true`. Linux apps should also include a `linuxFxVersion` property under siteConfig. If you are just deploying code, the value for this is determined by your desired runtime stack in the format of runtime|runtimeVersion. For example: `python|3.7`, `node|14` and `dotnet|3.1`.
+The function app must set `"kind": "functionapp,linux"` and the property `"reserved": true`. Linux apps should also include a `linuxFxVersion` property under `siteConfig`. If you're just deploying code, the value for this property is determined by your desired runtime stack in the format `runtime|runtimeVersion`. For example: `python|3.7`, `node|14`, and `dotnet|3.1`.
The [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](functions-app-settings.md#website_contentazurefileconnectionstring) and [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare) settings aren't supported on Linux Consumption plan.
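To check which `linuxFxVersion` runtime values are valid for your target Functions version, a quick hedged lookup with the CLI:

```azurecli
# List the runtime stacks and versions supported for Linux function apps.
az functionapp list-runtimes --os linux
```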
-For a sample Azure Resource Manager template, see [Azure Function App Hosted on Linux Consumption Plan](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-linux-consumption).
-
-```json
-{
- "type": "Microsoft.Web/sites",
- "apiVersion": "2021-02-01",
- "name": "[parameters('functionAppName')]",
- "location": "[parameters('location')]",
- "kind": "functionapp,linux",
- "dependsOn": [
- "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
- "[resourceId('Microsoft.Insights/components', variables('applicationInsightsName'))]"
- ],
- "properties": {
- "reserved": true,
- "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "siteConfig": {
- "linuxFxVersion": "node|14",
- "appSettings": [
- {
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference(resourceId('Microsoft.Insights/components', parameters('functionAppName')), '2015-05-01').InstrumentationKey]"
- },
- {
- "name": "AzureWebJobsStorage",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';EndpointSuffix=', environment().suffixes.storage, ';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2019-06-01').keys[0].value)]"
- },
- {
- "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~4"
- },
- {
- "name": "FUNCTIONS_WORKER_RUNTIME",
- "value": "node"
+For a sample Bicep file/Azure Resource Manager template, see [Azure Function App Hosted on Linux Consumption Plan](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-linux-consumption).
+
+# [Bicep](#tab/bicep)
+
+```bicep
+resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
+ name: functionAppName
+ location: location
+ kind: 'functionapp,linux'
+ properties: {
+ reserved: true
+ serverFarmId: hostingPlan.id
+ siteConfig: {
+ linuxFxVersion: 'node|14'
+ appSettings: [
+ {
+ name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
+ value: applicationInsights.properties.InstrumentationKey
+ }
+ {
+ name: 'AzureWebJobsStorage'
+ value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};EndpointSuffix=${environment().suffixes.storage};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ }
+ {
+ name: 'FUNCTIONS_EXTENSION_VERSION'
+ value: '~4'
+ }
+ {
+ name: 'FUNCTIONS_WORKER_RUNTIME'
+ value: 'node'
} ] }
For a sample Azure Resource Manager template, see [Azure Function App Hosted on
} ```
+# [JSON](#tab/json)
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Web/sites",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('functionAppName')]",
+ "location": "[parameters('location')]",
+ "kind": "functionapp,linux",
+ "dependsOn": [
+ "[resourceId('Microsoft.Insights/components', parameters('applicationInsightsName'))]",
+ "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
+ ],
+ "properties": {
+ "reserved": true,
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "siteConfig": {
+ "linuxFxVersion": "node|14",
+ "appSettings": [
+ {
+ "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
+            "value": "[reference(resourceId('Microsoft.Insights/components', parameters('applicationInsightsName')), '2020-02-02').InstrumentationKey]"
+ },
+ {
+ "name": "AzureWebJobsStorage",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};EndpointSuffix={1};AccountKey={2}', parameters('storageAccountName'), environment().suffixes.storage, listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
+ },
+ {
+ "name": "FUNCTIONS_EXTENSION_VERSION",
+ "value": "~4"
+ },
+ {
+ "name": "FUNCTIONS_WORKER_RUNTIME",
+ "value": "node"
+ }
+ ]
+ }
+ }
+ }
+]
+```
+ <a name="premium"></a>
The Premium plan offers the same scaling as the Consumption plan but includes de
### Create a Premium plan
-A Premium plan is a special type of "serverfarm" resource. You can specify it by using either `EP1`, `EP2`, or `EP3` for the `Name` property value in the `sku` as following:
+A Premium plan is a special type of `serverfarm` resource. You can specify it by using `EP1`, `EP2`, or `EP3` for the `name` property value in the `sku`, as shown in the following samples:
-# [Windows](#tab/windows)
+#### Windows
-```json
-{
- "type": "Microsoft.Web/serverfarms",
- "apiVersion": "2021-02-01",
- "name": "[variables('hostingPlanName')]",
- "location": "[parameters('location')]",
- "sku": {
- "tier": "ElasticPremium",
- "name": "EP1",
- "family": "EP"
- },
- "properties": {
- "name": "[parameters('hostingPlanName')]",
- "maximumElasticWorkerCount": 20
- },
- "kind": "elastic"
+# [Bicep](#tab/bicep)
+
+```bicep
+resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
+ name: hostingPlanName
+ location: location
+ sku: {
+ name: 'EP1'
+ tier: 'ElasticPremium'
+ family: 'EP'
+ }
+ kind: 'elastic'
+ properties: {
+ maximumElasticWorkerCount: 20
+ }
} ```
-# [Linux](#tab/linux)
-
-To run your app on Linux, you must also set property `"reserved": true` for the serverfarms resource:
+# [JSON](#tab/json)
```json
-{
- "type": "Microsoft.Web/serverfarms",
- "apiVersion": "2021-02-01",
- "name": "[variables('hostingPlanName')]",
- "location": "[parameters('location')]",
- "sku": {
- "tier": "ElasticPremium",
- "name": "EP1",
- "family": "EP"
- },
- "properties": {
+"resources": [
+ {
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2022-03-01",
"name": "[parameters('hostingPlanName')]",
- "maximumElasticWorkerCount": 20,
- "reserved": true
- },
- "kind": "elastic"
-}
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "EP1",
+ "tier": "ElasticPremium",
+ "family": "EP"
+ },
+ "kind": "elastic",
+ "properties": {
+ "maximumElasticWorkerCount": 20
+ }
+ }
+]
```
-### Create a function app
-
-For function app on a Premium plan, you will need to set the `serverFarmId` property on the app so that it points to the resource ID of the plan. You should ensure that the function app has a `dependsOn` setting for the plan as well.
+#### Linux
-A Premium plan requires another settings in the site configuration: [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](functions-app-settings.md#website_contentazurefileconnectionstring) and [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare). This property configures the storage account where the function app code and configuration are stored, which are used for dynamic scale.
+To run your app on Linux, you must also set property `"reserved": true` for the serverfarms resource:
-For a sample Azure Resource Manager template, see [Azure Function App Hosted on Premium Plan](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-premium-plan).
+# [Bicep](#tab/bicep)
-The settings required by a function app running in Premium plan differ between Windows and Linux.
+```bicep
+resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
+ name: hostingPlanName
+ location: location
+ sku: {
+ name: 'EP1'
+ tier: 'ElasticPremium'
+ family: 'EP'
+ }
+ kind: 'elastic'
+ properties: {
+ maximumElasticWorkerCount: 20
+ reserved: true
+ }
+}
+```
-# [Windows](#tab/windows)
+# [JSON](#tab/json)
```json
-{
- "type": "Microsoft.Web/sites",
- "apiVersion": "2021-02-01",
- "name": "[parameters('functionAppName')]",
- "location": "[parameters('location')]",
- "kind": "functionapp",
- "dependsOn": [
- "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
- "[resourceId('Microsoft.Insights/components', variables('applicationInsightsName'))]"
- ],
- "properties": {
- "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "siteConfig": {
- "appSettings": [
- {
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference(resourceId('microsoft.insights/components', variables('applicationInsightsName')), '2015-05-01').InstrumentationKey]"
- },
- {
- "name": "AzureWebJobsStorage",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';EndpointSuffix=', environment().suffixes.storage, ';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2019-06-01').keys[0].value)]"
- },
- {
- "name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';EndpointSuffix=', environment().suffixes.storage, ';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2019-06-01').keys[0].value)]"
- },
- {
- "name": "WEBSITE_CONTENTSHARE",
- "value": "[toLower(parameters('functionAppName'))]"
- },
- {
- "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~4"
- },
- {
- "name": "FUNCTIONS_WORKER_RUNTIME",
- "value": "node"
- },
- {
- "name": "WEBSITE_NODE_DEFAULT_VERSION",
- "value": "~14"
- }
- ]
+"resources": [
+ {
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('hostingPlanName')]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "EP1",
+ "tier": "ElasticPremium",
+      "family": "EP"
+ },
+ "kind": "elastic",
+ "properties": {
+ "maximumElasticWorkerCount": 20,
+ "reserved": true
} }
-}
+]
```
-> [!IMPORTANT]
-> You don't need to set the [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare) setting because it's generated for you when the site is first created.
+
+### Create a function app
-# [Linux](#tab/linux)
+For a function app on a Premium plan, you need to set the `serverFarmId` property on the app so that it points to the resource ID of the plan. You should also ensure that the function app has a `dependsOn` reference to the plan.
-The function app must have set `"kind": "functionapp,linux"`, and it must have set property `"reserved": true`. Linux apps should also include a `linuxFxVersion` property under siteConfig. If you are just deploying code, the value for this is determined by your desired runtime stack in the format of runtime|runtimeVersion. For example: `python|3.7`, `node|14` and `dotnet|3.1`.
+A Premium plan requires two more settings in the site configuration: [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](functions-app-settings.md#website_contentazurefileconnectionstring) and [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare). These settings configure the storage account and file share where the function app code and configuration are stored, which are used for dynamic scale.
+
+For a sample Bicep file/Azure Resource Manager template, see [Azure Function App Hosted on Premium Plan](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-premium-plan).
+
+The settings required by a function app running in Premium plan differ between Windows and Linux.
+
+#### Windows
+
+# [Bicep](#tab/bicep)
+
+```bicep
+resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
+ name: functionAppName
+ location: location
+ kind: 'functionapp'
+ properties: {
+    serverFarmId: hostingPlan.id
+ siteConfig: {
+ appSettings: [
+ {
+ name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
+        value: applicationInsights.properties.InstrumentationKey
+ }
+ {
+ name: 'AzureWebJobsStorage'
+ value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};EndpointSuffix=${environment().suffixes.storage};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ }
+ {
+ name: 'WEBSITE_CONTENTAZUREFILECONNECTIONSTRING'
+ value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};EndpointSuffix=${environment().suffixes.storage};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ }
+ {
+ name: 'WEBSITE_CONTENTSHARE'
+ value: toLower(functionAppName)
+ }
+ {
+ name: 'FUNCTIONS_EXTENSION_VERSION'
+ value: '~4'
+ }
+ {
+ name: 'FUNCTIONS_WORKER_RUNTIME'
+ value: 'node'
+ }
+ {
+ name: 'WEBSITE_NODE_DEFAULT_VERSION'
+ value: '~14'
+ }
+ ]
+ }
+ }
+}
+```
+
+# [JSON](#tab/json)
```json
-{
- "type": "Microsoft.Web/sites",
- "apiVersion": "2021-02-01",
- "name": "[parameters('functionAppName')]",
- "location": "[parameters('location')]",
- "kind": "functionapp,linux",
- "dependsOn": [
- "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
- "[resourceId('Microsoft.Insights/components', variables('applicationInsightsName'))]"
- ],
- "properties": {
- "reserved": true,
- "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "siteConfig": {
- "linuxFxVersion": "node|14",
- "appSettings": [
- {
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference(resourceId('microsoft.insights/components', variables('applicationInsightsName')), '2015-05-01').InstrumentationKey]"
- },
- {
- "name": "AzureWebJobsStorage",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';EndpointSuffix=', environment().suffixes.storage, ';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2019-06-01').keys[0].value)]"
- },
- {
- "name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';EndpointSuffix=', environment().suffixes.storage, ';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2019-06-01').keys[0].value)]"
- },
- {
- "name": "WEBSITE_CONTENTSHARE",
- "value": "[toLower(parameters('functionAppName'))]"
- },
- {
- "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~4"
- },
- {
- "name": "FUNCTIONS_WORKER_RUNTIME",
- "value": "node"
+"resources": [
+ {
+ "type": "Microsoft.Web/sites",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('functionAppName')]",
+ "location": "[parameters('location')]",
+ "kind": "functionapp",
+ "dependsOn": [
+ "[resourceId('Microsoft.Insights/components', parameters('applicationInsightsName'))]",
+ "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
+ ],
+ "properties": {
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "siteConfig": {
+ "appSettings": [
+ {
+ "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
+ "value": "[reference(resourceId('Microsoft.Insights/components', parameters('applicationInsightsName')), '2020-02-02').InstrumentationKey]"
+ },
+ {
+ "name": "AzureWebJobsStorage",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};EndpointSuffix={1};AccountKey={2}', parameters('storageAccountName'), environment().suffixes.storage, listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
+ },
+ {
+ "name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};EndpointSuffix={1};AccountKey={2}', parameters('storageAccountName'), environment().suffixes.storage, listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
+ },
+ {
+ "name": "WEBSITE_CONTENTSHARE",
+ "value": "[toLower(parameters('functionAppName'))]"
+ },
+ {
+ "name": "FUNCTIONS_EXTENSION_VERSION",
+ "value": "~4"
+ },
+ {
+ "name": "FUNCTIONS_WORKER_RUNTIME",
+ "value": "node"
+ },
+ {
+ "name": "WEBSITE_NODE_DEFAULT_VERSION",
+ "value": "~14"
+ }
+ ]
+ }
+ }
+ }
+]
+```
+++
+> [!IMPORTANT]
+> You don't need to set the [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare) setting because it's generated for you when the site is first created.
+
+#### Linux
+
+The function app must have `"kind"` set to `"functionapp,linux"` and the `"reserved"` property set to `true`. Linux apps should also include a `linuxFxVersion` property under `siteConfig`. If you're just deploying code, the value of this property is determined by your desired runtime stack, in the format `runtime|runtimeVersion`. For example: `python|3.7`, `node|14`, and `dotnet|3.1`.
+
+# [Bicep](#tab/bicep)
+
+```bicep
+resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
+ name: functionAppName
+ location: location
+ kind: 'functionapp,linux'
+ properties: {
+ reserved: true
+ serverFarmId: hostingPlan.id
+ siteConfig: {
+ linuxFxVersion: 'node|14'
+ appSettings: [
+ {
+ name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
+        value: applicationInsights.properties.InstrumentationKey
+ }
+ {
+ name: 'AzureWebJobsStorage'
+ value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};EndpointSuffix=${environment().suffixes.storage};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ }
+ {
+ name: 'WEBSITE_CONTENTAZUREFILECONNECTIONSTRING'
+ value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};EndpointSuffix=${environment().suffixes.storage};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ }
+ {
+ name: 'WEBSITE_CONTENTSHARE'
+ value: toLower(functionAppName)
+ }
+ {
+ name: 'FUNCTIONS_EXTENSION_VERSION'
+ value: '~4'
+ }
+ {
+ name: 'FUNCTIONS_WORKER_RUNTIME'
+ value: 'node'
} ] }
The function app must have set `"kind": "functionapp,linux"`, and it must have s
} ```
+# [JSON](#tab/json)
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Web/sites",
+    "apiVersion": "2022-03-01",
+ "name": "[parameters('functionAppName')]",
+ "location": "[parameters('location')]",
+ "kind": "functionapp,linux",
+ "dependsOn": [
+ "[resourceId('Microsoft.Insights/components', parameters('applicationInsightsName'))]",
+ "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
+ ],
+ "properties": {
+ "reserved": true,
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "siteConfig": {
+ "linuxFxVersion": "node|14",
+ "appSettings": [
+ {
+ "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
+ "value": "[reference(resourceId('Microsoft.Insights/components', parameters('applicationInsightsName')), '2020-02-02').InstrumentationKey]"
+ },
+ {
+ "name": "AzureWebJobsStorage",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};EndpointSuffix={1};AccountKey={2}', parameters('storageAccountName'), environment().suffixes.storage, listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
+ },
+ {
+ "name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};EndpointSuffix={1};AccountKey={2}', parameters('storageAccountName'), environment().suffixes.storage, listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
+ },
+ {
+ "name": "WEBSITE_CONTENTSHARE",
+ "value": "[toLower(parameters('functionAppName'))]"
+ },
+ {
+ "name": "FUNCTIONS_EXTENSION_VERSION",
+ "value": "~4"
+ },
+ {
+ "name": "FUNCTIONS_WORKER_RUNTIME",
+ "value": "node"
+ }
+ ]
+ }
+ }
+ }
+]
+```
+ <a name="app-service-plan"></a>
The function app must have set `"kind": "functionapp,linux"`, and it must have s
In the App Service plan, your function app runs on dedicated VMs on Basic, Standard, and Premium SKUs, similar to web apps. For details about how the App Service plan works, see the [Azure App Service plans in-depth overview](../app-service/overview-hosting-plans.md).
-For a sample Azure Resource Manager template, see [Function app on Azure App Service plan].
+For a sample Bicep file/Azure Resource Manager template, see [Function app on Azure App Service plan].
-### Create an App Service plan
+### Create a Dedicated plan
-An App Service plan is defined by a "serverfarm" resource. You can specify the SKU as follows:
+In Functions, the Dedicated plan is just a regular App Service plan, which is defined by a `serverfarm` resource. You can specify the SKU as follows:
-# [Windows](#tab/windows)
+#### Windows
-```json
-{
- "type": "Microsoft.Web/serverfarms",
- "apiVersion": "2021-02-01",
- "name": "[variables('hostingPlanName')]",
- "location": "[parameters('location')]",
- "sku": {
- "tier": "Standard",
- "name": "S1",
- "size": "S1",
- "family": "S",
- "capacity": 1
+# [Bicep](#tab/bicep)
+
+```bicep
+resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
+ name: hostingPlanName
+ location: location
+ sku: {
+ tier: 'Standard'
+ name: 'S1'
+ size: 'S1'
+ family: 'S'
+ capacity: 1
} } ```
-# [Linux](#tab/linux)
+# [JSON](#tab/json)
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('hostingPlanName')]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "tier": "Standard",
+ "name": "S1",
+ "size": "S1",
+ "family": "S",
+ "capacity": 1
+ }
+ }
+]
+```
+++
+#### Linux
To run your app on Linux, you must also set property `"reserved": true` for the serverfarms resource:
-```json
-{
- "type": "Microsoft.Web/serverfarms",
- "apiVersion": "2021-02-01",
- "name": "[variables('hostingPlanName')]",
- "location": "[parameters('location')]",
- "sku": {
- "tier": "Standard",
- "name": "S1",
- "size": "S1",
- "family": "S",
- "capacity": 1
- },
- "properties": {
- "reserved": true
+# [Bicep](#tab/bicep)
+
+```bicep
+resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
+ name: hostingPlanName
+ location: location
+ sku: {
+ tier: 'Standard'
+ name: 'S1'
+ size: 'S1'
+ family: 'S'
+ capacity: 1
+ }
+ properties: {
+ reserved: true
} } ```
+# [JSON](#tab/json)
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('hostingPlanName')]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "tier": "Standard",
+ "name": "S1",
+ "size": "S1",
+ "family": "S",
+ "capacity": 1
+ },
+ "properties": {
+ "reserved": true
+ }
+ }
+]
+```
+ ### Create a function app
On App Service plan, you should enable the `"alwaysOn": true` setting under site
The [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](functions-app-settings.md#website_contentazurefileconnectionstring) and [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare) settings aren't supported on the Dedicated plan.
-For a sample Azure Resource Manager template, see [Azure Function App Hosted on Dedicated Plan](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-dedicated-plan).
+For a sample Bicep file/Azure Resource Manager template, see [Azure Function App Hosted on Dedicated Plan](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-dedicated-plan).
The settings required by a function app running in Dedicated plan differ between Windows and Linux.
-# [Windows](#tab/windows)
+#### Windows
-```json
-{
- "type": "Microsoft.Web/sites",
- "apiVersion": "2021-02-01",
- "name": "[parameters('functionAppName')]",
- "location": "[parameters('location')]",
- "kind": "functionapp",
- "dependsOn": [
- "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
- "[resourceId('Microsoft.Insights/components', variables('applicationInsightsName'))]"
- ],
- "properties": {
- "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "siteConfig": {
- "alwaysOn": true,
- "appSettings": [
- {
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference(resourceId('microsoft.insights/components', variables('applicationInsightsName')), '2015-05-01').InstrumentationKey]"
- },
- {
- "name": "AzureWebJobsStorage",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';EndpointSuffix=', environment().suffixes.storage, ';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2019-06-01').keys[0].value)]"
- },
- {
- "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~4"
- },
- {
- "name": "FUNCTIONS_WORKER_RUNTIME",
- "value": "node"
- },
- {
- "name": "WEBSITE_NODE_DEFAULT_VERSION",
- "value": "~14"
+# [Bicep](#tab/bicep)
+
+```bicep
+resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
+ name: functionAppName
+ location: location
+ kind: 'functionapp'
+ properties: {
+ serverFarmId: hostingPlan.id
+ siteConfig: {
+ alwaysOn: true
+ appSettings: [
+ {
+ name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
+          value: applicationInsights.properties.InstrumentationKey
+ }
+ {
+ name: 'AzureWebJobsStorage'
+ value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};EndpointSuffix=${environment().suffixes.storage};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ }
+ {
+ name: 'FUNCTIONS_EXTENSION_VERSION'
+ value: '~4'
+ }
+ {
+ name: 'FUNCTIONS_WORKER_RUNTIME'
+ value: 'node'
+ }
+ {
+ name: 'WEBSITE_NODE_DEFAULT_VERSION'
+ value: '~14'
} ] }
The settings required by a function app running in Dedicated plan differ between
} ```
-# [Linux](#tab/linux)
-
-The function app must have set `"kind": "functionapp,linux"`, and it must have set property `"reserved": true`. Linux apps should also include a `linuxFxVersion` property under siteConfig. If you are just deploying code, the value for this is determined by your desired runtime stack in the format of runtime|runtimeVersion. Examples of `linuxFxVersion` property include: `python|3.7`, `node|14` and `dotnet|3.1`.
+# [JSON](#tab/json)
```json
-{
- "type": "Microsoft.Web/sites",
- "apiVersion": "2021-02-01",
- "name": "[parameters('functionAppName')]",
- "location": "[parameters('location')]",
- "kind": "functionapp,linux",
- "dependsOn": [
- "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
- "[resourceId('Microsoft.Insights/components', variables('applicationInsightsName'))]"
- ],
- "properties": {
- "reserved": true,
- "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "siteConfig": {
- "alwaysOn": true,
- "linuxFxVersion": "node|14",
- "appSettings": [
- {
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference(resourceId('microsoft.insights/components', variables('applicationInsightsName')), '2015-05-01').InstrumentationKey]"
- },
- {
- "name": "AzureWebJobsStorage",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';EndpointSuffix=', environment().suffixes.storage, ';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2019-06-01').keys[0].value)]"
- },
- {
- "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~4"
- },
- {
- "name": "FUNCTIONS_WORKER_RUNTIME",
- "value": "node"
+"resources": [
+ {
+ "type": "Microsoft.Web/sites",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('functionAppName')]",
+ "location": "[parameters('location')]",
+ "kind": "functionapp",
+ "dependsOn": [
+ "[resourceId('Microsoft.Insights/components', parameters('applicationInsightsName'))]",
+ "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
+ ],
+ "properties": {
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "siteConfig": {
+ "alwaysOn": true,
+ "appSettings": [
+ {
+ "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
+ "value": "[reference(resourceId('Microsoft.Insights/components', parameters('applicationInsightsName')), '2020-02-02').InstrumentationKey]"
+ },
+ {
+ "name": "AzureWebJobsStorage",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};EndpointSuffix={1};AccountKey={2}', parameters('storageAccountName'), environment().suffixes.storage, listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
+ },
+ {
+ "name": "FUNCTIONS_EXTENSION_VERSION",
+ "value": "~4"
+ },
+ {
+ "name": "FUNCTIONS_WORKER_RUNTIME",
+ "value": "node"
+ },
+ {
+ "name": "WEBSITE_NODE_DEFAULT_VERSION",
+ "value": "~14"
+ }
+ ]
+ }
+ }
+ }
+]
+```
+++
+#### Linux
+
+The function app must have `"kind"` set to `"functionapp,linux"` and the `"reserved"` property set to `true`. Linux apps should also include a `linuxFxVersion` property under `siteConfig`. If you're just deploying code, the value of this property is determined by your desired runtime stack, in the format `runtime|runtimeVersion`. Examples of `linuxFxVersion` values include `python|3.7`, `node|14`, and `dotnet|3.1`.
+
+# [Bicep](#tab/bicep)
+
+```bicep
+resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
+ name: functionAppName
+ location: location
+ kind: 'functionapp,linux'
+ properties: {
+ reserved: true
+ serverFarmId: hostingPlan.id
+ siteConfig: {
+ alwaysOn: true
+ linuxFxVersion: 'node|14'
+ appSettings: [
+ {
+ name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
+          value: applicationInsights.properties.InstrumentationKey
+ }
+ {
+ name: 'AzureWebJobsStorage'
+ value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};EndpointSuffix=${environment().suffixes.storage};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ }
+ {
+ name: 'FUNCTIONS_EXTENSION_VERSION'
+ value: '~4'
+ }
+ {
+ name: 'FUNCTIONS_WORKER_RUNTIME'
+ value: 'node'
} ] }
The function app must have set `"kind": "functionapp,linux"`, and it must have s
} ```
+# [JSON](#tab/json)
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Web/sites",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('functionAppName')]",
+ "location": "[parameters('location')]",
+ "kind": "functionapp,linux",
+ "dependsOn": [
+ "[resourceId('Microsoft.Insights/components', parameters('applicationInsightsName'))]",
+ "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
+ ],
+ "properties": {
+ "reserved": true,
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "siteConfig": {
+ "alwaysOn": true,
+ "linuxFxVersion": "node|14",
+ "appSettings": [
+ {
+ "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
+ "value": "[reference(resourceId('Microsoft.Insights/components', parameters('applicationInsightsName')), '2020-02-02').InstrumentationKey]"
+ },
+ {
+ "name": "AzureWebJobsStorage",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};EndpointSuffix={1};AccountKey={2}', parameters('storageAccountName'), environment().suffixes.storage, listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
+ },
+ {
+ "name": "FUNCTIONS_EXTENSION_VERSION",
+ "value": "~4"
+ },
+ {
+ "name": "FUNCTIONS_WORKER_RUNTIME",
+ "value": "node"
+ }
+ ]
+ }
+ }
+ }
+]
+```
+ ### Custom Container Image
-If you are [deploying a custom container image](./functions-create-function-linux-custom-image.md), you must specify it with `linuxFxVersion` and include configuration that allows your image to be pulled, as in [Web App for Containers](../app-service/index.yml). Also, set `WEBSITES_ENABLE_APP_SERVICE_STORAGE` to `false`, since your app content is provided in the container itself:
+If you're [deploying a custom container image](./functions-create-function-linux-custom-image.md), you must specify it with `linuxFxVersion` and include configuration that allows your image to be pulled, as in [Web App for Containers](../app-service/index.yml). Also, set `WEBSITES_ENABLE_APP_SERVICE_STORAGE` to `false`, since your app content is provided in the container itself:
-```json
-{
- "apiVersion": "2016-03-01",
- "type": "Microsoft.Web/sites",
- "name": "[variables('functionAppName')]",
- "location": "[resourceGroup().location]",
- "kind": "functionapp",
- "dependsOn": [
- "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
- ],
- "properties": {
- "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "siteConfig": {
- "appSettings": [
+# [Bicep](#tab/bicep)
+
+```bicep
+resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
+ name: functionAppName
+ location: location
+ kind: 'functionapp'
+ properties: {
+ serverFarmId: hostingPlan.id
+ siteConfig: {
+ appSettings: [
{
- "name": "AzureWebJobsStorage",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-01').keys[0].value)]"
- },
+ name: 'AzureWebJobsStorage'
+ value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ }
{
- "name": "FUNCTIONS_WORKER_RUNTIME",
- "value": "node"
- },
+ name: 'FUNCTIONS_WORKER_RUNTIME'
+ value: 'node'
+ }
{
- "name": "WEBSITE_NODE_DEFAULT_VERSION",
- "value": "~14"
- },
+ name: 'WEBSITE_NODE_DEFAULT_VERSION'
+ value: '~14'
+ }
{
- "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~3"
- },
+ name: 'FUNCTIONS_EXTENSION_VERSION'
+ value: '~3'
+ }
{
- "name": "DOCKER_REGISTRY_SERVER_URL",
- "value": "[parameters('dockerRegistryUrl')]"
- },
+ name: 'DOCKER_REGISTRY_SERVER_URL'
+ value: dockerRegistryUrl
+ }
{
- "name": "DOCKER_REGISTRY_SERVER_USERNAME",
- "value": "[parameters('dockerRegistryUsername')]"
- },
+ name: 'DOCKER_REGISTRY_SERVER_USERNAME'
+ value: dockerRegistryUsername
+ }
{
- "name": "DOCKER_REGISTRY_SERVER_PASSWORD",
- "value": "[parameters('dockerRegistryPassword')]"
- },
+ name: 'DOCKER_REGISTRY_SERVER_PASSWORD'
+ value: dockerRegistryPassword
+ }
{
- "name": "WEBSITES_ENABLE_APP_SERVICE_STORAGE",
- "value": "false"
+ name: 'WEBSITES_ENABLE_APP_SERVICE_STORAGE'
+ value: 'false'
}
- ],
- "linuxFxVersion": "DOCKER|myacr.azurecr.io/myimage:mytag"
+ ]
+ linuxFxVersion: 'DOCKER|myacr.azurecr.io/myimage:mytag'
} }
+ dependsOn: [
+ storageAccount
+ ]
} ```
+# [JSON](#tab/json)
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Web/sites",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('functionAppName')]",
+ "location": "[parameters('location')]",
+ "kind": "functionapp",
+ "dependsOn": [
+ "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
+ ],
+ "properties": {
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "siteConfig": {
+ "appSettings": [
+ {
+ "name": "AzureWebJobsStorage",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}', parameters('storageAccountName'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
+ },
+ {
+ "name": "FUNCTIONS_WORKER_RUNTIME",
+ "value": "node"
+ },
+ {
+ "name": "WEBSITE_NODE_DEFAULT_VERSION",
+ "value": "~14"
+ },
+ {
+ "name": "FUNCTIONS_EXTENSION_VERSION",
+ "value": "~3"
+ },
+ {
+ "name": "DOCKER_REGISTRY_SERVER_URL",
+ "value": "[parameters('dockerRegistryUrl')]"
+ },
+ {
+ "name": "DOCKER_REGISTRY_SERVER_USERNAME",
+ "value": "[parameters('dockerRegistryUsername')]"
+ },
+ {
+ "name": "DOCKER_REGISTRY_SERVER_PASSWORD",
+ "value": "[parameters('dockerRegistryPassword')]"
+ },
+ {
+ "name": "WEBSITES_ENABLE_APP_SERVICE_STORAGE",
+ "value": "false"
+ }
+ ],
+ "linuxFxVersion": "DOCKER|myacr.azurecr.io/myimage:mytag"
+ }
+ }
+ }
+]
+```
+++ ## Deploy to Azure Arc Azure Functions can be deployed to [Azure Arc-enabled Kubernetes](../app-service/overview-arc-integration.md). This process largely follows [deploying to an App Service plan](#deploy-on-app-service-plan), with a few differences to note.
-To create the app and plan resources, you must have already [created an App Service Kubernetes environment](../app-service/manage-create-arc-environment.md) for an Azure Arc-enabled Kubernetes cluster. These examples assume you have the resource ID of the custom location and App Service Kubernetes environment that you are deploying to. For most templates, you can supply these as parameters.
+To create the app and plan resources, you must have already [created an App Service Kubernetes environment](../app-service/manage-create-arc-environment.md) for an Azure Arc-enabled Kubernetes cluster. These examples assume you have the resource ID of the custom location and App Service Kubernetes environment that you're deploying to. For most Bicep files/ARM templates, you can supply these values as parameters.
+
+# [Bicep](#tab/bicep)
+
+```bicep
+param kubeEnvironmentId string
+param customLocationId string
+```
+
+# [JSON](#tab/json)
```json
-{
- "parameters": {
- "kubeEnvironmentId" : {
- "type": "string"
- },
- "customLocationId" : {
- "type": "string"
- }
+"parameters": {
+ "kubeEnvironmentId" : {
+ "type": "string"
+ },
+ "customLocationId" : {
+ "type": "string"
} } ``` ++ Both sites and plans must reference the custom location through an `extendedLocation` field. This block sits outside of `properties`, as a peer to `kind` and `location`:
-```json
-{
- "extendedLocation": {
- "type": "customlocation",
- "name": "[parameters('customLocationId')]"
- },
+# [Bicep](#tab/bicep)
+
+```bicep
+resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
+ ...
+  extendedLocation: {
+    name: customLocationId
+  }
} ```
-The plan resource should use the Kubernetes (K1) SKU, and its `kind` field should be "linux,kubernetes". Within `properties`, `reserved` should be "true", and `kubeEnvironmentProfile.id` should be set to the App Service Kubernetes environment resource ID. An example plan might look like the following:
+# [JSON](#tab/json)
```json { "type": "Microsoft.Web/serverfarms",
- "name": "[variables('hostingPlanName')]",
- "location": "[parameters('location')]",
- "apiVersion": "2020-12-01",
- "kind": "linux,kubernetes",
- "sku": {
- "name": "K1",
- "tier": "Kubernetes"
- },
- "extendedLocation": {
- "type": "customlocation",
- "name": "[parameters('customLocationId')]"
- },
- "properties": {
- "name": "[variables('hostingPlanName')]",
- "location": "[parameters('location')]",
- "workerSizeId": "0",
- "numberOfWorkers": "1",
- "kubeEnvironmentProfile": {
- "id": "[parameters('kubeEnvironmentId')]"
+ ...
+  "extendedLocation": {
+    "name": "[parameters('customLocationId')]"
},
- "reserved": true
} } ```
-The function app resource should have its `kind` field set to "functionapp,linux,kubernetes" or "functionapp,linux,kubernetes,container" depending on if you intend to deploy via code or container. An example function app might look like the following:
++
+The plan resource should use the Kubernetes (K1) SKU, and its `kind` field should be `linux,kubernetes`. Within `properties`, `reserved` should be `true`, and `kubeEnvironmentProfile.id` should be set to the App Service Kubernetes environment resource ID. An example plan might look like:
+
+# [Bicep](#tab/bicep)
+
+```bicep
+resource hostingPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
+ name: hostingPlanName
+ location: location
+ kind: 'linux,kubernetes'
+ sku: {
+ name: 'K1'
+ tier: 'Kubernetes'
+ }
+ extendedLocation: {
+ name: customLocationId
+ }
+ properties: {
+ kubeEnvironmentProfile: {
+ id: kubeEnvironmentId
+ }
+ reserved: true
+ }
+}
+```
+
+# [JSON](#tab/json)
```json
- {
- "apiVersion": "2018-11-01",
- "type": "Microsoft.Web/sites",
- "name": "[variables('appName')]",
- "kind": "kubernetes,functionapp,linux,container",
- "location": "[parameters('location')]",
- "extendedLocation": {
- "type": "customlocation",
- "name": "[parameters('customLocationId')]"
- },
- "dependsOn": [
- "[resourceId('Microsoft.Insights/components', variables('appInsightsName'))]",
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
- "[variables('hostingPlanId')]"
- ],
- "properties": {
- "serverFarmId": "[variables('hostingPlanId')]",
- "siteConfig": {
- "linuxFxVersion": "DOCKER|mcr.microsoft.com/azure-functions/dotnet:3.0-appservice-quickstart",
- "appSettings": [
+"resources": [
+ {
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('hostingPlanName')]",
+ "location": "[parameters('location')]",
+ "kind": "linux,kubernetes",
+ "sku": {
+ "name": "K1",
+ "tier": "Kubernetes"
+ },
+ "extendedLocation": {
+ "name": "[parameters('customLocationId')]"
+ },
+ "properties": {
+ "kubeEnvironmentProfile": {
+ "id": "[parameters('kubeEnvironmentId')]"
+ },
+ "reserved": true
+ }
+ }
+]
+```
+++
+The function app resource should have its `kind` field set to **functionapp,linux,kubernetes** or **functionapp,linux,kubernetes,container**, depending on whether you intend to deploy via code or a container. An example function app that deploys a quickstart container might look like:
+
+# [Bicep](#tab/bicep)
+
+```bicep
+resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
+ name: functionAppName
+ kind: 'kubernetes,functionapp,linux,container'
+ location: location
+ extendedLocation: {
+ name: customLocationId
+ }
+ properties: {
+    serverFarmId: hostingPlan.id
+ siteConfig: {
+ linuxFxVersion: 'DOCKER|mcr.microsoft.com/azure-functions/dotnet:3.0-appservice-quickstart'
+ appSettings: [
{
- "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~3"
- },
+ name: 'FUNCTIONS_EXTENSION_VERSION'
+ value: '~3'
+ }
{
- "name": "AzureWebJobsStorage",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2015-05-01-preview').key1)]"
-
- },
+ name: 'AzureWebJobsStorage'
+ value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ }
{
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference(resourceId('microsoft.insights/components/', variables('appInsightsName')), '2015-05-01').InstrumentationKey]"
+ name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
+        value: applicationInsights.properties.InstrumentationKey
}
- ],
- "alwaysOn": true
+ ]
+ alwaysOn: true
} }
+ dependsOn: [
+ storageAccount
+ hostingPlan
+ ]
} ```
+# [JSON](#tab/json)
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Web/sites",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('functionAppName')]",
+ "kind": "kubernetes,functionapp,linux,container",
+ "location": "[parameters('location')]",
+ "extendedLocation": {
+ "name": "[parameters('customLocationId')]"
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Insights/components', parameters('applicationInsightsName'))]",
+ "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
+ ],
+ "properties": {
+      "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
+ "siteConfig": {
+ "linuxFxVersion": "DOCKER|mcr.microsoft.com/azure-functions/dotnet:3.0-appservice-quickstart",
+ "appSettings": [
+ {
+ "name": "FUNCTIONS_EXTENSION_VERSION",
+ "value": "~3"
+ },
+ {
+ "name": "AzureWebJobsStorage",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}', parameters('storageAccountName'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value)]"
+ },
+ {
+ "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
+ "value": "[reference(resourceId('Microsoft.Insights/components', parameters('applicationInsightsName')), '2020-02-02').InstrumentationKey]"
+ }
+ ],
+ "alwaysOn": true
+ }
+ }
+ }
+]
+```
+++ ## Customizing a deployment A function app has many child resources that you can use in your deployment, including app settings and source control options. You also might choose to remove the **sourcecontrols** child resource, and use a different [deployment option](functions-continuous-deployment.md) instead. > [!IMPORTANT]
-> To successfully deploy your application by using Azure Resource Manager, it's important to understand how resources are deployed in Azure. In the following example, top-level configurations are applied by using **siteConfig**. It's important to set these configurations at a top level, because they convey information to the Functions runtime and deployment engine. Top-level information is required before the child **sourcecontrols/web** resource is applied. Although it's possible to configure these settings in the child-level **config/appSettings** resource, in some cases your function app must be deployed *before* **config/appSettings** is applied. For example, when you are using functions with [Logic Apps](../logic-apps/index.yml), your functions are a dependency of another resource.
+> To successfully deploy your application by using Azure Resource Manager, it's important to understand how resources are deployed in Azure. In the following example, top-level configurations are applied by using `siteConfig`. It's important to set these configurations at a top level, because they convey information to the Functions runtime and deployment engine. Top-level information is required before the child **sourcecontrols/web** resource is applied. Although it's possible to configure these settings in the child-level **config/appSettings** resource, in some cases your function app must be deployed *before* **config/appSettings** is applied. For example, when you're using functions with [Logic Apps](../logic-apps/index.yml), your functions are a dependency of another resource.
+
+# [Bicep](#tab/bicep)
+
+```bicep
+resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
+ name: functionAppName
+ location: location
+ kind: 'functionapp'
+ properties: {
+ serverFarmId: hostingPlan.id
+ siteConfig: {
+ alwaysOn: true
+ appSettings: [
+ {
+ name: 'FUNCTIONS_EXTENSION_VERSION'
+ value: '~3'
+ }
+ {
+ name: 'Project'
+ value: 'src'
+ }
+ ]
+ }
+ }
+ dependsOn: [
+ storageAccount
+ ]
+}
+
+resource config 'Microsoft.Web/sites/config@2022-03-01' = {
+ parent: functionApp
+ name: 'appsettings'
+ properties: {
+ AzureWebJobsStorage: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ AzureWebJobsDashboard: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ FUNCTIONS_EXTENSION_VERSION: '~3'
+ FUNCTIONS_WORKER_RUNTIME: 'dotnet'
+ Project: 'src'
+ }
+ dependsOn: [
+ sourcecontrol
+ storageAccount
+ ]
+}
+
+resource sourcecontrol 'Microsoft.Web/sites/sourcecontrols@2022-03-01' = {
+ parent: functionApp
+ name: 'web'
+ properties: {
+ repoUrl: repoUrl
+ branch: branch
+ isManualIntegration: true
+ }
+}
+```
+
+# [JSON](#tab/json)
```json
-{
- "apiVersion": "2015-08-01",
- "name": "[parameters('appName')]",
- "type": "Microsoft.Web/sites",
- "kind": "functionapp",
- "location": "[parameters('location')]",
- "dependsOn": [
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
- "[resourceId('Microsoft.Web/serverfarms', parameters('appName'))]"
- ],
- "properties": {
- "serverFarmId": "[variables('appServicePlanName')]",
- "siteConfig": {
+"resources": [
+ {
+ "type": "Microsoft.Web/sites",
+ "apiVersion": "2022-03-01",
+ "name": "[variables('functionAppName')]",
+ "location": "[parameters('location')]",
+ "kind": "functionapp",
+ "properties": {
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
+ "siteConfig": {
"alwaysOn": true, "appSettings": [
- {
- "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~3"
- },
- {
- "name": "Project",
- "value": "src"
- }
+ {
+ "name": "FUNCTIONS_EXTENSION_VERSION",
+ "value": "~3"
+ },
+ {
+ "name": "Project",
+ "value": "src"
+ }
]
- }
- },
- "resources": [
- {
- "apiVersion": "2015-08-01",
- "name": "appsettings",
- "type": "config",
- "dependsOn": [
- "[resourceId('Microsoft.Web/Sites', parameters('appName'))]",
- "[resourceId('Microsoft.Web/Sites/sourcecontrols', parameters('appName'), 'web')]",
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
- ],
- "properties": {
- "AzureWebJobsStorage": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-01').keys[0].value)]",
- "AzureWebJobsDashboard": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-01').keys[0].value)]",
- "FUNCTIONS_EXTENSION_VERSION": "~3",
- "FUNCTIONS_WORKER_RUNTIME": "dotnet",
- "Project": "src"
} },
- {
- "apiVersion": "2015-08-01",
- "name": "web",
- "type": "sourcecontrols",
- "dependsOn": [
- "[resourceId('Microsoft.Web/sites/', parameters('appName'))]"
- ],
- "properties": {
- "RepoUrl": "[parameters('sourceCodeRepositoryURL')]",
- "branch": "[parameters('sourceCodeBranch')]",
- "IsManualIntegration": "[parameters('sourceCodeManualIntegration')]"
- }
- }
- ]
-}
+ "dependsOn": [
+ "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
+ "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
+ ]
+ },
+ {
+ "type": "Microsoft.Web/sites/config",
+ "apiVersion": "2022-03-01",
+ "name": "[format('{0}/{1}', variables('functionAppName'), 'appsettings')]",
+ "properties": {
+        "AzureWebJobsStorage": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}', variables('storageAccountName'), listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2021-09-01').keys[0].value)]",
+        "AzureWebJobsDashboard": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}', variables('storageAccountName'), listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2021-09-01').keys[0].value)]",
+ "FUNCTIONS_EXTENSION_VERSION": "~3",
+ "FUNCTIONS_WORKER_RUNTIME": "dotnet",
+ "Project": "src"
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Web/sites', variables('functionAppName'))]",
+ "[resourceId('Microsoft.Web/sites/sourcecontrols', variables('functionAppName'), 'web')]",
+ "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
+ ]
+ },
+ {
+ "type": "Microsoft.Web/sites/sourcecontrols",
+ "apiVersion": "2022-03-01",
+ "name": "[format('{0}/{1}', variables('functionAppName'), 'web')]",
+ "properties": {
+ "repoUrl": "[parameters('repoURL')]",
+ "branch": "[parameters('branch')]",
+ "isManualIntegration": true
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Web/sites', variables('functionAppName'))]"
+ ]
+ }
+]
``` ++ > [!TIP]
-> This template uses the [Project](https://github.com/projectkudu/kudu/wiki/Customizing-deployments#using-app-settings-instead-of-a-deployment-file) app settings value, which sets the base directory in which the Functions deployment engine (Kudu) looks for deployable code. In our repository, our functions are in a subfolder of the **src** folder. So, in the preceding example, we set the app settings value to `src`. If your functions are in the root of your repository, or if you are not deploying from source control, you can remove this app settings value.
+> This Bicep/ARM template uses the [Project](https://github.com/projectkudu/kudu/wiki/Customizing-deployments#using-app-settings-instead-of-a-deployment-file) app settings value, which sets the base directory in which the Functions deployment engine (Kudu) looks for deployable code. In our repository, our functions are in a subfolder of the **src** folder. So, in the preceding example, we set the app settings value to `src`. If your functions are in the root of your repository, or if you're not deploying from source control, you can remove this app settings value.
## Deploy your template
-You can use any of the following ways to deploy your template:
+You can use any of the following ways to deploy your Bicep file or ARM template:
+
+# [Bicep](#tab/bicep)
+
+- [Azure CLI](../azure-resource-manager/bicep/deploy-cli.md)
+- [PowerShell](../azure-resource-manager/bicep/deploy-powershell.md)
+
+# [JSON](#tab/json)
-- [PowerShell](../azure-resource-manager/templates/deploy-powershell.md)
-- [Azure CLI](../azure-resource-manager/templates/deploy-cli.md)
- [Azure portal](../azure-resource-manager/templates/deploy-portal.md)
-- [REST API](../azure-resource-manager/templates/deploy-rest.md)
+- [Azure CLI](../azure-resource-manager/templates/deploy-cli.md)
+- [PowerShell](../azure-resource-manager/templates/deploy-powershell.md)
++ ### Deploy to Azure button
+> [!NOTE]
+> Currently, this method doesn't support deploying Bicep files.
+ Replace ```<url-encoded-path-to-azuredeploy-json>``` with a [URL-encoded](https://www.bing.com/search?q=url+encode) version of the raw path of your `azuredeploy.json` file in GitHub.
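+For instance, a raw path and its URL-encoded form might look like the following (the repository path here is hypothetical; substitute your own):
+
+```text
+# Hypothetical raw path to azuredeploy.json:
+https://raw.githubusercontent.com/<your-org>/<your-repo>/main/azuredeploy.json
+
+# The same path, URL-encoded:
+https%3A%2F%2Fraw.githubusercontent.com%2F<your-org>%2F<your-repo>%2Fmain%2Fazuredeploy.json
+```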
-Here is an example that uses markdown:
+Here's an example that uses markdown:
```markdown [![Deploy to Azure](https://azuredeploy.net/deploybutton.png)](https://portal.azure.com/#create/Microsoft.Template/uri/<url-encoded-path-to-azuredeploy-json>) ```
-Here is an example that uses HTML:
+Here's an example that uses HTML:
```html <a href="https://portal.azure.com/#create/Microsoft.Template/uri/<url-encoded-path-to-azuredeploy-json>" target="_blank"><img src="https://azuredeploy.net/deploybutton.png"></a>
Here is an example that uses HTML:
### Deploy using PowerShell
-The following PowerShell commands create a resource group and deploy a template that creates a function app with its required resources. To run locally, you must have [Azure PowerShell](/powershell/azure/install-az-ps) installed. Run [`Connect-AzAccount`](/powershell/module/az.accounts/connect-azaccount) to sign in.
+The following PowerShell commands create a resource group and deploy a Bicep file/ARM template that creates a function app with its required resources. To run locally, you must have [Azure PowerShell](/powershell/azure/install-az-ps) installed. Run [`Connect-AzAccount`](/powershell/module/az.accounts/connect-azaccount) to sign in.
+
+# [Bicep](#tab/bicep)
```powershell # Register Resource Providers if they're not already registered
Register-AzResourceProvider -ProviderNamespace "microsoft.storage"
# Create a resource group for the function app New-AzResourceGroup -Name "MyResourceGroup" -Location 'West Europe'
-# Create the parameters for the file, which for this template is the function app name.
-$TemplateParams = @{"appName" = "<function-app-name>"}
+# Deploy the template
+New-AzResourceGroupDeployment -ResourceGroupName "MyResourceGroup" -TemplateFile main.bicep -Verbose
+```
+
+# [JSON](#tab/json)
+
+```powershell
+# Register Resource Providers if they're not already registered
+Register-AzResourceProvider -ProviderNamespace "microsoft.web"
+Register-AzResourceProvider -ProviderNamespace "microsoft.storage"
+
+# Create a resource group for the function app
+New-AzResourceGroup -Name "MyResourceGroup" -Location 'West Europe'
# Deploy the template
-New-AzResourceGroupDeployment -ResourceGroupName "MyResourceGroup" -TemplateFile template.json -TemplateParameterObject $TemplateParams -Verbose
+New-AzResourceGroupDeployment -ResourceGroupName "MyResourceGroup" -TemplateFile azuredeploy.json -Verbose
```
-To test out this deployment, you can use a [template like this one](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.web/function-app-create-dynamic/azuredeploy.json) that creates a function app on Windows in a Consumption plan. Replace `<function-app-name>` with a unique name for your function app.
++
+To test out this deployment, you can use a [template like this one](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/function-app-create-dynamic) that creates a function app on Windows in a Consumption plan.
## Next steps
azure-functions Functions Premium Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-premium-plan.md
See the complete regional availability of Functions on the [Azure web site](http
|Brazil South| 100 | 20 | |Canada Central| 100 | 20 | |Central India| 100 | 20 |
-|Central US| 100 | 40 |
+|Central US| 100 | 80 |
|China East 2| 100 | 20 | |China North 2| 100 | 20 | |East Asia| 100 | 20 |
-|East US | 100 | 60 |
-|East US 2| 100 | 40 |
+|East US | 100 | 80 |
+|East US 2| 100 | 60 |
|France Central| 100 | 20 | |Germany West Central| 100 | 20 | |Japan East| 100 | 20 |
See the complete regional availability of Functions on the [Azure web site](http
|USGov Texas| 100 | Not Available | |USGov Virginia| 100 | 20 | |West Central US| 100 | 20 |
-|West Europe| 100 | 40 |
+|West Europe| 100 | 80 |
|West India| 100 | 20 | |West US| 100 | 20 | |West US 2| 100 | 20 |
azure-functions Language Support Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/language-support-policy.md
To learn more about specific language version support policy timeline, visit the
* .NET - [dotnet.microsoft.com](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) * Node - [github.com](https://github.com/nodejs/Release#release-schedule) * Java - [azul.com](https://www.azul.com/products/azul-support-roadmap/)
-* PowerShell - [docs.microsoft.com](/powershell/scripting/powershell-support-lifecycle#powershell-end-of-support-dates)
+* PowerShell - [Microsoft technical documentation](/powershell/scripting/powershell-support-lifecycle#powershell-end-of-support-dates)
* Python - [devguide.python.org](https://devguide.python.org/#status-of-python-branches) ## Configuring language versions
azure-functions Security Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/security-concepts.md
Connection strings and other credentials stored in application settings gives al
Managed identities can be used in place of secrets for connections from some triggers and bindings. See [Identity-based connections](#identity-based-connections).
-For more information, see [How to use managed identities for App Service and Azure Functions](../app-service/overview-managed-identity.md?toc=%2fazure%2fazure-functions%2ftoc.json).
+For more information, see [How to use managed identities for App Service and Azure Functions](../app-service/overview-managed-identity.md?toc=/azure/azure-functions/toc.json).
#### Restrict CORS access
You can also encrypt settings by default in the local.settings.json file when de
While application settings are sufficient for most functions, you may want to share the same secrets across multiple services. In this case, redundant storage of secrets results in more potential vulnerabilities. A more secure approach is to use a central secret storage service and use references to this service instead of the secrets themselves.
-[Azure Key Vault](../key-vault/general/overview.md) is a service that provides centralized secrets management, with full control over access policies and audit history. You can use a Key Vault reference in the place of a connection string or key in your application settings. To learn more, see [Use Key Vault references for App Service and Azure Functions](../app-service/app-service-key-vault-references.md?toc=%2fazure%2fazure-functions%2ftoc.json).
+[Azure Key Vault](../key-vault/general/overview.md) is a service that provides centralized secrets management, with full control over access policies and audit history. You can use a Key Vault reference in the place of a connection string or key in your application settings. To learn more, see [Use Key Vault references for App Service and Azure Functions](../app-service/app-service-key-vault-references.md?toc=/azure/azure-functions/toc.json).
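+
+As a sketch, a Key Vault reference used as an application setting value takes a form like the following (the vault and secret names are hypothetical):
+
+```text
+@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
+```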
### Identity-based connections
Restricting network access to your function app lets you control who can access
### Set access restrictions
-Access restrictions allow you to define lists of allow/deny rules to control traffic to your app. Rules are evaluated in priority order. If there are no rules defined, then your app will accept traffic from any address. To learn more, see [Azure App Service Access Restrictions](../app-service/app-service-ip-restrictions.md?toc=%2fazure%2fazure-functions%2ftoc.json).
+Access restrictions allow you to define lists of allow/deny rules to control traffic to your app. Rules are evaluated in priority order. If there are no rules defined, then your app will accept traffic from any address. To learn more, see [Azure App Service Access Restrictions](../app-service/app-service-ip-restrictions.md?toc=/azure/azure-functions/toc.json).
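As a minimal sketch (resource names and the address range are placeholders), a single allow rule like the following causes all other traffic to be denied:

```
# Sketch: allow one network range; everything else is implicitly denied
# once at least one Allow rule exists.
az functionapp config access-restriction add \
  --name <app> --resource-group <rg> \
  --rule-name corp-net --action Allow \
  --ip-address 203.0.113.0/24 --priority 100
```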
### Private site access
azure-functions Security Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-glossary-cloud-terminology.md
The compute resources that [Azure App Service](app-service/overview.md) provides
## availability set
A collection of virtual machines that are managed together to provide application redundancy and reliability. The use of an availability set ensures that during either a planned or unplanned maintenance event at least one virtual machine is available.
-See [Manage the availability of Windows virtual machines](./virtual-machines/availability.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json) and [Manage the availability of Linux virtual machines](./virtual-machines/availability.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json)
+See [Manage the availability of Windows virtual machines](./virtual-machines/availability.md?toc=/azure/virtual-machines/windows/toc.json) and [Manage the availability of Linux virtual machines](./virtual-machines/availability.md?toc=/azure/virtual-machines/linux/toc.json)
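For a concrete illustration (a sketch with placeholder names, not part of the glossary definition), an availability set is created first and then referenced when VMs are created:

```
# Sketch: create an availability set, then place a VM in it.
az vm availability-set create \
  --name myAvailabilitySet --resource-group <rg> \
  --platform-fault-domain-count 2 \
  --platform-update-domain-count 5
az vm create \
  --name myVM --resource-group <rg> \
  --image UbuntuLTS --availability-set myAvailabilitySet
```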
## <a name="classic-model"></a>Azure classic deployment model
One of two [deployment models](./azure-resource-manager/management/deployment-models.md) used to deploy resources in Azure (the new model is Azure Resource Manager). Some Azure services support only the Resource Manager deployment model, some support only the classic deployment model, and some support both. The documentation for each Azure service specifies which model(s) they support.
One of two [deployment models](./azure-resource-manager/management/deployment-mo
## fault domain
The collection of virtual machines in an availability set that can possibly fail at the same time. An example is a group of machines in a rack that share a common power source and network switch. In Azure, the virtual machines in an availability set are automatically separated across multiple fault domains.
-See [Manage the availability of Windows virtual machines](./virtual-machines/availability.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json) or [Manage the availability of Linux virtual machines](./virtual-machines/availability.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json)
+See [Manage the availability of Windows virtual machines](./virtual-machines/availability.md?toc=/azure/virtual-machines/windows/toc.json) or [Manage the availability of Linux virtual machines](./virtual-machines/availability.md?toc=/azure/virtual-machines/linux/toc.json)
## geo
A defined boundary for data residency that typically contains two or more regions. The boundaries may be within or beyond national borders and are influenced by tax regulation. Every geo has at least one region. Examples of geos are Asia Pacific and Japan. Also called *geography*.
See [Active Geo-Replication for Azure SQL Database](/azure/azure-sql/database/au
## image
A file that contains the operating system and application configuration that can be used to create any number of virtual machines. In Azure there are two types of images: VM image and OS image. A VM image includes an operating system and all disks attached to a virtual machine when the image is created. An OS image contains only a generalized operating system with no data disk configurations.
-See [Navigate and select Windows virtual machine images in Azure with PowerShell or the CLI](virtual-machines/windows/cli-ps-findimage.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json)
+See [Navigate and select Windows virtual machine images in Azure with PowerShell or the CLI](virtual-machines/windows/cli-ps-findimage.md?toc=/azure/virtual-machines/windows/toc.json)
## limits
The number of resources that can be created or the performance benchmark that can be achieved. Limits are typically associated with subscriptions, services, and offerings.
A tenant is a group of users or an organization that share access with specific
## update domain
The collection of virtual machines in an availability set that are updated at the same time. Virtual machines in the same update domain are restarted together during planned maintenance. Azure never restarts more than one update domain at a time. Also referred to as an upgrade domain.
-See [Manage the availability of Windows virtual machines](./virtual-machines/availability.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json) and [Manage the availability of Linux virtual machines](./virtual-machines/availability.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json)
+See [Manage the availability of Windows virtual machines](./virtual-machines/availability.md?toc=/azure/virtual-machines/windows/toc.json) and [Manage the availability of Linux virtual machines](./virtual-machines/availability.md?toc=/azure/virtual-machines/linux/toc.json)
## <a name="vm"></a>virtual machine
-The software implementation of a physical computer that runs an operating system. Multiple virtual machines can run simultaneously on the same hardware. In Azure, virtual machines are available in a variety of sizes.
-See [Virtual Machines documentation](https://azure.microsoft.com/documentation/services/virtual-machines/)
+The software implementation of a physical computer that runs an operating system. Multiple virtual machines can run simultaneously on the same hardware. In Azure, virtual machines are available in a variety of sizes. For more information, see [Virtual Machines documentation](/azure/virtual-machines/)
## <a name="vm-extension"></a>virtual machine extension
A resource that implements behaviors or features that either help other programs work or provide the ability for you to interact with a running computer. For example, you could use the VM Access extension to reset or modify remote access values on an Azure virtual machine.
-See [About virtual machine extensions and features (Windows)](./virtual-machines/extensions/features-windows.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json) or [About virtual machine extensions and features (Linux)](./virtual-machines/extensions/features-linux.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json)
+See [About virtual machine extensions and features (Windows)](./virtual-machines/extensions/features-windows.md?toc=/azure/virtual-machines/windows/toc.json) or [About virtual machine extensions and features (Linux)](./virtual-machines/extensions/features-linux.md?toc=/azure/virtual-machines/linux/toc.json)
## <a name="vnet"></a>virtual network
A network that provides connectivity between your Azure resources that is isolated from all other Azure tenants. An [Azure VPN Gateway](vpn-gateway/vpn-gateway-about-vpngateways.md) lets you establish connections between virtual networks and between a virtual network and an on-premises network. You can fully control the IP address blocks, DNS settings, security policies, and route tables within a virtual network.
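As a hedged sketch (names and address ranges are placeholders), a virtual network with one subnet can be created like this:

```
# Sketch: create a virtual network with a single subnet.
az network vnet create \
  --name myVnet --resource-group <rg> \
  --address-prefix 10.0.0.0/16 \
  --subnet-name default --subnet-prefix 10.0.0.0/24
```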
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Clients First Business Solutions LLC](https://www.clientsfirst-us.com)|
|[ClearShark](https://clearshark.com/)|
|[CloudFit Software, LLC](https://www.cloudfitsoftware.com/)|
-|[Cloud Navigator, Inc - formerly ISC](https://www.cloudnav.com )|
+|[Cloud Navigator, Inc - formerly ISC](https://www.cloudnav.com)|
|[CNSS - Cherokee Nation System Solutions LLC](https://cherokee-federal.com/about/cherokee-nation-system-solutions)|
|[CodeLynx, LLC](http://www.codelynx.com/)|
|[Columbus US, Inc.](https://www.columbusglobal.com)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Norseman, Inc](https://www.norseman.com)|
|[Nortec](https://www.nortec.com)|
|[Northrop Grumman](https://www.northropgrumman.com)|
-|[NTS Cloud](http://ntscloud.com/ )|
+|[NTS Cloud](http://ntscloud.com/)|
|[NTT America, Inc.](https://www.us.ntt.net)|
|[Nubelity LLC](http://www.nubelity.com)|
|[NuSoft Solutions (Atrio Systems, Inc.)](https://nusoftsolutions.com)|
azure-monitor Agent Linux Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux-troubleshoot.md
-# How to troubleshoot issues with the Log Analytics agent for Linux
+# Troubleshoot issues with the Log Analytics agent for Linux
-This article provides help troubleshooting errors you might experience with the Log Analytics agent for Linux in Azure Monitor and suggests possible solutions to resolve them.
+This article provides help in troubleshooting errors you might experience with the Log Analytics agent for Linux in Azure Monitor and suggests possible solutions to resolve them.
## Log Analytics Troubleshooting Tool
-The Log Analytics Agent Linux Troubleshooting Tool is a script designed to help find and diagnose issues with the Log Analytics Agent. It is automatically included with the agent upon installation. Running the tool should be the first step in diagnosing an issue.
+The Log Analytics agent for Linux Troubleshooting Tool is a script designed to help find and diagnose issues with the Log Analytics agent. It's automatically included with the agent upon installation. Running the tool should be the first step in diagnosing an issue.
-### How to Use
+### Use the Troubleshooting Tool
+
+To run the Troubleshooting Tool, paste the following command into a terminal window on a machine with the Log Analytics agent:
-The Troubleshooting Tool can be run by pasting the following command into a terminal window on a machine with the Log Analytics agent:
`sudo /opt/microsoft/omsagent/bin/troubleshooter`
-### Manual Installation
+### Manual installation
-The Troubleshooting Tool is automatically included upon installation of the Log Analytics Agent. However, if installation fails in any way, it can also be installed manually by following the steps below.
+The Troubleshooting Tool is automatically included when the Log Analytics agent is installed. If installation fails in any way, you can also install the tool manually:
-1. Ensure that the [GNU Project Debugger (GDB)](https://www.gnu.org/software/gdb/) is installed on the machine since the troubleshooter relies on it.
-2. Copy the troubleshooter bundle onto your machine: `wget https://raw.github.com/microsoft/OMS-Agent-for-Linux/master/source/code/troubleshooter/omsagent_tst.tar.gz`
-3. Unpack the bundle: `tar -xzvf omsagent_tst.tar.gz`
-4. Run the manual installation: `sudo ./install_tst`
+1. Ensure that the [GNU Project Debugger (GDB)](https://www.gnu.org/software/gdb/) is installed on the machine because the troubleshooter relies on it.
+1. Copy the troubleshooter bundle onto your machine: `wget https://raw.github.com/microsoft/OMS-Agent-for-Linux/master/source/code/troubleshooter/omsagent_tst.tar.gz`
+1. Unpack the bundle: `tar -xzvf omsagent_tst.tar.gz`
+1. Run the manual installation: `sudo ./install_tst`
-### Scenarios Covered
+### Scenarios covered
-Below is a list of scenarios checked by the Troubleshooting Tool:
+The Troubleshooting Tool checks the following scenarios:
-1. Agent is unhealthy, heartbeat doesn't work properly
-2. Agent doesn't start, can't connect to Log Analytic Services
-3. Agent syslog isn't working
-4. Agent has high CPU / memory usage
-5. Agent having installation issues
-6. Agent custom logs aren't working
-7. Collect Agent logs
+- The agent is unhealthy; the heartbeat doesn't work properly.
+- The agent doesn't start or can't connect to Log Analytics.
+- The agent Syslog isn't working.
+- The agent has high CPU or memory usage.
+- The agent has installation issues.
+- The agent custom logs aren't working.
+- Agent logs can't be collected.
-For more details, please check out our [GitHub documentation](https://github.com/microsoft/OMS-Agent-for-Linux/blob/master/docs/Troubleshooting-Tool.md).
+For more information, see the [Troubleshooting Tool documentation on GitHub](https://github.com/microsoft/OMS-Agent-for-Linux/blob/master/docs/Troubleshooting-Tool.md).
> [!NOTE]
- > Please run the Log Collector tool when you experience an issue. Having the logs initially will greatly help our support team troubleshoot your issue quicker.
+ > Run the Log Collector tool when you experience an issue. Having the logs initially will help our support team troubleshoot your issue faster.
-## Purge and Re-Install the Linux Agent
+## Purge and reinstall the Linux agent
-We've seen that a clean re-install of the Agent will fix most issues. In fact this may be the first suggestion from Support to get the Agent into a uncorrupted state from our support team. Running the troubleshooter, log collect, and attempting a clean re-install will help solve issues more quickly.
+A clean reinstall of the agent fixes most issues. This task might be the first suggestion from our support team to get the agent into an uncorrupted state. Running the Troubleshooting Tool and Log Collector tool and attempting a clean reinstall helps to solve issues more quickly.
1. Download the purge script:
- `$ wget https://raw.githubusercontent.com/microsoft/OMS-Agent-for-Linux/master/tools/purge_omsagent.sh`
-2. Run the purge script (with sudo permissions):
-- `$ sudo sh purge_omsagent.sh`
+
+ `$ wget https://raw.githubusercontent.com/microsoft/OMS-Agent-for-Linux/master/tools/purge_omsagent.sh`
+1. Run the purge script (with sudo permissions):
+
+ `$ sudo sh purge_omsagent.sh`
-## Important log locations and Log Collector tool
+## Important log locations and the Log Collector tool
File | Path
- | --
Log Analytics agent for Linux log file | `/var/opt/microsoft/omsagent/<workspace id>/log/omsagent.log`
Log Analytics agent configuration log file | `/var/opt/microsoft/omsconfig/omsconfig.log`
- We recommend you to use our log collector tool to retrieve important logs for troubleshooting or before submitting a GitHub issue. You can read more about the tool and how to run it [here](https://github.com/Microsoft/OMS-Agent-for-Linux/blob/master/tools/LogCollector/OMS_Linux_Agent_Log_Collector.md).
+ We recommend that you use the Log Collector tool to retrieve important logs for troubleshooting or before you submit a GitHub issue. For more information about the tool and how to run it, see [OMS Linux Agent Log Collector](https://github.com/Microsoft/OMS-Agent-for-Linux/blob/master/tools/LogCollector/OMS_Linux_Agent_Log_Collector.md).
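While reproducing an issue, it can also help to watch the agent log directly; a minimal sketch using the log path from the table above:

```
# Watch the agent log live while reproducing the issue.
# Replace <workspace id> with your workspace ID.
tail -f /var/opt/microsoft/omsagent/<workspace id>/log/omsagent.log
```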
## Important configuration files
- Category | File Location
+ Category | File location
 -- | --
 Syslog | `/etc/syslog-ng/syslog-ng.conf` or `/etc/rsyslog.conf` or `/etc/rsyslog.d/95-omsagent.conf`
 Performance, Nagios, Zabbix, Log Analytics output and general agent | `/etc/opt/microsoft/omsagent/<workspace id>/conf/omsagent.conf`
- Additional configurations | `/etc/opt/microsoft/omsagent/<workspace id>/conf/omsagent.d/*.conf`
+ Extra configurations | `/etc/opt/microsoft/omsagent/<workspace id>/conf/omsagent.d/*.conf`
> [!NOTE]
- > Editing configuration files for performance counters and Syslog is overwritten if the collection is configured from the [Agents configuration](../agents/agent-data-sources.md#configuring-data-sources) in the Azure portal for your workspace. To disable configuration for all agents, disable collection from **Agents configuration** or for a single agent run the following:
+ > Editing configuration files for performance counters and Syslog is overwritten if the collection is configured from the [agent's configuration](../agents/agent-data-sources.md#configuring-data-sources) in the Azure portal for your workspace. To disable configuration for all agents, disable collection from **Agents configuration**. For a single agent, run the following script:
+>
> `sudo /opt/microsoft/omsconfig/Scripts/OMS_MetaConfigHelper.py --disable && sudo rm /etc/opt/omi/conf/omsconfig/configuration/Current.mof* /etc/opt/omi/conf/omsconfig/configuration/Pending.mof*`

## Installation error codes
-| Error Code | Meaning |
+| Error code | Meaning |
| | |
-| NOT_DEFINED | Because the necessary dependencies are not installed, the auoms auditd plugin will not be installed. Installation of auoms failed, install package auditd. |
-| 2 | Invalid option provided to the shell bundle. Run `sudo sh ./omsagent-*.universal*.sh --help` for usage |
+| NOT_DEFINED | Because the necessary dependencies aren't installed, the auoms auditd plug-in won't be installed. Installation of auoms failed. Install package auditd. |
+| 2 | Invalid option provided to the shell bundle. Run `sudo sh ./omsagent-*.universal*.sh --help` for usage. |
| 3 | No option provided to the shell bundle. Run `sudo sh ./omsagent-*.universal*.sh --help` for usage. |
-| 4 | Invalid package type OR invalid proxy settings; omsagent-*rpm*.sh packages can only be installed on RPM-based systems, and omsagent-*deb*.sh packages can only be installed on Debian-based systems. It is recommend you use the universal installer from the [latest release](../vm/monitor-virtual-machine.md#agents). Also review to verify your proxy settings. |
-| 5 | The shell bundle must be executed as root OR there was 403 error returned during onboarding. Run your command using `sudo`. |
-| 6 | Invalid package architecture OR there was error 200 error returned during onboarding; omsagent-\*x64.sh packages can only be installed on 64-bit systems, and omsagent-\*x86.sh packages can only be installed on 32-bit systems. Download the correct package for your architecture from the [latest release](https://github.com/Microsoft/OMS-Agent-for-Linux/releases/latest). |
+| 4 | Invalid package type *or* invalid proxy settings. The omsagent-*rpm*.sh packages can only be installed on RPM-based systems. The omsagent-*deb*.sh packages can only be installed on Debian-based systems. We recommend that you use the universal installer from the [latest release](../vm/monitor-virtual-machine.md#agents). Also review to verify your proxy settings. |
+| 5 | The shell bundle must be executed as root *or* there was a 403 error returned during onboarding. Run your command by using `sudo`. |
+| 6 | Invalid package architecture *or* there was a 200 error returned during onboarding. The omsagent-\*x64.sh packages can only be installed on 64-bit systems. The omsagent-\*x86.sh packages can only be installed on 32-bit systems. Download the correct package for your architecture from the [latest release](https://github.com/Microsoft/OMS-Agent-for-Linux/releases/latest). |
| 17 | Installation of OMS package failed. Look through the command output for the root failure. |
| 18 | Installation of OMSConfig package failed. Look through the command output for the root failure. |
| 19 | Installation of OMI package failed. Look through the command output for the root failure. |
We've seen that a clean re-install of the Agent will fix most issues. In fact th
| 21 | Installation of Provider kits failed. Look through the command output for the root failure. |
| 22 | Installation of bundled package failed. Look through the command output for the root failure. |
| 23 | SCX or OMI package already installed. Use `--upgrade` instead of `--install` to install the shell bundle. |
-| 30 | Internal bundle error. File a [GitHub Issue](https://github.com/Microsoft/OMS-Agent-for-Linux/issues) with details from the output. |
-| 55 | Unsupported openssl version OR Cannot connect to Azure Monitor OR dpkg is locked OR missing curl program. |
+| 30 | Internal bundle error. File a [GitHub issue](https://github.com/Microsoft/OMS-Agent-for-Linux/issues) with details from the output. |
+| 55 | Unsupported openssl version *or* can't connect to Azure Monitor *or* dpkg is locked *or* missing curl program. |
| 61 | Missing Python ctypes library. Install the Python ctypes library or package (python-ctypes). |
-| 62 | Missing tar program, install tar. |
-| 63 | Missing sed program, install sed. |
-| 64 | Missing curl program, install curl. |
-| 65 | Missing gpg program, install gpg. |
+| 62 | Missing tar program. Install tar. |
+| 63 | Missing sed program. Install sed. |
+| 64 | Missing curl program. Install curl. |
+| 65 | Missing gpg program. Install gpg. |
## Onboarding error codes
-| Error Code | Meaning |
+| Error code | Meaning |
| | |
| 2 | Invalid option provided to the omsadmin script. Run `sudo sh /opt/microsoft/omsagent/bin/omsadmin.sh -h` for usage. |
| 3 | Invalid configuration provided to the omsadmin script. Run `sudo sh /opt/microsoft/omsagent/bin/omsadmin.sh -h` for usage. |
We've seen that a clean re-install of the Agent will fix most issues. In fact th
| 6 | Non-200 HTTP error received from Azure Monitor. See the full output of the omsadmin script for details. |
| 7 | Unable to connect to Azure Monitor. See the full output of the omsadmin script for details. |
| 8 | Error onboarding to Log Analytics workspace. See the full output of the omsadmin script for details. |
-| 30 | Internal script error. File a [GitHub Issue](https://github.com/Microsoft/OMS-Agent-for-Linux/issues) with details from the output. |
-| 31 | Error generating agent ID. File a [GitHub Issue](https://github.com/Microsoft/OMS-Agent-for-Linux/issues) with details from the output. |
+| 30 | Internal script error. File a [GitHub issue](https://github.com/Microsoft/OMS-Agent-for-Linux/issues) with details from the output. |
+| 31 | Error generating agent ID. File a [GitHub issue](https://github.com/Microsoft/OMS-Agent-for-Linux/issues) with details from the output. |
| 32 | Error generating certificates. See the full output of the omsadmin script for details. |
-| 33 | Error generating metaconfiguration for omsconfig. File a [GitHub Issue](https://github.com/Microsoft/OMS-Agent-for-Linux/issues) with details from the output. |
+| 33 | Error generating metaconfiguration for omsconfig. File a [GitHub issue](https://github.com/Microsoft/OMS-Agent-for-Linux/issues) with details from the output. |
| 34 | Metaconfiguration generation script not present. Retry onboarding with `sudo sh /opt/microsoft/omsagent/bin/omsadmin.sh -w <Workspace ID> -s <Workspace Key>`. |

## Enable debug logging
-### OMS output plugin debug
+### OMS output plug-in debug
- FluentD allows for plugin-specific logging levels allowing you to specify different log levels for inputs and outputs. To specify a different log level for OMS output, edit the general agent configuration at `/etc/opt/microsoft/omsagent/<workspace id>/conf/omsagent.conf`.
+ FluentD allows for plug-in-specific logging levels that allow you to specify different log levels for inputs and outputs. To specify a different log level for OMS output, edit the general agent configuration at `/etc/opt/microsoft/omsagent/<workspace id>/conf/omsagent.conf`.
- In the OMS output plugin, before the end of the configuration file, change the `log_level` property from `info` to `debug`:
+ In the OMS output plug-in, before the end of the configuration file, change the `log_level` property from `info` to `debug`:
```
<match oms.** docker.**>
We've seen that a clean re-install of the Agent will fix most issues. In fact th
</match>
```
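As a shortcut, the same edit can be made with `sed`; this is a sketch that assumes the property currently reads `log_level info`:

```
# Sketch: switch the OMS output plug-in to debug logging and restart the agent.
# Replace <workspace id> with your workspace ID.
sudo sed -i 's/log_level info/log_level debug/' \
  /etc/opt/microsoft/omsagent/<workspace id>/conf/omsagent.conf
sudo /opt/microsoft/omsagent/bin/service_control restart
```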
-Debug logging allows you to see batched uploads to Azure Monitor separated by type, number of data items, and time taken to send:
+Debug logging allows you to see batched uploads to Azure Monitor separated by type, number of data items, and time taken to send.
-*Example debug enabled log:*
+Here's an example debug-enabled log:
```
Success sending oms.nagios x 1 in 0.14s
Success sending oms.syslog.authpriv.info x 1 in 0.91s
### Verbose output
-Instead of using the OMS output plugin you can also output data items directly to `stdout`, which is visible in the Log Analytics agent for Linux log file.
+Instead of using the OMS output plug-in, you can output data items directly to `stdout`, which is visible in the Log Analytics agent for Linux log file.
-In the Log Analytics general agent configuration file at `/etc/opt/microsoft/omsagent/<workspace id>/conf/omsagent.conf`, comment out the OMS output plugin by adding a `#` in front of each line:
+In the Log Analytics general agent configuration file at `/etc/opt/microsoft/omsagent/<workspace id>/conf/omsagent.conf`, comment out the OMS output plug-in by adding a `#` in front of each line:
```
#<match oms.** docker.**>
In the Log Analytics general agent configuration file at `/etc/opt/microsoft/oms
#</match>
```
-Below the output plugin, uncomment the following section by removing the `#` in front of each line:
+Below the output plug-in, uncomment the following section by removing the `#` in front of each line:
```
<match **>
Below the output plugin, uncomment the following section by removing the `#` in
</match>
```
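After both edits, restarting the agent and tailing its log is a quick way to confirm the verbose output; a minimal sketch:

```
# Restart the agent so the configuration change takes effect,
# then watch the items echoed to stdout in the agent log.
sudo /opt/microsoft/omsagent/bin/service_control restart
tail -f /var/opt/microsoft/omsagent/<workspace id>/log/omsagent.log
```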
-## Issue: Unable to connect through proxy to Azure Monitor
+## Issue: Unable to connect through proxy to Azure Monitor
### Probable causes
-* The proxy specified during onboarding was incorrect
-* The Azure Monitor and Azure Automation Service Endpoints are not included in the approved list in your datacenter
+* The proxy specified during onboarding was incorrect.
+* The Azure Monitor and Azure Automation service endpoints aren't included in the approved list in your datacenter.
### Resolution
-1. Reonboard to Azure Monitor with the Log Analytics agent for Linux by using the following command with the option `-v` enabled. It allows verbose output of the agent connecting through the proxy to Azure Monitor.
+1. Reonboard to Azure Monitor with the Log Analytics agent for Linux by using the following command with the option `-v` enabled. It allows verbose output of the agent connecting through the proxy to Azure Monitor:
`/opt/microsoft/omsagent/bin/omsadmin.sh -w <Workspace ID> -s <Workspace Key> -p <Proxy Conf> -v`
-2. Review the section [Update proxy settings](agent-manage.md#update-proxy-settings) to verify you have properly configured the agent to communicate through a proxy server.
+1. Review the section [Update proxy settings](agent-manage.md#update-proxy-settings) to verify you've properly configured the agent to communicate through a proxy server.
-3. Double-check that the endpoints outlined in the Azure Monitor [network firewall requirements](./log-analytics-agent.md#firewall-requirements) list are added to an allow list correctly. If you use Azure Automation, the necessary network configuration steps are linked above as well.
+1. Double-check that the endpoints outlined in the Azure Monitor [network firewall requirements](./log-analytics-agent.md#firewall-requirements) list are added to an allow list correctly. If you use Azure Automation, the necessary network configuration steps are also linked above.
## Issue: You receive a 403 error when trying to onboard

### Probable causes
-* Date and Time is incorrect on Linux Server
-* Workspace ID and Workspace Key used are not correct
+* Date and time are incorrect on the Linux server.
+* The workspace ID and workspace key aren't correct.
### Resolution
-1. Check the time on your Linux server with the command date. If the time is +/- 15 minutes from current time, then onboarding fails. To correct this update the date and/or timezone of your Linux server.
-2. Verify you have installed the latest version of the Log Analytics agent for Linux. The newest version now notifies you if time skew is causing the onboarding failure.
-3. Reonboard using correct Workspace ID and Workspace Key following the installation instructions earlier in this article.
+1. Check the time on your Linux server with the `date` command. If the time is +/- 15 minutes from the current time, onboarding fails. To correct this situation, update the date and/or time zone of your Linux server (see the sketch after this list).
+1. Verify that you've installed the latest version of the Log Analytics agent for Linux. The newest version now notifies you if time skew is causing the onboarding failure.
+1. Reonboard by using the correct workspace ID and workspace key in the installation instructions earlier in this article.
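A minimal sketch for the time check in step 1 (`timedatectl` assumes a systemd-based distro):

```
# Compare the server clock (UTC) against a known-good time source.
date -u
# Enable NTP synchronization (assumption: systemd-based distro).
sudo timedatectl set-ntp true
```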
## Issue: You see a 500 and 404 error in the log file right after onboarding
-This is a known issue that occurs on first upload of Linux data into a Log Analytics workspace. This does not affect data being sent or service experience.
+This is a known issue that occurs on the first upload of Linux data into a Log Analytics workspace. This issue doesn't affect data being sent or service experience.
## Issue: You see omiagent using 100% CPU

### Probable causes
-A regression in nss-pem package [v1.0.3-5.el7](https://centos.pkgs.org/7/centos-x86_64/nss-pem-1.0.3-7.el7.x86_64.rpm.html) caused a severe performance issue, that we've been seeing come up a lot in Redhat/Centos 7.x distributions. To learn more about this issue, check the following documentation: Bug [1667121 Performance regression in libcurl](https://bugzilla.redhat.com/show_bug.cgi?id=1667121).
+A regression in the nss-pem package [v1.0.3-5.el7](https://centos.pkgs.org/7/centos-x86_64/nss-pem-1.0.3-7.el7.x86_64.rpm.html) caused a severe performance issue. We've been seeing this issue come up a lot in Red Hat/CentOS 7.x distributions. To learn more about this issue, see bug [1667121 Performance regression in libcurl](https://bugzilla.redhat.com/show_bug.cgi?id=1667121).
-Performance related bugs don't happen all the time, and they are very difficult to reproduce. If you experience such issue with omiagent you should use the script omiHighCPUDiagnostics.sh which will collect the stack trace of the omiagent when exceeding a certain threshold.
+Performance-related bugs don't happen all the time, and they're difficult to reproduce. If you experience such an issue with omiagent, use the script `omiHighCPUDiagnostics.sh`, which will collect the stack trace of the omiagent when it exceeds a certain threshold.
-1. Download the script <br/>
+1. Download the script: <br/>
`wget https://raw.githubusercontent.com/microsoft/OMS-Agent-for-Linux/master/tools/LogCollector/source/omiHighCPUDiagnostics.sh`
-2. Run diagnostics for 24 hours with 30% CPU threshold <br/>
+1. Run diagnostics for 24 hours with 30% CPU threshold: <br/>
`bash omiHighCPUDiagnostics.sh --runtime-in-min 1440 --cpu-threshold 30`
-3. Callstack will be dumped in omiagent_trace file, If you notice many Curl and NSS function calls, follow resolution steps below.
+1. Callstack will be dumped in the omiagent_trace file. If you notice many curl and NSS function calls, follow these resolution steps.
-### Resolution (step by step)
+### Resolution
-1. Upgrade the nss-pem package to [v1.0.3-5.el7_6.1](https://centos.pkgs.org/7/centos-x86_64/nss-pem-1.0.3-7.el7.x86_64.rpm.html). <br/>
+1. Upgrade the nss-pem package to [v1.0.3-5.el7_6.1](https://centos.pkgs.org/7/centos-x86_64/nss-pem-1.0.3-7.el7.x86_64.rpm.html): <br/>
`sudo yum upgrade nss-pem`
-2. If nss-pem is not available for upgrade (mostly happens on Centos), then downgrade curl to 7.29.0-46. If by mistake you run "yum update", then curl will be upgraded to 7.29.0-51 and the issue will happen again. <br/>
+1. If nss-pem isn't available for upgrade, which mostly happens on CentOS, downgrade curl to 7.29.0-46. If you run `yum update` by mistake, curl will be upgraded to 7.29.0-51 and the issue will happen again: <br/>
`sudo yum downgrade curl libcurl`
-3. Restart OMI: <br/>
+1. Restart OMI: <br/>
`sudo scxadmin -restart`
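On RPM-based systems, you can confirm the package versions involved before and after the fix; a small sketch:

```
# Check the installed versions of the packages involved in the regression.
rpm -q nss-pem curl libcurl
```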
-## Issue: You are not seeing forwarded Syslog messages
+## Issue: You're not seeing forwarded Syslog messages
### Probable causes
-* The configuration applied to the Linux server does not allow collection of the sent facilities and/or log levels
-* Syslog is not being forwarded correctly to the Linux server
-* The number of messages being forwarded per second are too great for the base configuration of the Log Analytics agent for Linux to handle
+* The configuration applied to the Linux server doesn't allow collection of the sent facilities or log levels.
+* Syslog isn't being forwarded correctly to the Linux server.
+* The number of messages being forwarded per second is too great for the base configuration of the Log Analytics agent for Linux to handle.
### Resolution
-* Verify the configuration in the Log Analytics workspace for Syslog has all the facilities and the correct log levels. Review [configure Syslog collection in the Azure portal](data-sources-syslog.md#configure-syslog-in-the-azure-portal)
-* Verify the native syslog messaging daemons (`rsyslog`, `syslog-ng`) are able to receive the forwarded messages
-* Check firewall settings on the Syslog server to ensure that messages are not being blocked
-* Simulate a Syslog message to Log Analytics using `logger` command
- * `logger -p local0.err "This is my test message"`
+* Verify the configuration in the Log Analytics workspace for Syslog has all the facilities and the correct log levels. Review [configure Syslog collection in the Azure portal](data-sources-syslog.md#configure-syslog-in-the-azure-portal).
+* Verify the native Syslog messaging daemons (`rsyslog`, `syslog-ng`) can receive the forwarded messages.
+* Check firewall settings on the Syslog server to ensure that messages aren't being blocked.
+* Simulate a Syslog message to Log Analytics by using a `logger` command: <br/>
+ `logger -p local0.err "This is my test message"`
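A hedged sketch of the checks above (`ss` may need to be replaced with `netstat` on older distros):

```
# Confirm the syslog daemon is running and listening, then send a test message.
sudo ss -tulpn | grep -E 'rsyslog|syslog-ng'
logger -p local0.err "This is my test message"
```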
-## Issue: You are receiving Errno address already in use in omsagent log file
+## Issue: You're receiving "Errno address already in use" in the omsagent log file
-If you see `[error]: unexpected error error_class=Errno::EADDRINUSE error=#<Errno::EADDRINUSE: Address already in use - bind(2) for "127.0.0.1" port 25224>` in omsagent.log.
+You see `[error]: unexpected error error_class=Errno::EADDRINUSE error=#<Errno::EADDRINUSE: Address already in use - bind(2) for "127.0.0.1" port 25224>` in omsagent.log.
### Probable causes
-This error indicates that the Linux Diagnostic extension (LAD) is installed side by side with the Log Analytics Linux VM extension, and it is using same port for syslog data collection as omsagent.
+This error indicates that the Linux diagnostic extension (LAD) is installed side by side with the Log Analytics Linux VM extension. It's using the same port for Syslog data collection as omsagent.
### Resolution
-1. As root, execute the following commands (note that 25224 is an example and it is possible that in your environment you see a different port number used by LAD):
+1. As root, execute the following commands. Note that 25224 is an example, and it's possible that in your environment you see a different port number used by LAD.
```
/opt/microsoft/omsagent/bin/configure_syslog.sh configure LAD 25229
This error indicates that the Linux Diagnostic extension (LAD) is installed side
You then need to edit the correct `rsyslogd` or `syslog_ng` config file and change the LAD-related configuration to write to port 25229.
-2. If the VM is running `rsyslogd`, the file to be modified is: `/etc/rsyslog.d/95-omsagent.conf` (if it exists, else `/etc/rsyslog`). If the VM is running `syslog_ng`, the file to be modified is: `/etc/syslog-ng/syslog-ng.conf`.
-3. Restart omsagent `sudo /opt/microsoft/omsagent/bin/service_control restart`.
-4. Restart syslog service.
+1. If the VM is running `rsyslogd`, the file to be modified is `/etc/rsyslog.d/95-omsagent.conf` (if it exists, else `/etc/rsyslog`). If the VM is running `syslog_ng`, the file to be modified is `/etc/syslog-ng/syslog-ng.conf`.
+1. Restart omsagent `sudo /opt/microsoft/omsagent/bin/service_control restart`.
+1. Restart the Syslog service.
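A minimal sketch of steps 3 and 4 (assumes `rsyslogd` managed by systemd; substitute `syslog-ng` if that daemon is in use):

```
# Restart the agent and the syslog daemon after the port change.
sudo /opt/microsoft/omsagent/bin/service_control restart
sudo systemctl restart rsyslog   # or: sudo systemctl restart syslog-ng
```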
-## Issue: You are unable to uninstall omsagent using purge option
+## Issue: You're unable to uninstall omsagent using the purge option
### Probable causes
-* Linux Diagnostic Extension is installed
-* Linux Diagnostic Extension was installed and uninstalled, but you still see an error about omsagent being used by mdsd and cannot be removed.
+* The Linux diagnostic extension is installed.
+* The Linux diagnostic extension was installed and uninstalled, but you still see an error about omsagent being used by mdsd and it can't be removed.
### Resolution
-1. Uninstall the Linux Diagnostic Extension (LAD).
-2. Remove Linux Diagnostic Extension files from the machine if they are present in the following location: `/var/lib/waagent/Microsoft.Azure.Diagnostics.LinuxDiagnostic-<version>/` and `/var/opt/microsoft/omsagent/LAD/`.
+1. Uninstall the Linux diagnostic extension.
+1. Remove Linux diagnostic extension files from the machine if they're present in the following location: `/var/lib/waagent/Microsoft.Azure.Diagnostics.LinuxDiagnostic-<version>/` and `/var/opt/microsoft/omsagent/LAD/`.
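A sketch of step 2, using the locations listed above:

```
# Remove leftover LAD files if they're still present.
sudo rm -rf /var/lib/waagent/Microsoft.Azure.Diagnostics.LinuxDiagnostic-*/
sudo rm -rf /var/opt/microsoft/omsagent/LAD/
```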
-## Issue: You cannot see data any Nagios data
+## Issue: You can't see any Nagios data
### Probable causes
-* Omsagent user does not have permissions to read from Nagios log file
-* Nagios source and filter have not been uncommented from omsagent.conf file
+* The omsagent user doesn't have permissions to read from the Nagios log file.
+* The Nagios source and filter haven't been uncommented from the omsagent.conf file.
### Resolution
-1. Add omsagent user to read from Nagios file by following these [instructions](https://github.com/Microsoft/OMS-Agent-for-Linux/blob/master/docs/OMS-Agent-for-Linux.md#nagios-alerts).
-2. In the Log Analytics agent for Linux general configuration file at `/etc/opt/microsoft/omsagent/<workspace id>/conf/omsagent.conf`, ensure that **both** the Nagios source and filter are uncommented.
+1. Add the omsagent user to read from the Nagios file by following these [instructions](https://github.com/Microsoft/OMS-Agent-for-Linux/blob/master/docs/OMS-Agent-for-Linux.md#nagios-alerts).
+1. In the Log Analytics agent for Linux general configuration file at `/etc/opt/microsoft/omsagent/<workspace id>/conf/omsagent.conf`, ensure that *both* the Nagios source and filter are uncommented.
```
<source>
This error indicates that the Linux Diagnostic extension (LAD) is installed side
</filter>
```
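A hedged sketch of step 1; the `nagios` group name and log path are assumptions and vary by install:

```
# Give the omsagent user group read access to the Nagios log.
# "nagios" and /var/log/nagios/ are assumptions; adjust for your install.
sudo usermod -a -G nagios omsagent
sudo chmod -R g+rx /var/log/nagios/
```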
-## Issue: You are not seeing any Linux data
+## Issue: You aren't seeing any Linux data
### Probable causes
-* Onboarding to Azure Monitor failed
-* Connection to Azure Monitor is blocked
-* Virtual machine was rebooted
-* OMI package was manually upgraded to a newer version compared to what was installed by Log Analytics agent for Linux package
-* OMI is frozen, blocking OMS agent
-* DSC resource logs *class not found* error in `omsconfig.log` log file
-* Log Analytics agent for data is backed up
+* Onboarding to Azure Monitor failed.
+* Connection to Azure Monitor is blocked.
+* Virtual machine was rebooted.
+* OMI package was manually upgraded to a newer version compared to what was installed by the Log Analytics agent for Linux package.
+* OMI is frozen, blocking the OMS agent.
+* DSC resource logs *class not found* error in `omsconfig.log` log file.
+* Log Analytics agent for data is backed up.
* DSC logs *Current configuration does not exist. Execute Start-DscConfiguration command with -Path parameter to specify a configuration file and create a current configuration first.* in `omsconfig.log` log file, but no log message exists about `PerformRequiredConfigurationChecks` operations.

### Resolution
-1. Install all dependencies like auditd package.
-2. Check if onboarding to Azure Monitor was successful by checking if the following file exists: `/etc/opt/microsoft/omsagent/<workspace id>/conf/omsadmin.conf`. If it was not, reonboard using the omsadmin.sh command line [instructions](https://github.com/Microsoft/OMS-Agent-for-Linux/blob/master/docs/OMS-Agent-for-Linux.md#onboarding-using-the-command-line).
-4. If using a proxy, check proxy troubleshooting steps above.
-5. In some Azure distribution systems, omid OMI server daemon does not start after the virtual machine is rebooted. This will result in not seeing Audit, ChangeTracking, or UpdateManagement solution-related data. The workaround is to manually start omi server by running `sudo /opt/omi/bin/service_control restart`.
-6. After OMI package is manually upgraded to a newer version, it has to be manually restarted for Log Analytics agent to continue functioning. This step is required for some distros where OMI server does not automatically start after it is upgraded. Run `sudo /opt/omi/bin/service_control restart` to restart OMI.
-* In some situations, OMI can become frozen. The OMS agent may enter a blocked state waiting for OMI, blocking all data collection. The OMS agent process will be running but there will be no activity, evidenced by no new log lines (such as sent heartbeats) present in `omsagent.log`. Restart OMI with `sudo /opt/omi/bin/service_control restart` to recover the agent.
-7. If you see DSC resource *class not found* error in omsconfig.log, run `sudo /opt/omi/bin/service_control restart`.
-8. In some cases, when the Log Analytics agent for Linux cannot talk to Azure Monitor, data on the agent is backed up to the full buffer size: 50 MB. The agent should be restarted by running the following command `/opt/microsoft/omsagent/bin/service_control restart`.
+1. Install all dependencies like the auditd package.
+1. Check if onboarding to Azure Monitor was successful by checking if the following file exists: `/etc/opt/microsoft/omsagent/<workspace id>/conf/omsadmin.conf`. If it wasn't, reonboard by using the omsadmin.sh command-line [instructions](https://github.com/Microsoft/OMS-Agent-for-Linux/blob/master/docs/OMS-Agent-for-Linux.md#onboarding-using-the-command-line).
+1. If you're using a proxy, check the preceding proxy troubleshooting steps.
+1. In some Azure distribution systems, the omid OMI server daemon doesn't start after the virtual machine is rebooted. If this is the case, you won't see Audit, ChangeTracking, or UpdateManagement solution-related data. The workaround is to manually start the OMI server by running `sudo /opt/omi/bin/service_control restart`.
+1. After the OMI package is manually upgraded to a newer version, it must be manually restarted for the Log Analytics agent to continue functioning. This step is required for some distros where the OMI server doesn't automatically start after it's upgraded. Run `sudo /opt/omi/bin/service_control restart` to restart the OMI.
+
+ In some situations, the OMI can become frozen. The OMS agent might enter a blocked state waiting for the OMI, which blocks all data collection. The OMS agent process will be running but there will be no activity, which is evidenced by no new log lines (such as sent heartbeats) present in `omsagent.log`. Restart the OMI with `sudo /opt/omi/bin/service_control restart` to recover the agent.
+1. If you see a DSC resource *class not found* error in omsconfig.log, run `sudo /opt/omi/bin/service_control restart`.
+1. In some cases, when the Log Analytics agent for Linux can't talk to Azure Monitor, data on the agent is backed up to the full buffer size of 50 MB. The agent should be restarted by running the following command: `/opt/microsoft/omsagent/bin/service_control restart`.
> [!NOTE]
- > This issue is fixed in Agent version 1.1.0-28 or later
+ > This issue is fixed in agent version 1.1.0-28 or later.
>
-* If `omsconfig.log` log file does not indicate that `PerformRequiredConfigurationChecks` operations are running periodically on the system, there might be a problem with the cron job/service. Make sure cron job exists under `/etc/cron.d/OMSConsistencyInvoker`. If needed run the following commands to create the cron job:
-
- ```
- mkdir -p /etc/cron.d/
- echo "*/15 * * * * omsagent /opt/omi/bin/OMSConsistencyInvoker > 2>&1" | sudo tee /etc/cron.d/OMSConsistencyInvoker
- ```
-
- Also, make sure the cron service is running. You can use `service cron status` with Debian, Ubuntu, SUSE, or `service crond status` with RHEL, CentOS, Oracle Linux to check the status of this service. If the service does not exist, you can install the binaries and start the service using the following:
-
- **Ubuntu/Debian**
-
- ```
- # To Install the service binaries
- sudo apt-get install -y cron
- # To start the service
- sudo service cron start
- ```
-
- **SUSE**
-
- ```
- # To Install the service binaries
- sudo zypper in cron -y
- # To start the service
- sudo systemctl enable cron
- sudo systemctl start cron
- ```
-
- **RHEL/CeonOS**
-
- ```
- # To Install the service binaries
- sudo yum install -y crond
- # To start the service
- sudo service crond start
- ```
-
- **Oracle Linux**
-
- ```
- # To Install the service binaries
- sudo yum install -y cronie
- # To start the service
- sudo service crond start
- ```
-
-## Issue: When configuring collection from the portal for Syslog or Linux performance counters, the settings are not applied
+ * If the `omsconfig.log` log file doesn't indicate that `PerformRequiredConfigurationChecks` operations are running periodically on the system, there might be a problem with the cron job/service. Make sure the cron job exists under `/etc/cron.d/OMSConsistencyInvoker`. If needed, run the following commands to create the cron job:
+
+ ```
+ mkdir -p /etc/cron.d/
+ echo "*/15 * * * * omsagent /opt/omi/bin/OMSConsistencyInvoker > 2>&1" | sudo tee /etc/cron.d/OMSConsistencyInvoker
+ ```
+
+ * Also, make sure the cron service is running. You can use `service cron status` with Debian, Ubuntu, and SUSE or `service crond status` with RHEL, CentOS, and Oracle Linux to check the status of this service. If the service doesn't exist, you can install the binaries and start the service by using the following instructions:
+
+ **Ubuntu/Debian**
+
+ ```
+    # To install the service binaries
+ sudo apt-get install -y cron
+ # To start the service
+ sudo service cron start
+ ```
+
+ **SUSE**
+
+ ```
+    # To install the service binaries
+ sudo zypper in cron -y
+ # To start the service
+ sudo systemctl enable cron
+ sudo systemctl start cron
+ ```
+
+    **RHEL/CentOS**
+
+ ```
+    # To install the service binaries
+ sudo yum install -y crond
+ # To start the service
+ sudo service crond start
+ ```
+
+ **Oracle Linux**
+
+ ```
+    # To install the service binaries
+ sudo yum install -y cronie
+ # To start the service
+ sudo service crond start
+ ```
+
+## Issue: When you configure collection from the portal for Syslog or Linux performance counters, the settings aren't applied
### Probable causes
-* The Log Analytics agent for Linux has not picked up the latest configuration
-* The changed settings in the portal were not applied
+* The Log Analytics agent for Linux hasn't picked up the latest configuration.
+* The changed settings in the portal weren't applied.
### Resolution

**Background:** `omsconfig` is the Log Analytics agent for Linux configuration agent that looks for new portal-side configuration every five minutes. This configuration is then applied to the Log Analytics agent for Linux configuration files located at `/etc/opt/microsoft/omsagent/conf/omsagent.conf`.
-* In some cases, the Log Analytics agent for Linux configuration agent might not be able to communicate with the portal configuration service resulting in latest configuration not being applied.
- 1. Check that the `omsconfig` agent is installed by running `dpkg --list omsconfig` or `rpm -qi omsconfig`. If it is not installed, reinstall the latest version of the Log Analytics agent for Linux.
+In some cases, the Log Analytics agent for Linux configuration agent might not be able to communicate with the portal configuration service. This scenario results in the latest configuration not being applied.
+
+1. Check that the `omsconfig` agent is installed by running `dpkg --list omsconfig` or `rpm -qi omsconfig`. If it isn't installed, reinstall the latest version of the Log Analytics agent for Linux.
- 2. Check that the `omsconfig` agent can communicate with Azure Monitor by running the following command `sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/GetDscConfiguration.py'`. This command returns the configuration that agent receives from the service, including Syslog settings, Linux performance counters, and custom logs. If this command fails, run the following command `sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/PerformRequiredConfigurationChecks.py'`. This command forces the omsconfig agent to talk to Azure Monitor and retrieve the latest configuration.
+1. Check that the `omsconfig` agent can communicate with Azure Monitor by running the following command: `sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/GetDscConfiguration.py'`. This command returns the configuration that the agent receives from the service, including Syslog settings, Linux performance counters, and custom logs. If this command fails, run the following command: `sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/PerformRequiredConfigurationChecks.py'`. This command forces the omsconfig agent to talk to Azure Monitor and retrieve the latest configuration.
-## Issue: You are not seeing any custom log data
+## Issue: You aren't seeing any custom log data
### Probable causes
* Onboarding to Azure Monitor failed.
-* The setting **Apply the following configuration to my Linux Servers** has not been selected.
-* omsconfig has not picked up the latest custom log configuration from the service.
-* Log Analytics agent for Linux user `omsagent` is unable to access the custom log due to permissions or not being found. You may see the following errors:
-* `[DATETIME] [warn]: file not found. Continuing without tailing it.`
-* `[DATETIME] [error]: file not accessible by omsagent.`
-* Known Issue with Race Condition fixed in Log Analytics agent for Linux version 1.1.0-217
+* The setting **Apply the following configuration to my Linux Servers** hasn't been selected.
+* `omsconfig` hasn't picked up the latest custom log configuration from the service.
+* The Log Analytics agent for Linux user `omsagent` is unable to access the custom log due to permissions or not being found. You might see the following errors:
+ * `[DATETIME] [warn]: file not found. Continuing without tailing it.`
+ * `[DATETIME] [error]: file not accessible by omsagent.`
+* Known issue with race condition fixed in Log Analytics agent for Linux version 1.1.0-217.
### Resolution
1. Verify onboarding to Azure Monitor was successful by checking if the following file exists: `/etc/opt/microsoft/omsagent/<workspace id>/conf/omsadmin.conf`. If not, either:
- 1. Reonboard using the omsadmin.sh command line [instructions](https://github.com/Microsoft/OMS-Agent-for-Linux/blob/master/docs/OMS-Agent-for-Linux.md#onboarding-using-the-command-line).
- 2. Under **Advanced Settings** in the Azure portal, ensure that the setting **Apply the following configuration to my Linux Servers** is enabled.
+ 1. Reonboard by using the omsadmin.sh command line [instructions](https://github.com/Microsoft/OMS-Agent-for-Linux/blob/master/docs/OMS-Agent-for-Linux.md#onboarding-using-the-command-line).
+ 1. Under **Advanced Settings** in the Azure portal, ensure that the setting **Apply the following configuration to my Linux Servers** is enabled.
-2. Check that the `omsconfig` agent can communicate with Azure Monitor by running the following command `sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/GetDscConfiguration.py'`. This command returns the configuration that agent receives from the service, including Syslog settings, Linux performance counters, and custom logs. If this command fails, run the following command `sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/PerformRequiredConfigurationChecks.py'`. This command forces the omsconfig agent to talk to Azure Monitor and retrieve the latest configuration.
+1. Check that the `omsconfig` agent can communicate with Azure Monitor by running the following command: `sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/GetDscConfiguration.py'`. This command returns the configuration that the agent receives from the service, including Syslog settings, Linux performance counters, and custom logs. If this command fails, run the following command: `sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/PerformRequiredConfigurationChecks.py'`. This command forces the `omsconfig` agent to talk to Azure Monitor and retrieve the latest configuration.
-**Background:** Instead of the Log Analytics agent for Linux running as a privileged user - `root`, the agent runs as the `omsagent` user. In most cases, explicit permission must be granted to this user in order for certain files to be read. To grant permission to `omsagent` user, run the following commands:
+**Background:** Instead of the Log Analytics agent for Linux running as a privileged user (`root`), the agent runs as the `omsagent` user. In most cases, explicit permission must be granted to this user for certain files to be read. To grant permission to the `omsagent` user, run the following commands:
-1. Add the `omsagent` user to specific group `sudo usermod -a -G <GROUPNAME> <USERNAME>`
-2. Grant universal read access to the required file `sudo chmod -R ugo+rx <FILE DIRECTORY>`
+1. Add the `omsagent` user to the specific group: `sudo usermod -a -G <GROUPNAME> <USERNAME>`.
+1. Grant universal read access to the required file: `sudo chmod -R ugo+rx <FILE DIRECTORY>`.
-There is a known issue with a race condition with the Log Analytics agent for Linux version earlier than 1.1.0-217. After updating to the latest agent, run the following command to get the latest version of the output plugin `sudo cp /etc/opt/microsoft/omsagent/sysconf/omsagent.conf /etc/opt/microsoft/omsagent/<workspace id>/conf/omsagent.conf`.
+There's a known issue with a race condition with the Log Analytics agent for Linux version earlier than 1.1.0-217. After you update to the latest agent, run the following command to get the latest version of the output plug-in: `sudo cp /etc/opt/microsoft/omsagent/sysconf/omsagent.conf /etc/opt/microsoft/omsagent/<workspace id>/conf/omsagent.conf`.
-## Issue: You are trying to reonboard to a new workspace
+## Issue: You're trying to reonboard to a new workspace
-When you try to reonboard an agent to a new workspace, the Log Analytics agent configuration needs to be cleaned up before reonboarding. To clean up old configuration from the agent, run the shell bundle with `--purge`
+When you try to reonboard an agent to a new workspace, the Log Analytics agent configuration needs to be cleaned up before reonboarding. To clean up old configuration from the agent, run the shell bundle with `--purge`:
```
sudo sh ./omsagent-*.universal.x64.sh --purge
Or
sudo sh ./onboard_agent.sh --purge
```
-You can continue reonboard after using the `--purge` option
+You can continue to reonboard after you use the `--purge` option.
-## Log Analytics agent extension in the Azure portal is marked with a failed state: Provisioning failed
+## Issue: Log Analytics agent extension in the Azure portal is marked with a failed state: Provisioning failed
### Probable causes
-* Log Analytics agent has been removed from the operating system
-* Log Analytics agent service is down, disabled, or not configured
+* The Log Analytics agent has been removed from the operating system.
+* The Log Analytics agent service is down, disabled, or not configured.
### Resolution
-Perform the following steps to correct the issue.
-1. Remove extension from Azure portal.
-2. Install the agent following the [instructions](../vm/monitor-virtual-machine.md).
-3. Restart the agent by running the following command: `sudo /opt/microsoft/omsagent/bin/service_control restart`.
-* Wait several minutes and the provisioning state changes to **Provisioning succeeded**.
+1. Remove the extension from the Azure portal.
+1. Install the agent by following the [instructions](../vm/monitor-virtual-machine.md).
+1. Restart the agent by running the following command: <br/> `sudo /opt/microsoft/omsagent/bin/service_control restart`.
+1. Wait several minutes until the provisioning state changes to **Provisioning succeeded**.
## Issue: The Log Analytics agent upgrade on demand
The Log Analytics agent packages on the host are outdated.
### Resolution
-Perform the following steps to correct the issue.
-
-1. Check for the latest release on [page](https://github.com/Microsoft/OMS-Agent-for-Linux/releases/).
-2. Download install script (1.4.2-124 as example version):
+1. Check for the latest release on [this GitHub page](https://github.com/Microsoft/OMS-Agent-for-Linux/releases/).
+1. Download the installation script (1.4.2-124 is an example version):
``` wget https://github.com/Microsoft/OMS-Agent-for-Linux/releases/download/OMSAgent_GA_v1.4.2-124/omsagent-1.4.2-124.universal.x64.sh ```
-3. Upgrade packages by executing `sudo sh ./omsagent-*.universal.x64.sh --upgrade`.
+1. Upgrade packages by executing `sudo sh ./omsagent-*.universal.x64.sh --upgrade`.
-## Issue: Installation is failing saying Python2 cannot support ctypes, even though Python3 is being used
+## Issue: Installation is failing and says Python2 can't support ctypes, even though Python3 is being used
### Probable causes
-There is a known issue where, if the VM's language isn't English, a check will fail when verifying which version of Python is being used. This leads to the agent always assuming Python2 is being used, and failing if there is no Python2.
+For this known issue, if the VM's language isn't English, a check will fail when verifying which version of Python is being used. This issue leads to the agent always assuming Python2 is being used and failing if there's no Python2.
### Resolution
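The published fix isn't shown in this excerpt. One workaround that's sometimes suggested, offered here only as an assumption rather than the documented resolution, is to run the installer under an English locale so the version check parses correctly:
```
# Hypothetical workaround: force an English locale for the installer's Python check
sudo LC_ALL=en_US.UTF-8 sh ./omsagent-*.universal.x64.sh --upgrade
```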
azure-monitor Agent Windows Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-windows-troubleshoot.md
If the query returns results, then you need to determine if a particular data ty
| Event ID | Source | Description | Resolution |
|-|-|-|-|
|8000 |HealthService |This event will specify if a workflow related to performance, events, or another collected data type is unable to forward to the service for ingestion to the workspace. | Event ID 2136 from source HealthService is written together with this event and can indicate the agent is unable to communicate with the service, possibly due to misconfiguration of the proxy and authentication settings, a network outage, or because the network firewall/proxy does not allow TCP traffic from the computer to the service.|
|10102 and 10103 |Health Service Modules |Workflow could not resolve data source. |This can occur if the specified performance counter or instance does not exist on the computer or is incorrectly defined in the workspace data settings. If this is a user-specified [performance counter](data-sources-performance-counters.md#configuring-performance-counters), verify the information specified is following the correct format and exists on the target computers. |
- |26002 |Health Service Modules |Workflow could not resolve data source. |This can occur if the specified Windows event log does not exist on the computer. This error can be safely ignored if the computer is not expected to have this event log registered, otherwise if this is a user-specified [event log](data-sources-windows-events.md#configuring-windows-event-logs), verify the information specified is correct. |
+ |26002 |Health Service Modules |Workflow could not resolve data source. |This can occur if the specified Windows event log does not exist on the computer. This error can be safely ignored if the computer is not expected to have this event log registered, otherwise if this is a user-specified [event log](data-sources-windows-events.md#configure-windows-event-logs), verify the information specified is correct. |
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md
Title: Manage the Azure Monitor agent
-description: Options for managing the Azure Monitor agent (AMA) on Azure virtual machines and Azure Arc-enabled servers.
+description: Options for managing the Azure Monitor agent on Azure virtual machines and Azure Arc-enabled servers.
# Manage the Azure Monitor agent
-This article provides the different options currently available to install, uninstall and update the [Azure Monitor agent](azure-monitor-agent-overview.md). This agent extension can be installed on Azure virtual machines, scale sets and Azure Arc-enabled servers. It also lists the options to create [associations with data collection rules](data-collection-rule-azure-monitor-agent.md) that define which data the agent should collect. Installing, upgrading, or uninstalling the Azure Monitor Agent will not require you to restart your server.
+
+This article provides the different options currently available to install, uninstall, and update the [Azure Monitor agent](azure-monitor-agent-overview.md). This agent extension can be installed on Azure virtual machines, scale sets, and Azure Arc-enabled servers. It also lists the options to create [associations with data collection rules](data-collection-rule-azure-monitor-agent.md) that define which data the agent should collect. Installing, upgrading, or uninstalling the Azure Monitor agent won't require you to restart your server.
## Virtual machine extension details
-The Azure Monitor agent is implemented as an [Azure VM extension](../../virtual-machines/extensions/overview.md) with the details in the following table. It can be installed using any of the methods to install virtual machine extensions including those described in this article.
+
+The Azure Monitor agent is implemented as an [Azure VM extension](../../virtual-machines/extensions/overview.md) with the details in the following table. You can install it by using any of the methods to install virtual machine extensions including the methods described in this article.
| Property | Windows | Linux |
|:---|:---|:---|
| Publisher | Microsoft.Azure.Monitor | Microsoft.Azure.Monitor |
| Type | AzureMonitorWindowsAgent | AzureMonitorLinuxAgent |
-| TypeHandlerVersion | See [Azure Monitor Agent extension versions](./azure-monitor-agent-extension-versions.md) | [Azure Monitor Agent extension versions](./azure-monitor-agent-extension-versions.md) |
+| TypeHandlerVersion | See [Azure Monitor agent extension versions](./azure-monitor-agent-extension-versions.md) | [Azure Monitor agent extension versions](./azure-monitor-agent-extension-versions.md) |
## Extension versions
-[View Azure Monitor Agent extension versions](./azure-monitor-agent-extension-versions.md).
+
+View [Azure Monitor agent extension versions](./azure-monitor-agent-extension-versions.md).
## Prerequisites+ The following prerequisites must be met prior to installing the Azure Monitor agent. -- **Permissions**: For methods other than Azure portal, you must have the following role assignments to install the agent:
+- **Permissions**: For methods other than using the Azure portal, you must have the following role assignments to install the agent:
- | Built-in Role | Scope(s) | Reason |
+ | Built-in role | Scopes | Reason |
|:---|:---|:---|
- | <ul><li>[Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor)</li><li>[Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator)</li></ul> | <ul><li>Virtual machines, scale sets</li><li>Arc-enabled servers</li></ul> | To deploy the agent |
- | Any role that includes the action *Microsoft.Resources/deployments/** | <ul><li>Subscription and/or</li><li>Resource group and/or </li></ul> | To deploy ARM templates |
-- **Non-Azure**: For installing the agent on physical servers and virtual machines hosted *outside* of Azure (i.e. on-premises) or in other clouds, you must [install the Azure Arc Connected Machine agent](../../azure-arc/servers/agent-overview.md) first (at no added cost)-- **Authentication**: [Managed identity](../../active-directory/managed-identities-azure-resources/overview.md) must be enabled on Azure virtual machines. Both system-assigned and user-assigned managed identities are supported.
- - **User-assigned**: This is recommended for large-scale deployments, configurable via [built-in Azure policies](#using-azure-policy). You can create a user-assigned managed identity once and share it across multiple VMs, and is thus more scalable than a system-assigned managed identity. If you use a user-assigned managed identity, you must pass the managed identity details to Azure Monitor Agent via extension settings:
+ | <ul><li>[Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor)</li><li>[Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator)</li></ul> | <ul><li>Virtual machines, scale sets,</li><li>Azure Arc-enabled servers</li></ul> | To deploy the agent |
+ | Any role that includes the action *Microsoft.Resources/deployments/** | <ul><li>Subscription and/or</li><li>Resource group and/or </li></ul> | To deploy Azure Resource Manager templates |
+- **Non-Azure**: To install the agent on physical servers and virtual machines hosted *outside* of Azure (that is, on-premises) or in other clouds, you must [install the Azure Arc Connected Machine agent](../../azure-arc/servers/agent-overview.md) first, at no added cost.
+- **Authentication**: [Managed identity](../../active-directory/managed-identities-azure-resources/overview.md) must be enabled on Azure virtual machines. Both user-assigned and system-assigned managed identities are supported.
+ - **User-assigned**: This managed identity is recommended for large-scale deployments, configurable via [built-in Azure policies](#use-azure-policy). You can create a user-assigned managed identity once and share it across multiple VMs, which means it's more scalable than a system-assigned managed identity. If you use a user-assigned managed identity, you must pass the managed identity details to the Azure Monitor agent via extension settings:
+
```json
{
  "authentication": {
    "managedIdentity": {
      "identifier-name": "mi_res_id",
      "identifier-value": "/subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<my-user-assigned-identity>"
    }
  }
}
```
- We recommend using `mi_res_id` as the `identifier-name`. The sample commands below only show usage with `mi_res_id` for the sake of brevity. For more details on `mi_res_id`, `object_id`, and `client_id`, see the [managed identity documentation](../../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md#get-a-token-using-http).
- - **System-assigned**: This is suited for initial testing or small deployments. When used at scale (for example, for all VMs in a subscription) it results in substantial number of identities created (and deleted) in Azure AD (Azure Active Directory). To avoid this churn of identities, it is recommended to use user-assigned managed identities instead. **For Arc-enabled servers, system-assigned managed identity is enabled automatically** (as soon as you install the Arc agent) as it's the only supported type for Arc-enabled servers.
- - This is not required for Azure Arc-enabled servers. The system identity will be enabled automatically if the agent is installed via [creating and assigning a data collection rule using the Azure portal](data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association).
-- **Networking**: If using network firewalls, the [AzureResourceManager service tag](../../virtual-network/service-tags-overview.md) must be enabled on the virtual network for the virtual machine. Additionally, the virtual machine must have access to the following HTTPS endpoints:
+ We recommend that you use `mi_res_id` as the `identifier-name`. The following sample commands only show usage with `mi_res_id` for the sake of brevity. For more information on `mi_res_id`, `object_id`, and `client_id`, see the [Managed identity documentation](../../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md#get-a-token-using-http).
+ - **System-assigned**: This managed identity is suited for initial testing or small deployments. When used at scale, for example, for all VMs in a subscription, it results in a substantial number of identities created (and deleted) in Azure Active Directory. To avoid this churn of identities, use user-assigned managed identities instead. *For Azure Arc-enabled servers, system-assigned managed identity is enabled automatically* as soon as you install the Azure Arc agent. It's the only supported type for Azure Arc-enabled servers.
+ - **Not required for Azure Arc-enabled servers**: The system identity is enabled automatically if the agent is installed via [creating and assigning a data collection rule by using the Azure portal](data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association).
+- **Networking**: If you use network firewalls, the [AzureResourceManager service tag](../../virtual-network/service-tags-overview.md) must be enabled on the virtual network for the virtual machine. The virtual machine must also have access to the following HTTPS endpoints:
+ - global.handler.control.monitor.azure.com
+ - `<virtual-machine-region-name>`.handler.control.monitor.azure.com (example: westus.handler.control.monitor.azure.com)
+ - `<log-analytics-workspace-id>`.ods.opinsights.azure.com (example: 12345a01-b1cd-1234-e1f2-1234567g8h99.ods.opinsights.azure.com)
- (If using private links on the agent, you must also add the [dce endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint))
-
+ (If you use private links on the agent, you must also add the [data collection endpoints (DCEs)](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint)).
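As a quick reachability sketch from the machine itself (the region name is a placeholder; an unauthenticated request may return an HTTP error code, which is fine because only the TLS connection matters here):
```
# Verify the HTTPS endpoints are reachable through your firewall or proxy
curl -sI https://global.handler.control.monitor.azure.com | head -n 1
curl -sI https://<virtual-machine-region-name>.handler.control.monitor.azure.com | head -n 1
```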
> [!NOTE]
-> This article only pertains to agent installation or management. After you install the agent, you must review the next article to [configure data collection rules and associate them with the machines](./data-collection-rule-azure-monitor-agent.md) with agents installed.
-> **The Azure Monitor agents cannot function without being associated with data collection rules.**
+> This article only pertains to agent installation or management. After you install the agent, you must review the next article to [configure data collection rules and associate them with the machines](./data-collection-rule-azure-monitor-agent.md) with agents installed. *The Azure Monitor agents can't function without being associated with data collection rules.*
+## Use the Azure portal
-## Using the Azure portal
+Follow these instructions to use the Azure portal.
### Install
-To install the Azure Monitor agent using the Azure portal, follow the process to [create a data collection rule](data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association) in the Azure portal. This not only creates the rule, but it also associates it to the selected resources and installs the Azure Monitor agent on them if not already installed.
+
+To install the Azure Monitor agent by using the Azure portal, follow the process to [create a data collection rule](data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association) in the Azure portal. This process creates the rule, associates it to the selected resources, and installs the Azure Monitor agent on them if it's not already installed.
### Uninstall
-To uninstall the Azure Monitor agent using the Azure portal, navigate to your virtual machine, scale set or Arc-enabled server, select the **Extensions** tab and click on **AzureMonitorWindowsAgent** or **AzureMonitorLinuxAgent**. In the dialog that pops up, click **Uninstall**.
+
+To uninstall the Azure Monitor agent by using the Azure portal, go to your virtual machine, scale set, or Azure Arc-enabled server. Select the **Extensions** tab and select **AzureMonitorWindowsAgent** or **AzureMonitorLinuxAgent**. In the dialog that opens, select **Uninstall**.
### Update
-To perform a **one time update** of the agent, you must first uninstall the existing agent version and then install the new version as described above.
-The **recommendation** is to enable automatic update of the agent by enabling the [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) feature. Navigate to your virtual machine or scale set, select the **Extensions** tab and click on **AzureMonitorWindowsAgent** or **AzureMonitorLinuxAgent**. In the dialog that pops up, click **Enable automatic upgrade**.
-## Using Resource Manager templates
+To perform a one-time update of the agent, you must first uninstall the existing agent version. Then install the new version as described.
+
+We recommend that you enable automatic update of the agent by enabling the [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) feature. Go to your virtual machine or scale set, select the **Extensions** tab, and select **AzureMonitorWindowsAgent** or **AzureMonitorLinuxAgent**. In the dialog that opens, select **Enable automatic upgrade**.
+
+## Use Resource Manager templates
+
+Follow these instructions to use Azure Resource Manager templates.
### Install+ You can use Resource Manager templates to install the Azure Monitor agent on Azure virtual machines and on Azure Arc-enabled servers and to create an association with data collection rules. You must create any data collection rule prior to creating the association.
-Get sample templates for installing the agent and creating the association from the following:
+Get sample templates for installing the agent and creating the association from the following resources:
-- [Template to install Azure Monitor agent (Azure and Azure Arc)](../agents/resource-manager-agent.md#azure-monitor-agent)
+- [Template to install Azure Monitor agent (Azure and Azure Arc)](../agents/resource-manager-agent.md#azure-monitor-agent)
- [Template to create association with data collection rule](./resource-manager-data-collection-rules.md)
-Install the templates using [any deployment method for Resource Manager templates](../../azure-resource-manager/templates/deploy-powershell.md) such as the following commands.
+Install the templates by using [any deployment method for Resource Manager templates](../../azure-resource-manager/templates/deploy-powershell.md), such as the following commands.
# [PowerShell](#tab/ARMAgentPowerShell)+ ```powershell New-AzResourceGroupDeployment -ResourceGroupName "<resource-group-name>" -TemplateFile "<template-filename.json>" -TemplateParameterFile "<parameter-filename.json>" ```+ # [CLI](#tab/ARMAgentCLI)+ ```azurecli az deployment group create --resource-group "<resource-group-name>" --template-file "<path-to-template>" --parameters "@<parameter-filename.json>" ```
-## Using PowerShell
-You can install the Azure Monitor agent on Azure virtual machines and on Azure Arc-enabled servers using the PowerShell command for adding a virtual machine extension.
+## Use PowerShell
+
+You can install the Azure Monitor agent on Azure virtual machines and on Azure Arc-enabled servers by using the PowerShell command for adding a virtual machine extension.
### Install on Azure virtual machines+ Use the following PowerShell commands to install the Azure Monitor agent on Azure virtual machines. Choose the appropriate command based on your chosen authentication method. #### User-assigned managed identity+ # [Windows](#tab/PowerShellWindows)+ ```powershell Set-AzVMExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion <version-number> -SettingString '{"authentication":{"managedIdentity":{"identifier-name":"mi_res_id","identifier-value":/subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<my-user-assigned-identity>"}}}' ``` # [Linux](#tab/PowerShellLinux)+ ```powershell Set-AzVMExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion <version-number> -SettingString '{"authentication":{"managedIdentity":{"identifier-name":"mi_res_id","identifier-value":/subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<my-user-assigned-identity>"}}}' ``` #### System-assigned managed identity+ # [Windows](#tab/PowerShellWindows)+ ```powershell Set-AzVMExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion <version-number> ``` # [Linux](#tab/PowerShellLinux)+ ```powershell Set-AzVMExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion <version-number> ``` ### Uninstall on Azure virtual machines
-Use the following PowerShell commands to install the Azure Monitor agent on Azure virtual machines.
+
+Use the following PowerShell commands to uninstall the Azure Monitor agent on Azure virtual machines.
+ # [Windows](#tab/PowerShellWindows)+ ```powershell Remove-AzVMExtension -Name AzureMonitorWindowsAgent -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> ``` # [Linux](#tab/PowerShellLinux)+ ```powershell Remove-AzVMExtension -Name AzureMonitorLinuxAgent -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> ``` ### Update on Azure virtual machines
-To perform a **one time update** of the agent, you must first uninstall the existing agent version and then install the new version as described above.
-The **recommendation** is to enable automatic update of the agent by enabling the [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) feature, using the following PowerShell commands.
+
+To perform a one-time update of the agent, you must first uninstall the existing agent version. Then install the new version as described.
+
+We recommend that you enable automatic update of the agent by enabling the [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) feature by using the following PowerShell commands.
+ # [Windows](#tab/PowerShellWindows)+ ```powershell Set-AzVMExtension -ExtensionName AzureMonitorWindowsAgent -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Publisher Microsoft.Azure.Monitor -ExtensionType AzureMonitorWindowsAgent -TypeHandlerVersion <version-number> -Location <location> -EnableAutomaticUpgrade $true ```+ # [Linux](#tab/PowerShellLinux)+ ```powershell Set-AzVMExtension -ExtensionName AzureMonitorLinuxAgent -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Publisher Microsoft.Azure.Monitor -ExtensionType AzureMonitorLinuxAgent -TypeHandlerVersion <version-number> -Location <location> -EnableAutomaticUpgrade $true ```
Set-AzVMExtension -ExtensionName AzureMonitorLinuxAgent -ResourceGroupName <reso
### Install on Azure Arc-enabled servers+ Use the following PowerShell commands to install the Azure Monitor agent on Azure Arc-enabled servers.+ # [Windows](#tab/PowerShellWindowsArc)+ ```powershell New-AzConnectedMachineExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> ``` # [Linux](#tab/PowerShellLinuxArc)+ ```powershell New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> ``` ### Uninstall on Azure Arc-enabled servers
-Use the following PowerShell commands to install the Azure Monitor agent on Azure Arc-enabled servers.
+
+Use the following PowerShell commands to uninstall the Azure Monitor agent on Azure Arc-enabled servers.
+ # [Windows](#tab/PowerShellWindowsArc)+ ```powershell Remove-AzConnectedMachineExtension -MachineName <arc-server-name> -ResourceGroupName <resource-group-name> -Name AzureMonitorWindowsAgent ```+ # [Linux](#tab/PowerShellLinuxArc)+ ```powershell Remove-AzConnectedMachineExtension -MachineName <arc-server-name> -ResourceGroupName <resource-group-name> -Name AzureMonitorLinuxAgent ``` ### Upgrade on Azure Arc-enabled servers
-To perform a **one time** upgrade of the agent, use the following PowerShell commands:
+
+To perform a one-time upgrade of the agent, use the following PowerShell commands.
# [Windows](#tab/PowerShellWindowsArc)+ ```powershell $target = @{"Microsoft.Azure.Monitor.AzureMonitorWindowsAgent" = @{"targetVersion"=<target-version-number>}} Update-AzConnectedExtension -ResourceGroupName $env.ResourceGroupName -MachineName <arc-server-name> -ExtensionTarget $target ```+ # [Linux](#tab/PowerShellLinuxArc)+ ```powershell $target = @{"Microsoft.Azure.Monitor.AzureMonitorLinuxAgent" = @{"targetVersion"=<target-version-number>}} Update-AzConnectedExtension -ResourceGroupName $env.ResourceGroupName -MachineName <arc-server-name> -ExtensionTarget $target ```
-The **recommendation** is to enable automatic update of the agent by enabling the [Automatic Extension Upgrade (preview)](../../azure-arc/servers/manage-automatic-vm-extension-upgrade.md#enable-automatic-extension-upgrade) feature, using the following PowerShell commands.
+We recommend that you enable automatic update of the agent by enabling the [Automatic Extension Upgrade (preview)](../../azure-arc/servers/manage-automatic-vm-extension-upgrade.md#enable-automatic-extension-upgrade) feature by using the following PowerShell commands.
+ # [Windows](#tab/PowerShellWindowsArc)+ ```powershell Update-AzConnectedMachineExtension -ResourceGroup <resource-group-name> -MachineName <arc-server-name> -Name AzureMonitorWindowsAgent -EnableAutomaticUpgrade ```+ # [Linux](#tab/PowerShellLinuxArc)+ ```powershell Update-AzConnectedMachineExtension -ResourceGroup <resource-group-name> -MachineName <arc-server-name> -Name AzureMonitorLinuxAgent -EnableAutomaticUpgrade ```
+## Use the Azure CLI
-## Using Azure CLI
-You can install the Azure Monitor agent on Azure virtual machines and on Azure Arc-enabled servers using the Azure CLI command for adding a virtual machine extension.
+You can install the Azure Monitor agent on Azure virtual machines and on Azure Arc-enabled servers by using the Azure CLI command for adding a virtual machine extension.
### Install on Azure virtual machines+ Use the following CLI commands to install the Azure Monitor agent on Azure virtual machines. Choose the appropriate command based on your chosen authentication method.+ #### User-assigned managed identity+ # [Windows](#tab/CLIWindows)+ ```azurecli az vm extension set --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id> --settings '{"authentication":{"managedIdentity":{"identifier-name":"mi_res_id","identifier-value":/subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<my-user-assigned-identity>"}}}' ```+ # [Linux](#tab/CLILinux)+ ```azurecli az vm extension set --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id> --settings '{"authentication":{"managedIdentity":{"identifier-name":"mi_res_id","identifier-value":/subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<my-user-assigned-identity>"}}}' ``` #### System-assigned managed identity+ # [Windows](#tab/CLIWindows)+ ```azurecli az vm extension set --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id> ```+ # [Linux](#tab/CLILinux)+ ```azurecli az vm extension set --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id> ``` ### Uninstall on Azure virtual machines
-Use the following CLI commands to install the Azure Monitor agent on Azure virtual machines.
+
+Use the following CLI commands to uninstall the Azure Monitor agent on Azure virtual machines.
+ # [Windows](#tab/CLIWindows)+ ```azurecli az vm extension delete --resource-group <resource-group-name> --vm-name <virtual-machine-name> -name AzureMonitorWindowsAgent ```+ # [Linux](#tab/CLILinux)++ ```azurecli az vm extension delete --resource-group <resource-group-name> --vm-name <virtual-machine-name> -name AzureMonitorLinuxAgent ``` ### Update on Azure virtual machines
-To perform a **one time update** of the agent, you must first uninstall the existing agent version and then install the new version as described above.
-The **recommendation** is to enable automatic update of the agent by enabling the [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) feature, using the following CLI commands.
+
+To perform a one-time update of the agent, you must first uninstall the existing agent version. Then install the new version as described.
+
+We recommend that you enable automatic update of the agent by enabling the [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) feature by using the following CLI commands.
+ # [Windows](#tab/CLIWindows)+ ```azurecli az vm extension set -name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --vm-name <virtual-machine-name> --resource-group <resource-group-name> --enable-auto-upgrade true ``` # [Linux](#tab/CLILinux)+ ```azurecli az vm extension set -name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --vm-name <virtual-machine-name> --resource-group <resource-group-name> --enable-auto-upgrade true ``` - ### Install on Azure Arc-enabled servers
-Use the following CLI commands to install the Azure Monitor agent onAzure Azure Arc-enabled servers.
+
+Use the following CLI commands to install the Azure Monitor agent on Azure Arc-enabled servers.
# [Windows](#tab/CLIWindowsArc)+ ```azurecli az connectedmachine extension create --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --type AzureMonitorWindowsAgent --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location> ```+ # [Linux](#tab/CLILinuxArc)+ ```azurecli az connectedmachine extension create --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --type AzureMonitorLinuxAgent --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location> ``` ### Uninstall on Azure Arc-enabled servers
-Use the following CLI commands to install the Azure Monitor agent onAzure Azure Arc-enabled servers.
+
+Use the following CLI commands to uninstall the Azure Monitor agent on Azure Arc-enabled servers.
# [Windows](#tab/CLIWindowsArc)+ ```azurecli az connectedmachine extension delete --name AzureMonitorWindowsAgent --machine-name <arc-server-name> --resource-group <resource-group-name> ``` # [Linux](#tab/CLILinuxArc)+ ```azurecli az connectedmachine extension delete --name AzureMonitorLinuxAgent --machine-name <arc-server-name> --resource-group <resource-group-name> ``` ### Upgrade on Azure Arc-enabled servers
-To perform a **one time upgrade** of the agent, use the following CLI commands:
+
+To perform a one-time upgrade of the agent, use the following CLI commands.
+ # [Windows](#tab/CLIWindowsArc)+ ```azurecli az connectedmachine upgrade-extension --extension-targets "{\"Microsoft.Azure.Monitor.AzureMonitorWindowsAgent\":{\"targetVersion\":\"<target-version-number>\"}}" --machine-name <arc-server-name> --resource-group <resource-group-name> ```+ # [Linux](#tab/CLILinuxArc)+ ```azurecli az connectedmachine upgrade-extension --extension-targets "{\"Microsoft.Azure.Monitor.AzureMonitorLinuxAgent\":{\"targetVersion\":\"<target-version-number>\"}}" --machine-name <arc-server-name> --resource-group <resource-group-name> ```
-The **recommendation** is to enable automatic update of the agent by enabling the [Automatic Extension Upgrade (preview)](../../azure-arc/servers/manage-automatic-vm-extension-upgrade.md#enable-automatic-extension-upgrade) feature, using the following PowerShell commands.
+We recommend that you enable automatic update of the agent by enabling the [Automatic Extension Upgrade (preview)](../../azure-arc/servers/manage-automatic-vm-extension-upgrade.md#enable-automatic-extension-upgrade) feature by using the following CLI commands.
+ # [Windows](#tab/CLIWindowsArc)+ ```azurecli az connectedmachine extension update --name AzureMonitorWindowsAgent --machine-name <arc-server-name> --resource-group <resource-group-name> --enable-auto-upgrade true ```+ # [Linux](#tab/CLILinuxArc)+ ```azurecli az connectedmachine extension update --name AzureMonitorLinuxAgent --machine-name <arc-server-name> --resource-group <resource-group-name> --enable-auto-upgrade true ```
+## Use Azure Policy
-## Using Azure Policy
-Use the following policies and policy initiatives to **automatically install the agent and associate it with a data collection rule**, every time you create a virtual machine, scale set, or Arc-enabled server.
+Use the following policies and policy initiatives to automatically install the agent and associate it with a data collection rule every time you create a virtual machine, scale set, or Azure Arc-enabled server.
> [!NOTE]
-> As per Microsoft Identity best practices, policies for installing Azure Monitor agent on **virtual machines and scale-sets** rely on **user-assigned managed identity**. This is the more scalable and resilient managed identity options for these resources.
-> For **Arc-enabled servers**, policies rely on only **system-assigned managed identity** as the only supported option today.
+> As per Microsoft Identity best practices, policies for installing the Azure Monitor agent on virtual machines and scale sets rely on user-assigned managed identity. This option is the more scalable and resilient managed identity choice for these resources.
+> For Azure Arc-enabled servers, policies rely on system-assigned managed identity as the only supported option today.
### Built-in policy initiatives
-Before proceeding, review [prerequisites for agent installation](azure-monitor-agent-manage.md#prerequisites).
-Policy initiatives for Windows and Linux **virtual machines, scale-sets** consist of individual policies that:
+Before you proceed, review [prerequisites for agent installation](azure-monitor-agent-manage.md#prerequisites).
+
+Policy initiatives for Windows and Linux virtual machines and scale sets consist of individual policies that:
-- (Optional) Create and assign built-in user-assigned managed identity, per subscription, per region. [Learn more](../../active-directory/managed-identities-azure-resources/how-to-assign-managed-identity-via-azure-policy.md#policy-definition-and-details).
- - `Bring Your Own User-Assigned Identity`: If set of `true`, it creates the built-in user-assigned managed identity in the predefined resource group, and assigns it to all machines that the policy is applied to. If set to `false`, you can instead use existing user-assigned identity that **you must assign** to the machines beforehand.
-- Install the Azure Monitor agent extension on the machine, and configure it to use user-assigned identity as specified by the parameters below
- - `Bring Your Own User-Assigned Managed Identity`: If set to `false`, it configures the agent to use the built-in user-assigned managed identity created by the policy above. If set to `true`, it configures the agent to use an existing user-assigned identity that **you must assign** to the machine(s) in scope beforehand.
- - `User-Assigned Managed Identity Name`: If using your own identity (selected `true`), specify the name of the identity that's assigned to the machine(s)
- - `User-Assigned Managed Identity Resource Group`: If using your own identity (selected `true`), specify the resource group where the identity exists
- - `Additional Virtual Machine Images`: Pass additional VM image names that you want to apply the policy to, if not already included
+- (Optional) Create and assign built-in user-assigned managed identity, per subscription, per region. [Learn more](../../active-directory/managed-identities-azure-resources/how-to-assign-managed-identity-via-azure-policy.md#policy-definition-and-details).
+ - `Bring Your Own User-Assigned Identity`: If set to `true`, it creates the built-in user-assigned managed identity in the predefined resource group and assigns it to all machines that the policy is applied to. If set to `false`, you can instead use existing user-assigned identity that *you must assign* to the machines beforehand.
+- Install the Azure Monitor agent extension on the machine, and configure it to use user-assigned identity as specified by the following parameters.
+ - `Bring Your Own User-Assigned Managed Identity`: If set to `false`, it configures the agent to use the built-in user-assigned managed identity created by the preceding policy. If set to `true`, it configures the agent to use an existing user-assigned identity that *you must assign* to the machines in scope beforehand.
+ - `User-Assigned Managed Identity Name`: If you use your own identity (selected `true`), specify the name of the identity that's assigned to the machines.
+ - `User-Assigned Managed Identity Resource Group`: If you use your own identity (selected `true`), specify the resource group where the identity exists.
+ - `Additional Virtual Machine Images`: Pass additional VM image names that you want to apply the policy to, if not already included.
- Create and deploy the association to link the machine to specified data collection rule.
- - `Data Collection Rule Resource Id`: The ARM resourceId of the rule you want to associate via this policy, to all machines the policy is applied to.
+ - `Data Collection Rule Resource Id`: The Azure Resource Manager resourceId of the rule you want to associate via this policy to all machines the policy is applied to. (See the example lookup command after this list.)
+
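If you bring your own user-assigned identity, or you need the rule's full resource ID for the `Data Collection Rule Resource Id` parameter, the following Azure CLI commands sketch one way to prepare these values (names are placeholders; the last command assumes the `monitor-control-service` CLI extension is installed):
```azurecli
# Create a user-assigned managed identity to share across machines
az identity create --resource-group <resource-group-name> --name <identity-name>

# Assign it to an existing virtual machine before the policy is applied
az vm identity assign --resource-group <resource-group-name> --name <virtual-machine-name> --identities <identity-resource-id>

# Look up the full resource ID of an existing data collection rule
az monitor data-collection rule show --resource-group <resource-group-name> --name <rule-name> --query id --output tsv
```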
+ ![Partial screenshot from the Azure Policy Definitions page that shows two built-in policy initiatives for configuring the Azure Monitor agent.](media/azure-monitor-agent-install/built-in-ama-dcr-initiatives.png)
-![Partial screenshot from the Azure Policy Definitions page showing two built-in policy initiatives for configuring the Azure Monitor agent.](media/azure-monitor-agent-install/built-in-ama-dcr-initiatives.png)
+#### Known issues
-#### Known issues:
-- Managed Identity default behavior: [Learn more](../../active-directory/managed-identities-azure-resources/managed-identities-faq.md#what-identity-will-imds-default-to-if-dont-specify-the-identity-in-the-request)-- Possible race condition with using built-in user-assigned identity creation policy above. [Learn more](../../active-directory/managed-identities-azure-resources/how-to-assign-managed-identity-via-azure-policy.md#known-issues)-- Assigning policy to resource groups: If the assignment scope of the policy is a resource group and not a subscription, the identity used by policy assignment (different from the user-assigned identity used by agent) must be manually granted [these roles](../../active-directory/managed-identities-azure-resources/how-to-assign-managed-identity-via-azure-policy.md#required-authorization) prior to assignment/remediation. Failing to do this will result in **deployment failures**.-- Other [Managed Identity limitations](../../active-directory/managed-identities-azure-resources/managed-identities-faq.md#limitations)
+- Managed Identity default behavior. [Learn more](../../active-directory/managed-identities-azure-resources/managed-identities-faq.md#what-identity-will-imds-default-to-if-dont-specify-the-identity-in-the-request).
+- Possible race condition with using built-in user-assigned identity creation policy. [Learn more](../../active-directory/managed-identities-azure-resources/how-to-assign-managed-identity-via-azure-policy.md#known-issues).
+- Assigning policy to resource groups. If the assignment scope of the policy is a resource group and not a subscription, the identity used by policy assignment (different from the user-assigned identity used by agent) must be manually granted [these roles](../../active-directory/managed-identities-azure-resources/how-to-assign-managed-identity-via-azure-policy.md#required-authorization) prior to assignment/remediation. Failing to do this step will result in *deployment failures*.
+- Other [Managed Identity limitations](../../active-directory/managed-identities-azure-resources/managed-identities-faq.md#limitations).
-### Built-in policies
-You can choose to use the individual policies from the policy initiative above to perform a single action at scale. For example, if you *only* want to automatically install the agent, use the second agent installation policy from the initiative as shown below.
+### Built-in policies
-![Partial screenshot from the Azure Policy Definitions page showing policies contained within the initiative for configuring the Azure Monitor agent.](media/azure-monitor-agent-install/built-in-ama-dcr-policy.png)
+You can choose to use the individual policies from the preceding policy initiative to perform a single action at scale. For example, if you *only* want to automatically install the agent, use the second agent installation policy from the initiative, as shown.
+
+![Partial screenshot from the Azure Policy Definitions page that shows policies contained within the initiative for configuring the Azure Monitor agent.](media/azure-monitor-agent-install/built-in-ama-dcr-policy.png)
### Remediation
-The initiatives or policies will apply to each virtual machine as it's created. A [remediation task](../../governance/policy/how-to/remediate-resources.md) deploys the policy definitions in the initiative to *existing resources*, so you can configure the Azure Monitor agent for any resources that were already created.
-When you create the assignment by using the Azure portal, you have the option of creating a remediation task at the same time. See [Remediate non-compliant resources with Azure Policy](../../governance/policy/how-to/remediate-resources.md) for details on the remediation.
+The initiatives or policies will apply to each virtual machine as it's created. A [remediation task](../../governance/policy/how-to/remediate-resources.md) deploys the policy definitions in the initiative to existing resources, so you can configure the Azure Monitor agent for any resources that were already created.
+
+When you create the assignment by using the Azure portal, you have the option of creating a remediation task at the same time. For information on the remediation, see [Remediate non-compliant resources with Azure Policy](../../governance/policy/how-to/remediate-resources.md).
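If you'd rather script the remediation than use the portal option, one possible approach (the assignment name and resource group are placeholders) is the Azure CLI:
```azurecli
# Create a remediation task for an existing policy assignment
az policy remediation create --name remediate-ama --policy-assignment <assignment-name-or-id> --resource-group <resource-group-name>
```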
![Screenshot that shows initiative remediation for the Azure Monitor agent.](media/azure-monitor-agent-install/built-in-ama-dcr-remediation.png) ## Next steps -- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
+[Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
azure-monitor Data Sources Windows Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-windows-events.md
Title: Collect Windows event log data sources with Log Analytics agent in Azure Monitor
-description: Describes how to configure the collection of Windows Event logs by Azure Monitor and details of the records they create.
+description: The article describes how to configure the collection of Windows event logs by Azure Monitor and details of the records they create.
Last updated 04/06/2022
# Collect Windows event log data sources with Log Analytics agent
-Windows Event logs are one of the most common [data sources](../agents/agent-data-sources.md) for Log Analytics agents on Windows virtual machines since many applications write to the Windows event log. You can collect events from standard logs, such as System and Application, and any custom logs created by applications you need to monitor.
-![Diagram that shows the Log Analytics agent sending Windows events to the Event table in Azure Monitor.](media/data-sources-windows-events/overview.png)
+Windows event logs are one of the most common [data sources](../agents/agent-data-sources.md) for Log Analytics agents on Windows virtual machines because many applications write to the Windows event log. You can collect events from standard logs, such as System and Application, and any custom logs created by applications you need to monitor.
+
+![Diagram that shows the Log Analytics agent sending Windows events to the Event table in Azure Monitor.](media/data-sources-windows-events/overview.png)
[!INCLUDE [Log Analytics agent deprecation](../../../includes/log-analytics-agent-deprecation.md)]
-## Configuring Windows Event logs
-Configure Windows Event logs from the [Agents configuration menu](../agents/agent-data-sources.md#configuring-data-sources) for the Log Analytics workspace.
+## Configure Windows event logs
-Azure Monitor only collects events from the Windows event logs that are specified in the settings. You can add an event log by typing in the name of the log and clicking **+**. For each log, only the events with the selected severities are collected. Check the severities for the particular log that you want to collect. You can't provide any additional criteria to filter events.
+Configure Windows event logs from the [Agents configuration menu](../agents/agent-data-sources.md#configuring-data-sources) for the Log Analytics workspace.
-As you type the name of an event log, Azure Monitor provides suggestions of common event log names. If the log you want to add doesn't appear in the list, you can still add it by typing in the full name of the log. You can find the full name of the log by using event viewer. In event viewer, open the *Properties* page for the log and copy the string from the *Full Name* field.
+Azure Monitor only collects events from Windows event logs that are specified in the settings. You can add an event log by entering the name of the log and selecting **+**. For each log, only the events with the selected severities are collected. Check the severities for the particular log that you want to collect. You can't provide any other criteria to filter events.
-[![Screenshot showing the Windows event logs tab on the Agents configuration screen.](media/data-sources-windows-events/configure.png)](media/data-sources-windows-events/configure.png#lightbox)
+As you enter the name of an event log, Azure Monitor provides suggestions of common event log names. If the log you want to add doesn't appear in the list, you can still add it by entering the full name of the log. You can find the full name of the log by using event viewer. In event viewer, open the **Properties** page for the log and copy the string from the **Full Name** field.
-> [!IMPORTANT]
-> You can't configure collection of security events from the workspace using Log Analytics agent. You must use [Microsoft Defender for Cloud](../../security-center/security-center-enable-data-collection.md) or [Microsoft Sentinel](../../sentinel/connect-windows-security-events.md) to collect security events. [Azure Monitor agent](azure-monitor-agent-overview.md) can also be used to collect security events.
+[![Screenshot that shows the Windows event logs tab on the Agents configuration screen.](media/data-sources-windows-events/configure.png)](media/data-sources-windows-events/configure.png#lightbox)
+> [!IMPORTANT]
+> You can't configure collection of security events from the workspace by using the Log Analytics agent. You must use [Microsoft Defender for Cloud](../../security-center/security-center-enable-data-collection.md) or [Microsoft Sentinel](../../sentinel/connect-windows-security-events.md) to collect security events. The [Azure Monitor agent](azure-monitor-agent-overview.md) can also be used to collect security events.
-> [!NOTE]
-> Critical events from the Windows event log will have a severity of "Error" in Azure Monitor Logs.
+Critical events from the Windows event log will have a severity of "Error" in Azure Monitor Logs.
## Data collection
-Azure Monitor collects each event that matches a selected severity from a monitored event log as the event is created. The agent records its place in each event log that it collects from. If the agent goes offline for a while, it collects events from where it last left off, even if those events were created while the agent was offline. There's a potential for these events to not be collected if the event log wraps with uncollected events being overwritten while the agent is offline.
+
+Azure Monitor collects each event that matches a selected severity from a monitored event log as the event is created. The agent records its place in each event log that it collects from. If the agent goes offline for a while, it collects events from where it last left off, even if those events were created while the agent was offline. There's a potential for these events to not be collected if the event log wraps with uncollected events being overwritten while the agent is offline.
>[!NOTE]
->Azure Monitor does not collect audit events created by SQL Server from source *MSSQLSERVER* with event ID 18453 that contains keywords - *Classic* or *Audit Success* and keyword *0xa0000000000000*.
+>Azure Monitor doesn't collect audit events created by SQL Server from source *MSSQLSERVER* with event ID 18453 that contains keywords *Classic* or *Audit Success* and keyword *0xa0000000000000*.
> ## Windows event records properties
-Windows event records have a type of **Event** and have the properties in the following table:
+
+Windows event records have a type of **Event** and have the properties in the following table:
| Property | Description |
|:---|:---|
Windows event records have a type of **Event** and have the properties in the fo
| EventLevelName |Severity of the event in text form. |
| EventLog |Name of the event log that the event was collected from. |
| ParameterXml |Event parameter values in XML format. |
-| ManagementGroupName |Name of the management group for System Center Operations Manager agents. For other agents, this value is `AOI-<workspace ID>` |
-| RenderedDescription |Event description with parameter values |
+| ManagementGroupName |Name of the management group for System Center Operations Manager agents. For other agents, this value is `AOI-<workspace ID>`. |
+| RenderedDescription |Event description with parameter values. |
| Source |Source of the event. |
-| SourceSystem |Type of agent the event was collected from. <br> OpsManager – Windows agent, either direct connect or Operations Manager managed <br> Linux – All Linux agents <br> AzureStorage – Azure Diagnostics |
+| SourceSystem |Type of agent the event was collected from. <br> OpsManager – Windows agent, either direct connect or Operations Manager managed. <br> Linux – All Linux agents. <br> AzureStorage – Azure Diagnostics. |
| TimeGenerated |Date and time the event was created in Windows. |
| UserName |User name of the account that logged the event. |
-## Log queries with Windows Events
-The following table provides different examples of log queries that retrieve Windows Event records.
+## Log queries with Windows events
+
+The following table provides different examples of log queries that retrieve Windows event records.
| Query | Description |
|:---|:---|
The following table provides different examples of log queries that retrieve Win
| Event &#124; summarize count() by Source |Count of Windows events by source. |
| Event &#124; where EventLevelName == "error" &#124; summarize count() by Source |Count of Windows error events by source. |
- ## Next steps+ * Configure Log Analytics to collect other [data sources](../agents/agent-data-sources.md) for analysis.
-* Learn about [log queries](../logs/log-query-overview.md) to analyze the data collected from data sources and solutions.
+* Learn about [log queries](../logs/log-query-overview.md) to analyze the data collected from data sources and solutions.
* Configure [collection of performance counters](data-sources-performance-counters.md) from your Windows agents.
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
Last updated 10/12/2021 - # Application Insights for ASP.NET Core applications This article describes how to enable Application Insights for an [ASP.NET Core](/aspnet/core) application.
azure-monitor Asp Net Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-exceptions.md
# Diagnose exceptions in web apps with Application Insights
-Exceptions in web applications can be reported with [Application Insights](./app-insights-overview.md). You can correlate failed requests with exceptions and other events on both the client and server, so that you can quickly diagnose the causes. In this article, you'll learn how to set up exception reporting, report exceptions explicitly, diagnose failures, and more.
+Exceptions in web applications can be reported with [Application Insights](./app-insights-overview.md). You can correlate failed requests with exceptions and other events on both the client and server so that you can quickly diagnose the causes. In this article, you'll learn how to set up exception reporting, report exceptions explicitly, diagnose failures, and more.
## Set up exception reporting
-You can set up Application Insights to report exceptions that occur in either the server, or the client. Depending on the platform you're application is dependent on, you'll need the appropriate extension or SDK.
+You can set up Application Insights to report exceptions that occur in either the server or the client. Depending on the platform your application is dependent on, you'll need the appropriate extension or SDK.
### Server side
-To have exceptions reported from your server side application, consider the following scenarios:
+To have exceptions reported from your server-side application, consider the following scenarios:
- * **Azure web apps**: Add the [Application Insights Extension](./azure-web-apps.md)
- * **Azure VM and Azure virtual machine scale set IIS-hosted apps**: Add the [Application Monitoring Extension](./azure-vm-vmss-apps.md)
- * Install [Application Insights SDK](./asp-net.md) in your app code, or
- * **IIS web servers**: Run [Application Insights Agent](./status-monitor-v2-overview.md), or
- * **Java web apps**: Enable the [Java agent](./java-in-process-agent.md)
+ * Add the [Application Insights Extension](./azure-web-apps.md) for Azure web apps.
+ * Add the [Application Monitoring Extension](./azure-vm-vmss-apps.md) for Azure Virtual Machines and Azure Virtual Machine Scale Sets IIS-hosted apps.
+ * Install [Application Insights SDK](./asp-net.md) in your app code, run [Application Insights Agent](./status-monitor-v2-overview.md) for IIS web servers, or enable the [Java agent](./java-in-process-agent.md) for Java web apps.
### Client side
-The JavaScript SDK provides the ability for client side reporting of exceptions that occur in web browsers. To set up exception reporting on the client, see [Application Insights for web pages](./javascript.md).
+The JavaScript SDK provides the ability for client-side reporting of exceptions that occur in web browsers. To set up exception reporting on the client, see [Application Insights for webpages](./javascript.md).
### Application frameworks
-With some application frameworks there is a bit more configuration required, consider the following technologies:
+With some application frameworks, more configuration is required. Consider the following technologies:
* [Web forms](#web-forms) * [MVC](#mvc)
With some application frameworks there is a bit more configuration required, con
* [WCF](#wcf) > [!IMPORTANT]
-> This article is specifically focused on .NET Framework apps from a code example perspective. Some of the methods that work for .NET Framework are obsolete in the .NET Core SDK. For more information, see [.NET Core SDK documentation](./asp-net-core.md) when building apps with .NET Core.
+> This article is specifically focused on .NET Framework apps from a code example perspective. Some of the methods that work for .NET Framework are obsolete in the .NET Core SDK. For more information, see [.NET Core SDK documentation](./asp-net-core.md) when you build apps with .NET Core.
## Diagnose exceptions using Visual Studio
-Open the app solution in Visual Studio. Run the app, either on your server or on your development machine by using <kbd>F5</kbd>. Recreate the exception.
+Open the app solution in Visual Studio. Run the app, either on your server or on your development machine by using <kbd>F5</kbd>. Re-create the exception.
-Open the **Application Insights Search** telemetry window in Visual Studio. While debugging, select the **Application Insights** dropdown.
+Open the **Application Insights Search** telemetry window in Visual Studio. While debugging, select the **Application Insights** dropdown box.
-![Right-click the project and choose Application Insights, Open.](./media/asp-net-exceptions/34.png)
+![Screenshot that shows right-clicking the project and choosing Application Insights.](./media/asp-net-exceptions/34.png)
Select an exception report to show its stack trace. To open the relevant code file, select a line reference in the stack trace. If CodeLens is enabled, you'll see data about the exceptions:
-![CodeLens notification of exceptions.](./media/asp-net-exceptions/35.png)
+![Screenshot that shows CodeLens notification of exceptions.](./media/asp-net-exceptions/35.png)
## Diagnose failures using the Azure portal
-Application Insights comes with a curated Application Performance Management (APM) experience to help you diagnose failures in your monitored applications. To start, select on the **Failures** option in the Application Insights resource menu located in the **Investigate** section.
-You will see the failure rate trends for your requests, how many of them are failing, and how many users are impacted. As an **Overall** view, you'll see some of the most useful distributions specific to the selected failing operation, including top three response codes, top three exception types, and top three failing dependency types.
+Application Insights comes with a curated Application Performance Management experience to help you diagnose failures in your monitored applications. To start, in the Application Insights resource menu on the left, under **Investigate**, select the **Failures** option.
-![Failures triage view (operations tab)](./media/asp-net-exceptions/failures0719.png)
+You'll see the failure rate trends for your requests, how many of them are failing, and how many users are affected. The **Overall** view shows some of the most useful distributions specific to the selected failing operation. You'll see the top three response codes, the top three exception types, and the top three failing dependency types.
-To review representative samples for each of these subsets of operations, select the corresponding link. As an example, to diagnose exceptions, you can select the count of a particular exception to be presented with the **End-to-end transaction** details tab:
+![Screenshot that shows a failures triage view on the Operations tab.](./media/asp-net-exceptions/failures0719.png)
-![End-to-end transaction details tab](./media/asp-net-exceptions/end-to-end.png)
+To review representative samples for each of these subsets of operations, select the corresponding link. As an example, to diagnose exceptions, you can select the count of a particular exception to be presented with the **End-to-end transaction details** tab.
-Alternatively, instead of looking at exceptions of a specific failing operation, you can start from the **Overall** view of exceptions, by switching to the **Exceptions** tab at the top. Here you can see all the exceptions collected for your monitored app.
+![Screenshot that shows the End-to-end transaction details tab.](./media/asp-net-exceptions/end-to-end.png)
+
+Alternatively, instead of looking at exceptions of a specific failing operation, you can start from the **Overall** view of exceptions by switching to the **Exceptions** tab at the top. Here you can see all the exceptions collected for your monitored app.
## Custom tracing and log data
To get diagnostic data specific to your app, you can insert code to send your ow
With <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient?displayProperty=fullName>, you have several APIs available:
-* <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient.TrackEvent%2A?displayProperty=nameWithType> is typically used for monitoring usage patterns, but the data it sends also appears under **Custom Events** in diagnostic search. Events are named, and can carry string properties and numeric metrics on which you can [filter your diagnostic searches](./diagnostic-search.md).
+* <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient.TrackEvent%2A?displayProperty=nameWithType> is typically used for monitoring usage patterns, but the data it sends also appears under **Custom Events** in diagnostic search. Events are named and can carry string properties and numeric metrics on which you can [filter your diagnostic searches](./diagnostic-search.md).
* <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient.TrackTrace%2A?displayProperty=nameWithType> lets you send longer data such as POST information.
* <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient.TrackException%2A?displayProperty=nameWithType> sends exception details, such as stack traces, to Application Insights.
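For illustration, here's a minimal sketch of all three calls with the .NET SDK; the event name, property, and metric values are invented for this example.

```csharp
using System;
using System.Collections.Generic;
using Microsoft.ApplicationInsights;

var telemetry = new TelemetryClient();

// TrackEvent: a named event with a filterable property and a numeric metric.
telemetry.TrackEvent("GameStarted",
    new Dictionary<string, string> { { "game", "Chess" } },
    new Dictionary<string, double> { { "players", 2 } });

// TrackTrace: longer diagnostic text, such as a request payload.
telemetry.TrackTrace("Request payload: { \"symbol\": \"MSFT\" }");

// TrackException: exception details, including the stack trace.
try
{
    throw new InvalidOperationException("Example failure");
}
catch (Exception ex)
{
    telemetry.TrackException(ex);
}
```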
-To see these events, open [Search](./diagnostic-search.md) from the left menu, select the drop-down menu **Event types**, and then choose **Custom Event**, **Trace**, or **Exception**.
+To see these events, on the left menu, open [Search](./diagnostic-search.md). Select the dropdown menu **Event types**, and then choose **Custom Event**, **Trace**, or **Exception**.
-![Drill through](./media/asp-net-exceptions/customevents.png)
+![Screenshot that shows the Search screen.](./media/asp-net-exceptions/customevents.png)
> [!NOTE]
-> If your app generates a lot of telemetry, the adaptive sampling module will automatically reduce the volume that is sent to the portal by sending only a representative fraction of events. Events that are part of the same operation will be selected or deselected as a group, so that you can navigate between related events. For more information, see [Sampling in Application Insights](./sampling.md).
+> If your app generates a lot of telemetry, the adaptive sampling module will automatically reduce the volume that's sent to the portal by sending only a representative fraction of events. Events that are part of the same operation will be selected or deselected as a group so that you can navigate between related events. For more information, see [Sampling in Application Insights](./sampling.md).
-### How to see request POST data
+### See request POST data
Request details don't include the data sent to your app in a POST call. To have this data reported:

* [Install the SDK](./asp-net.md) in your application project.
-* Insert code in your application to call [Microsoft.ApplicationInsights.TrackTrace()](./api-custom-events-metrics.md#tracktrace). Send the POST data in the message parameter. There is a limit to the permitted size, so you should try to send just the essential data.
+* Insert code in your application to call [Microsoft.ApplicationInsights.TrackTrace()](./api-custom-events-metrics.md#tracktrace). Send the POST data in the message parameter. There's a limit to the permitted size, so you should try to send only the essential data.
* When you investigate a failed request, find the associated traces.
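As a sketch, the trace call might look like the following; `telemetryClient`, `postBody`, and `requestUrl` are placeholders for values your own handler captures.

```csharp
using System.Collections.Generic;
using Microsoft.ApplicationInsights.DataContracts;

// Trim the body first: there's a size limit, so send only the essential data.
string trimmedBody = postBody.Length > 1024 ? postBody.Substring(0, 1024) : postBody;

telemetryClient.TrackTrace(
    "POST body: " + trimmedBody,
    SeverityLevel.Information,
    new Dictionary<string, string> { { "url", requestUrl } });
```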
-## <a name="exceptions"></a> Capturing exceptions and related diagnostic data
-At first, you won't see in the portal all the exceptions that cause failures in your app. You'll see any browser exceptions (if you're using the [JavaScript SDK](./javascript.md) in your web pages). But most server exceptions are caught by IIS and you have to write a bit of code to see them.
+## <a name="exceptions"></a> Capture exceptions and related diagnostic data
+
+At first, the portal won't show all the exceptions that cause failures in your app. You'll see any browser exceptions, if you're using the [JavaScript SDK](./javascript.md) in your webpages. But most server exceptions are caught by IIS, and you have to write a bit of code to see them.
You can:

* **Log exceptions explicitly** by inserting code in exception handlers to report the exceptions.
* **Capture exceptions automatically** by configuring your ASP.NET framework. The necessary additions are different for different types of framework.
-## Reporting exceptions explicitly
+## Report exceptions explicitly
-The simplest way is to insert a call to `trackException()` in an exception handler.
+The simplest way to report exceptions is to insert a call to `trackException()` in an exception handler.
```vb
Try
    ' Code that can throw an exception.
Catch ex As Exception
    ' telemetry is an instance of TelemetryClient.
    telemetry.TrackException(ex)
End Try
```
-The properties and measurements parameters are optional, but are useful for [filtering and adding](./diagnostic-search.md) extra information. For example, if you have an app that can run several games, you could find all the exception reports related to a particular game. You can add as many items as you like to each dictionary.
+The properties and measurements parameters are optional, but they're useful for [filtering and adding](./diagnostic-search.md) extra information. For example, if you have an app that can run several games, you could find all the exception reports related to a particular game. You can add as many items as you want to each dictionary.
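A sketch of that games scenario follows; `telemetry` is a reused `TelemetryClient`, and the `game` object is invented for the example.

```csharp
using System;
using System.Collections.Generic;

try
{
    game.Run();
}
catch (Exception ex)
{
    var properties = new Dictionary<string, string>
        { { "Game", game.Name } };            // string property you can filter on
    var measurements = new Dictionary<string, double>
        { { "Users", game.ActiveUsers } };    // numeric metric
    telemetry.TrackException(ex, properties, measurements);
}
```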
## Browser exceptions

Most browser exceptions are reported.
-If your web page includes script files from content delivery networks or other domains, ensure your script tag has the attribute `crossorigin="anonymous"`, and that the server sends [CORS headers](https://enable-cors.org/). This will allow you to get a stack trace and detail for unhandled JavaScript exceptions from these resources.
+If your webpage includes script files from content delivery networks or other domains, ensure your script tag has the attribute `crossorigin="anonymous"` and that the server sends [CORS headers](https://enable-cors.org/). This behavior will allow you to get a stack trace and detail for unhandled JavaScript exceptions from these resources.
## Reuse your telemetry client

> [!NOTE]
-> The `TelemetryClient` is recommended to be instantiated once, and re-used throughout the life of an application.
+> We recommend that you instantiate the `TelemetryClient` once and reuse it throughout the life of an application.
With [Dependency Injection (DI) in .NET](/dotnet/core/extensions/dependency-injection), the appropriate .NET SDK, and correctly configuring Application Insights for DI, you can require the <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient> as a constructor parameter.
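A minimal sketch of that pattern, assuming your DI container is already configured for Application Insights; the class name is illustrative, and the field follows the `_telemetryClient` convention described next.

```csharp
using Microsoft.ApplicationInsights;

public class ExampleService
{
    private readonly TelemetryClient _telemetryClient;

    // The DI container supplies the single, reused TelemetryClient instance.
    public ExampleService(TelemetryClient telemetryClient)
    {
        _telemetryClient = telemetryClient;
    }
}
```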
In the preceding example, the `_telemetryClient` is a class-scoped variable of t
## MVC
-Starting with Application Insights Web SDK version 2.6 (beta3 and later), Application Insights collects unhandled exceptions thrown in the MVC 5+ controllers methods automatically. If you have previously added a custom handler to track such exceptions, you may remove it to prevent double tracking of exceptions.
+Starting with Application Insights Web SDK version 2.6 (beta 3 and later), Application Insights collects unhandled exceptions thrown in MVC 5+ controller methods automatically. If you've previously added a custom handler to track such exceptions, you can remove it to prevent double tracking of exceptions.
-There are a number of scenarios when an exception filter cannot correctly handle errors, when exceptions are thrown:
+There are several scenarios in which an exception filter can't correctly handle errors when exceptions are thrown:
-* From controller constructors.
-* From message handlers.
-* During routing.
-* During response content serialization.
-* During application start-up.
-* In background tasks.
+* From controller constructors
+* From message handlers
+* During routing
+* During response content serialization
+* During application start-up
+* In background tasks
-All exceptions *handled* by application still need to be tracked manually.
-Unhandled exceptions originating from controllers typically result in 500 "Internal Server Error" response. If such response is manually constructed as a result of handled exception (or no exception at all) it is tracked in corresponding request telemetry with `ResultCode` 500, however Application Insights SDK is unable to track corresponding exception.
+All exceptions *handled* by the application still need to be tracked manually. Unhandled exceptions originating from controllers typically result in a 500 "Internal Server Error" response. If such a response is manually constructed as a result of a handled exception, or no exception at all, it's tracked in the corresponding request telemetry with `ResultCode` 500. However, the Application Insights SDK is unable to track a corresponding exception.
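For example, a handled exception in an MVC 5 action can be reported like this; the controller and `GetQuotes` helper are invented for the sketch.

```csharp
using System;
using System.Web.Mvc;
using Microsoft.ApplicationInsights;

public class StockController : Controller
{
    private readonly TelemetryClient telemetry = new TelemetryClient();

    public ActionResult Stock()
    {
        try
        {
            return View(GetQuotes());
        }
        catch (TimeoutException ex)
        {
            // Handled here, so Application Insights won't collect it automatically.
            telemetry.TrackException(ex);
            return new HttpStatusCodeResult(500, "Stock service timed out");
        }
    }

    private object GetQuotes() => new object(); // stand-in for real work
}
```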
### Prior versions support If you use MVC 4 (and prior) of Application Insights Web SDK 2.5 (and prior), refer to the following examples to track exceptions.
-If the [CustomErrors](/previous-versions/dotnet/netframework-4.0/h0hfz6fc(v=vs.100)) configuration is `Off`, then exceptions will be available for the [HTTP Module](/previous-versions/dotnet/netframework-3.0/ms178468(v=vs.85)) to collect. However, if it is `RemoteOnly` (default), or `On`, then the exception will be cleared and not available for Application Insights to automatically collect. You can fix that by overriding the [System.Web.Mvc.HandleErrorAttribute class](/dotnet/api/system.web.mvc.handleerrorattribute), and applying the overridden class as shown for the different MVC versions below ([GitHub source](https://github.com/AppInsightsSamples/Mvc2UnhandledExceptions/blob/master/MVC2App/Controllers/AiHandleErrorAttribute.cs)):
+If the [CustomErrors](/previous-versions/dotnet/netframework-4.0/h0hfz6fc(v=vs.100)) configuration is `Off`, exceptions will be available for the [HTTP Module](/previous-versions/dotnet/netframework-3.0/ms178468(v=vs.85)) to collect. However, if it's `RemoteOnly` (default), or `On`, the exception will be cleared and not available for Application Insights to automatically collect. You can fix that behavior by overriding the [System.Web.Mvc.HandleErrorAttribute class](/dotnet/api/system.web.mvc.handleerrorattribute) and applying the overridden class as shown for the different MVC versions here (see the [GitHub source](https://github.com/AppInsightsSamples/Mvc2UnhandledExceptions/blob/master/MVC2App/Controllers/AiHandleErrorAttribute.cs)):
```csharp using System;
namespace MVC2App.Controllers
//The attribute should track exceptions only when CustomErrors setting is On
//if CustomErrors is Off, exceptions will be caught by AI HTTP Module
if (filterContext.HttpContext.IsCustomErrorEnabled)
- { //or reuse instance (recommended!). see note above
+ { //Or reuse instance (recommended!). See note above.
    var ai = new TelemetryClient();
    ai.TrackException(filterContext.Exception);
}
namespace MVC2App.Controllers
#### MVC 2
-Replace the HandleError attribute with your new attribute in your controllers.
+Replace the HandleError attribute with your new attribute in your controllers:
```csharp namespace MVC2App.Controllers
public class MyMvcApplication : System.Web.HttpApplication
[Sample](https://github.com/AppInsightsSamples/Mvc3UnhandledExceptionTelemetry)
-#### MVC 4, MVC5
+#### MVC 4, MVC 5
Register `AiHandleErrorAttribute` as a global filter in *FilterConfig.cs*:
public class FilterConfig
## Web API
-Starting with Application Insights Web SDK version 2.6 (beta3 and later), Application Insights collects unhandled exceptions thrown in the controller methods automatically for WebAPI 2+. If you have previously added a custom handler to track such exceptions (as described in following examples), you may remove it to prevent double tracking of exceptions.
+Starting with Application Insights Web SDK version 2.6 (beta 3 and later), Application Insights collects unhandled exceptions thrown in the controller methods automatically for Web API 2+. If you've previously added a custom handler to track such exceptions, as described in the following examples, you can remove it to prevent double tracking of exceptions.
-There are a number of cases that the exception filters cannot handle. For example:
+There are several cases that the exception filters can't handle. For example:
* Exceptions thrown from controller constructors.
* Exceptions thrown from message handlers.
* Exceptions thrown during routing.
* Exceptions thrown during response content serialization.
-* Exception thrown during application start-up.
+* Exceptions thrown during application startup.
* Exceptions thrown in background tasks.
-All exceptions *handled* by application still need to be tracked manually.
-Unhandled exceptions originating from controllers typically result in 500 "Internal Server Error" response. If such response is manually constructed as a result of handled exception (or no exception at all) it is tracked in a corresponding request telemetry with `ResultCode` 500, however Application Insights SDK is unable to track corresponding exception.
+All exceptions *handled* by the application still need to be tracked manually. Unhandled exceptions originating from controllers typically result in a 500 "Internal Server Error" response. If such a response is manually constructed as a result of a handled exception, or no exception at all, it's tracked in a corresponding request telemetry with `ResultCode` 500. However, the Application Insights SDK can't track a corresponding exception.
### Prior versions support
-If you use WebAPI 1 (and prior) of Application Insights Web SDK 2.5 (and prior), refer to the following examples to track exceptions.
+If you use Web API 1 (and earlier) of Application Insights Web SDK 2.5 (and earlier), refer to the following examples to track exceptions.
#### Web API 1.x
namespace WebAPI.App_Start
public override void OnException(HttpActionExecutedContext actionExecutedContext) { if (actionExecutedContext != null && actionExecutedContext.Exception != null)
- { //or reuse instance (recommended!). see note above
+ { //Or reuse instance (recommended!). See note above.
    var ai = new TelemetryClient();
    ai.TrackException(actionExecutedContext.Exception);
}
namespace ProductsAppPureWebAPI.App_Start
} ```
-Add this to the services in WebApiConfig:
+Add this snippet to the services in `WebApiConfig`:
```csharp using System.Web.Http;
namespace WebApi2WithMVC
As alternatives, you could:
-1. Replace the only ExceptionHandler with a custom implementation of IExceptionHandler. This is only called when the framework is still able to choose which response message to send (not when the connection is aborted for instance)
-2. Exception Filters (as described in the section on Web API 1.x controllers above) - not called in all cases.
+- Replace the only `ExceptionHandler` instance with a custom implementation of `IExceptionHandler`. This exception handler is only called when the framework is still able to choose which response message to send, not when the connection is aborted, for instance.
+- Use exception filters, as described in the preceding section on Web API 1.x controllers, which aren't called in all cases.
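Here's a minimal sketch of the first alternative, assuming Web API 2.1 or later; the class name is illustrative.

```csharp
using System.Web.Http.ExceptionHandling;
using Microsoft.ApplicationInsights;

public class AiExceptionHandler : ExceptionHandler
{
    private static readonly TelemetryClient Telemetry = new TelemetryClient();

    public override void Handle(ExceptionHandlerContext context)
    {
        if (context != null && context.Exception != null)
        {
            Telemetry.TrackException(context.Exception);
        }
        base.Handle(context);
    }
}
```

You'd then register it in `WebApiConfig.Register`, for example with `config.Services.Replace(typeof(IExceptionHandler), new AiExceptionHandler());`.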
## WCF
-Add a class that extends Attribute and implements IErrorHandler and IServiceBehavior.
+Add a class that extends `Attribute` and implements `IErrorHandler` and `IServiceBehavior`.
```csharp using System;
namespace WcfService4
## Exception performance counters
-If you have [installed the Azure Monitor Application Insights Agent](./status-monitor-v2-overview.md) on your server, you can get a chart of the exceptions rate, measured by .NET. This includes both handled and unhandled .NET exceptions.
+If you've [installed the Azure Monitor Application Insights Agent](./status-monitor-v2-overview.md) on your server, you can get a chart of the exceptions rate, measured by .NET. Both handled and unhandled .NET exceptions are included.
-Open a Metric Explorer tab, add a new chart, and select **Exception rate**, listed under Performance Counters.
+Open a metrics explorer tab and add a new chart. Under **Performance Counters**, select **Exception rate**.
-The .NET framework calculates the rate by counting the number of exceptions in an interval and dividing by the length of the interval.
+The .NET Framework calculates the rate by counting the number of exceptions in an interval and dividing by the length of the interval.
-This is different from the 'Exceptions' count calculated by the Application Insights portal counting TrackException reports. The sampling intervals are different, and the SDK doesn't send TrackException reports for all handled and unhandled exceptions.
+This count is different from the Exceptions count calculated by the Application Insights portal counting `TrackException` reports. The sampling intervals are different, and the SDK doesn't send `TrackException` reports for all handled and unhandled exceptions.
## Next steps
azure-monitor Asp Net Trace Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-trace-logs.md
# Explore .NET/.NET Core and Python trace logs in Application Insights
-Send diagnostic tracing logs for your ASP.NET/ASP.NET Core application from ILogger, NLog, log4Net, or System.Diagnostics.Trace to [Azure Application Insights][start]. For Python applications, send diagnostic tracing logs using AzureLogHandler in OpenCensus Python for Azure Monitor. You can then explore and search them. Those logs are merged with the other log files from your application, so you can identify traces that are associated with each user request and correlate them with other events and exception reports.
+Send diagnostic tracing logs for your ASP.NET/ASP.NET Core application from ILogger, NLog, log4Net, or System.Diagnostics.Trace to [Azure Application Insights][start]. For Python applications, send diagnostic tracing logs by using AzureLogHandler in OpenCensus Python for Azure Monitor. You can then explore and search for them. Those logs are merged with the other log files from your application. You can use them to identify traces that are associated with each user request and correlate them with other events and exception reports.
> [!NOTE]
-> Do you need the log-capture module? It's a useful adapter for third-party loggers. But if you aren't already using NLog, log4Net, or System.Diagnostics.Trace, consider just calling [**Application Insights TrackTrace()**](./api-custom-events-metrics.md#tracktrace) directly.
+> Do you need the log-capture module? It's a useful adapter for third-party loggers. But if you aren't already using NLog, log4Net, or System.Diagnostics.Trace, consider calling [**Application Insights TrackTrace()**](./api-custom-events-metrics.md#tracktrace) directly.
> > [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] ## Install logging on your app
-Install your chosen logging framework in your project, which should result in an entry in app.config or web.config.
+
+Install your chosen logging framework in your project, which should result in an entry in *app.config* or *web.config*.
```xml <configuration>
Install your chosen logging framework in your project, which should result in an
``` ## Configure Application Insights to collect logs+ [Add Application Insights to your project](./asp-net.md) if you haven't done that yet. You'll see an option to include the log collector. Or right-click your project in Solution Explorer to **Configure Application Insights**. Select the **Configure trace collection** option.
Or right-click your project in Solution Explorer to **Configure Application Insi
> No Application Insights menu or log collector option? Try [Troubleshooting](#troubleshooting). ## Manual installation
-Use this method if your project type isn't supported by the Application Insights installer (for example a Windows desktop project).
-1. If you plan to use log4Net or NLog, install it in your project.
-2. In Solution Explorer, right-click your project, and select **Manage NuGet Packages**.
-3. Search for "Application Insights."
-4. Select one of the following packages:
+Use this method if your project type isn't supported by the Application Insights installer. For example, if it's a Windows desktop project.
+
+1. If you plan to use log4net or NLog, install it in your project.
+1. In Solution Explorer, right-click your project, and select **Manage NuGet Packages**.
+1. Search for **Application Insights**.
+1. Select one of the following packages:
- - For ILogger: [Microsoft.Extensions.Logging.ApplicationInsights](https://www.nuget.org/packages/Microsoft.Extensions.Logging.ApplicationInsights/)
+ - **ILogger**: [Microsoft.Extensions.Logging.ApplicationInsights](https://www.nuget.org/packages/Microsoft.Extensions.Logging.ApplicationInsights/)
[![NuGet iLogger banner](https://img.shields.io/nuget/vpre/Microsoft.Extensions.Logging.ApplicationInsights.svg)](https://www.nuget.org/packages/Microsoft.Extensions.Logging.ApplicationInsights/)
- - For NLog: [Microsoft.ApplicationInsights.NLogTarget](https://www.nuget.org/packages/Microsoft.ApplicationInsights.NLogTarget/)
+ - **NLog**: [Microsoft.ApplicationInsights.NLogTarget](https://www.nuget.org/packages/Microsoft.ApplicationInsights.NLogTarget/)
[![NuGet NLog banner](https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.NLogTarget.svg)](https://www.nuget.org/packages/Microsoft.ApplicationInsights.NLogTarget/)
- - For Log4Net: [Microsoft.ApplicationInsights.Log4NetAppender](https://www.nuget.org/packages/Microsoft.ApplicationInsights.Log4NetAppender/)
+ - **log4net**: [Microsoft.ApplicationInsights.Log4NetAppender](https://www.nuget.org/packages/Microsoft.ApplicationInsights.Log4NetAppender/)
[![NuGet Log4Net banner](https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.Log4NetAppender.svg)](https://www.nuget.org/packages/Microsoft.ApplicationInsights.Log4NetAppender/)
- - For System.Diagnostics: [Microsoft.ApplicationInsights.TraceListener](https://www.nuget.org/packages/Microsoft.ApplicationInsights.TraceListener/)
+ - **System.Diagnostics**: [Microsoft.ApplicationInsights.TraceListener](https://www.nuget.org/packages/Microsoft.ApplicationInsights.TraceListener/)
[![NuGet System.Diagnostics banner](https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.TraceListener.svg)](https://www.nuget.org/packages/Microsoft.ApplicationInsights.TraceListener/) - [Microsoft.ApplicationInsights.DiagnosticSourceListener](https://www.nuget.org/packages/Microsoft.ApplicationInsights.DiagnosticSourceListener/) [![NuGet Diagnostic Source Listener banner](https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.DiagnosticSourceListener.svg)](https://www.nuget.org/packages/Microsoft.ApplicationInsights.DiagnosticSourceListener/)
The NuGet package installs the necessary assemblies and modifies web.config or a
For examples of using the Application Insights ILogger implementation with console applications and ASP.NET Core, see [ApplicationInsightsLoggerProvider for .NET Core ILogger logs](ilogger.md). ## Insert diagnostic log calls+ If you use System.Diagnostics.Trace, a typical call would be: ```csharp
If you prefer log4net or NLog, use:
``` ## Use EventSource events+ You can configure [System.Diagnostics.Tracing.EventSource](/dotnet/api/system.diagnostics.tracing.eventsource) events to be sent to Application Insights as traces. First, install the `Microsoft.ApplicationInsights.EventSourceListener` NuGet package. Then edit the `TelemetryModules` section of the [ApplicationInsights.config](./configuration-with-applicationinsights-config.md) file. ```xml
You can configure [System.Diagnostics.Tracing.EventSource](/dotnet/api/system.di
``` For each source, you can set the following parameters:+ * **Name** specifies the name of the EventSource to collect. * **Level** specifies the logging level to collect: *Critical*, *Error*, *Informational*, *LogAlways*, *Verbose*, or *Warning*. * **Keywords** (optional) specify the integer value of keyword combinations to use. ## Use DiagnosticSource events+ You can configure [System.Diagnostics.DiagnosticSource](https://github.com/dotnet/corefx/blob/master/src/System.Diagnostics.DiagnosticSource/src/DiagnosticSourceUsersGuide.md) events to be sent to Application Insights as traces. First, install the [`Microsoft.ApplicationInsights.DiagnosticSourceListener`](https://www.nuget.org/packages/Microsoft.ApplicationInsights.DiagnosticSourceListener) NuGet package. Then edit the "TelemetryModules" section of the [ApplicationInsights.config](./configuration-with-applicationinsights-config.md) file. ```xml
You can configure [System.Diagnostics.DiagnosticSource](https://github.com/dotne
</Add> ```
-For each DiagnosticSource you want to trace, add an entry with the **Name** attribute set to the name of your DiagnosticSource.
+For each diagnostic source you want to trace, add an entry with the `Name` attribute set to the name of your diagnostic source.
## Use ETW events+ You can configure Event Tracing for Windows (ETW) events to be sent to Application Insights as traces. First, install the `Microsoft.ApplicationInsights.EtwCollector` NuGet package. Then edit the "TelemetryModules" section of the [ApplicationInsights.config](./configuration-with-applicationinsights-config.md) file. > [!NOTE]
You can configure Event Tracing for Windows (ETW) events to be sent to Applicati
``` For each source, you can set the following parameters:+ * **ProviderName** is the name of the ETW provider to collect. * **ProviderGuid** specifies the GUID of the ETW provider to collect. It can be used instead of `ProviderName`. * **Level** sets the logging level to collect. It can be *Critical*, *Error*, *Informational*, *LogAlways*, *Verbose*, or *Warning*. * **Keywords** (optional) set the integer value of keyword combinations to use. ## Use the Trace API directly+ You can call the Application Insights trace API directly. The logging adapters use this API. For example:
var telemetryClient = new TelemetryClient(configuration);
telemetryClient.TrackTrace("Slow response - database01"); ```
-An advantage of TrackTrace is that you can put relatively long data in the message. For example, you can encode POST data there.
+An advantage of `TrackTrace` is that you can put relatively long data in the message. For example, you can encode POST data there.
You can also add a severity level to your message. And, like other telemetry, you can add property values to help filter or search for different sets of traces. For example:
You can also add a severity level to your message. And, like other telemetry, yo
new Dictionary<string, string> { { "database", "db.ID" } }); ```
-This would enable you to easily filter out in [Search][diagnostic] all the messages of a particular severity level that relate to a particular database.
+Now you can easily filter out in [Search][diagnostic] all the messages of a particular severity level that relate to a particular database.
## AzureLogHandler for OpenCensus Python+ The Azure Monitor Log Handler allows you to export Python logs to Azure Monitor. Instrument your application with the [OpenCensus Python SDK](./opencensus-python.md) for Azure Monitor.
logger.warning('Hello, World!')
``` ## Explore your logs+ Run your app in debug mode or deploy it live.
-In your app's overview pane in [the Application Insights portal][portal], select [Search][diagnostic].
+In your app's overview pane in the [Application Insights portal][portal], select [Search][diagnostic].
You can, for example: * Filter on log traces or on items with specific properties. * Inspect a specific item in detail.
-* Find other system log data that relates to the same user request (has the same OperationId).
+* Find other system log data that relates to the same user request (has the same operation ID).
* Save the configuration of a page as a favorite. > [!NOTE]
-> If your application sends a lot of data and you're using the Application Insights SDK for ASP.NET version 2.0.0-beta3 or later, the *adaptive sampling* feature may operate and send only a portion of your telemetry. [Learn more about sampling.](./sampling.md)
+> If your application sends a lot of data and you're using the Application Insights SDK for ASP.NET version 2.0.0-beta3 or later, the *adaptive sampling* feature might operate and send only a portion of your telemetry. Learn more about [sampling](./sampling.md).
> ## Troubleshooting
-### Delayed telemetry, overloading network, or inefficient transmission
-System.Diagnostics.Tracing has an [Autoflush feature](/dotnet/api/system.diagnostics.trace.autoflush). This causes SDK to flush with every telemetry item, which is undesirable, and can cause logging adapter issues like delayed telemetry, overloading network, inefficient transmission, etc.
+Find answers to common questions.
+### What causes delayed telemetry, an overloaded network, and inefficient transmission?
+System.Diagnostics.Trace has an [Autoflush feature](/dotnet/api/system.diagnostics.trace.autoflush). When it's enabled, the SDK flushes with every telemetry item, which is undesirable and can cause logging adapter issues like delayed telemetry, an overloaded network, and inefficient transmission.
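As a sketch, you can make sure autoflush stays off in code; it can also be controlled through the `<trace>` element in configuration.

```csharp
using System.Diagnostics;

// Let the Application Insights adapter batch telemetry instead of
// flushing on every trace call.
Trace.AutoFlush = false;
```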
### How do I do this for Java?
-The Application Insights Java agent collects logs from Log4j, Logback and java.util.logging out of the box.
+In Java codeless instrumentation, which is recommended, the logs are collected out of the box. Use the [Java 3.0 agent](./java-in-process-agent.md).
+
+The Application Insights Java agent collects logs from Log4j, Logback, and java.util.logging out of the box.
+
+### Why is there no Application Insights option on the project context menu?
+
+* Make sure that Developer Analytics Tools is installed on the development machine. In Visual Studio, go to **Tools** > **Extensions and Updates**, and look for **Developer Analytics Tools**. If it isn't on the **Installed** tab, open the **Online** tab and install it.
+* This project type might be one that Developer Analytics Tools doesn't support. Use [manual installation](#manual-installation).
-### There's no Application Insights option on the project context menu
-* Make sure that Developer Analytics Tools is installed on the development machine. At Visual Studio **Tools** > **Extensions and Updates**, look for **Developer Analytics Tools**. If it isn't on the **Installed** tab, open the **Online** tab and install it.
-* This might be a project type that Developer Analytics Tools doesn't support. Use [manual installation](#manual-installation).
+### Why is there no log adapter option in the configuration tool?
-### There's no log adapter option in the configuration tool
* Install the logging framework first.
-* If you're using System.Diagnostics.Trace, make sure that you've it [configured in *web.config*](/dotnet/api/system.diagnostics.eventlogtracelistener).
-* Make sure that you have the latest version of Application Insights. In Visual Studio, go to **Tools** > **Extensions and Updates**, and open the **Updates** tab. If **Developer Analytics Tools** is there, select it to update it.
+* If you're using System.Diagnostics.Trace, make sure that you've [configured it in *web.config*](/dotnet/api/system.diagnostics.eventlogtracelistener).
+* Make sure that you have the latest version of Application Insights. In Visual Studio, go to **Tools** > **Extensions and Updates** and open the **Updates** tab. If **Developer Analytics Tools** is there, select it to update it.
+
+### <a name="emptykey"></a>Why do I get the "Instrumentation key cannot be empty" error message?
-### <a name="emptykey"></a>I get the "Instrumentation key cannot be empty" error message
You probably installed the logging adapter NuGet package without installing Application Insights. In Solution Explorer, right-click *ApplicationInsights.config*, and select **Update Application Insights**. You'll be prompted to sign in to Azure and create an Application Insights resource or reuse an existing one. That should fix the problem.
-### I can see traces but not other events in diagnostic search
+### Why can I see traces but not other events in diagnostic search?
+ It can take a while for all the events and requests to get through the pipeline. ### <a name="limits"></a>How much data is retained?
-Several factors affect the amount of data that's retained. For more information, see the [limits](./api-custom-events-metrics.md#limits) section of the customer event metrics page.
-### I don't see some log entries that I expected
-If your application sends voluminous amounts of data and you're using the Application Insights SDK for ASP.NET version 2.0.0-beta3 or later, the adaptive sampling feature may operate and send only a portion of your telemetry. [Learn more about sampling.](./sampling.md)
+Several factors affect the amount of data that's retained. For more information, see the [Limits](./api-custom-events-metrics.md#limits) section of the customer event metrics page.
+
+### Why don't I see some log entries that I expected?
+
+Perhaps your application sends voluminous amounts of data and you're using the Application Insights SDK for ASP.NET version 2.0.0-beta3 or later. In this case, the adaptive sampling feature might operate and send only a portion of your telemetry. Learn more about [sampling](./sampling.md).
## <a name="add"></a>Next steps
If your application sends voluminous amounts of data and you're using the Applic
[exceptions]: asp-net-exceptions.md [portal]: https://portal.azure.com/ [qna]: ../faq.yml
-[start]: ./app-insights-overview.md
+[start]: ./app-insights-overview.md
azure-monitor Azure Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps.md
Title: Monitor Azure app services performance | Microsoft Docs
-description: Application performance monitoring for Azure app services. Chart load and response time, dependency information, and set alerts on performance.
+ Title: Monitor Azure App Service performance | Microsoft Docs
+description: Application performance monitoring for Azure App Service. Chart load and response time, dependency information, and set alerts on performance.
Last updated 08/05/2021
-# Application Monitoring for Azure App Service Overview
+# Application monitoring for Azure App Service overview
-Enabling monitoring on your ASP.NET, ASP.NET Core, Java, and Node.js based web applications running on [Azure App Services](../../app-service/index.yml) is now easier than ever. Whereas previously you needed to manually instrument your app, the latest extension/agent is now built into the App Service image by default.
+It's now easier than ever to enable monitoring on your web applications based on ASP.NET, ASP.NET Core, Java, and Node.js running on [Azure App Service](../../app-service/index.yml). Previously, you needed to manually instrument your app, but the latest extension/agent is now built into the App Service image by default.
## Enable Application Insights
-There are two ways to enable application monitoring for Azure App Services hosted applications:
+There are two ways to enable monitoring for applications hosted on App Service:
-- **Auto-instrumentation application monitoring** (ApplicationInsightsAgent).
-
- - This method is the easiest to enable, and no code change or advanced configurations are required. It is often referred to as "runtime" monitoring. For Azure App Services we recommend at a minimum enabling this level of monitoring, and then based on your specific scenario you can evaluate whether more advanced monitoring through manual instrumentation is needed.
+- **Auto-instrumentation application monitoring** (ApplicationInsightsAgent).
+
+ This method is the easiest to enable, and no code change or advanced configurations are required. It's often referred to as "runtime" monitoring. For App Service, we recommend that at a minimum you enable this level of monitoring. Based on your specific scenario, you can evaluate whether more advanced monitoring through manual instrumentation is needed.
+
+ The following platforms are supported for auto-instrumentation monitoring:
+
+ - [.NET Core](./azure-web-apps-net-core.md)
+ - [.NET](./azure-web-apps-net.md)
+ - [Java](./azure-web-apps-java.md)
+ - [Node.js](./azure-web-apps-nodejs.md)
- - The following are supported for auto-instrumentation monitoring:
- - [.NET Core](./azure-web-apps-net-core.md)
- - [.NET](./azure-web-apps-net.md)
- - [Java](./azure-web-apps-java.md)
- - [Nodejs](./azure-web-apps-nodejs.md)
-
* **Manually instrumenting the application through code** by installing the Application Insights SDK.
- * This approach is much more customizable, but it requires the following approaches: SDK for [.NET Core](./asp-net-core.md), [.NET](./asp-net.md), [Node.js](./nodejs.md), [Python](./opencensus-python.md), and a standalone agent for [Java](./java-in-process-agent.md). This method, also means you have to manage the updates to the latest version of the packages yourself.
+ This approach is much more customizable, but it requires the SDK for [.NET Core](./asp-net-core.md), [.NET](./asp-net.md), [Node.js](./nodejs.md), or [Python](./opencensus-python.md), or a standalone agent for [Java](./java-in-process-agent.md). This method also means you must manage the updates to the latest version of the packages yourself.
+
+ If you need to make custom API calls to track events/dependencies not captured by default with auto-instrumentation monitoring, you'll need to use this method. To learn more, see [Application Insights API for custom events and metrics](./api-custom-events-metrics.md).
- * If you need to make custom API calls to track events/dependencies not captured by default with auto-instrumentation monitoring, you would need to use this method. Check out the [API for custom events and metrics article](./api-custom-events-metrics.md) to learn more.
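As a sketch, such a custom call with the .NET SDK might look like this; the event name is invented.

```csharp
using Microsoft.ApplicationInsights;

var telemetry = new TelemetryClient();
telemetry.TrackEvent("OrderPlaced"); // not captured by auto-instrumentation
```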
+If both auto-instrumentation monitoring and manual SDK-based instrumentation are detected, in .NET only the manual instrumentation settings will be honored, while in Java only the auto-instrumentation will emit telemetry. This practice prevents duplicate data from being sent.
> [!NOTE]
-> If both auto-instrumentation monitoring and manual SDK-based instrumentation are detected, in .NET only the manual instrumentation settings will be honored, while in Java only the auto-instrumentation will be emitting the telemetry. This is to prevent duplicate data from being sent.
+> Snapshot Debugger and Profiler are only available in .NET and .NET Core.
-> [!NOTE]
-> Snapshot debugger and profiler are only available in .NET and .NET Core
+## Next steps
-## Next Steps
-- Learn how to enable auto-instrumentation application monitoring for your [.NET Core](./azure-web-apps-net-core.md), [.NET](./azure-web-apps-net.md), [Java](./azure-web-apps-java.md) or [Nodejs](./azure-web-apps-nodejs.md) application running on App Service.
+Learn how to enable auto-instrumentation application monitoring for your [.NET Core](./azure-web-apps-net-core.md), [.NET](./azure-web-apps-net.md), [Java](./azure-web-apps-java.md), or [Node.js](./azure-web-apps-nodejs.md) application running on App Service.
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Once the migration is complete, you can use [diagnostic settings](../essentials/
- Check your current retention settings under **General** > **Usage and estimated costs** > **Data Retention** for your Log Analytics workspace. This setting will affect how long any new ingested data is stored once you migrate your Application Insights resource. > [!NOTE]
- > - If you currently store Application Insights data for longer than the default 90 days and want to retain this larger retention period after migration, you will need to adjust your [workspace retention settings](https://docs.microsoft.com/azure/azure-monitor/logs/data-retention-archive?tabs=portal-1%2Cportal-2#set-retention-and-archive-policy-by-table) from the default 90 days to the desired longer retention period.
+ > - If you currently store Application Insights data for longer than the default 90 days and want to retain this larger retention period after migration, you will need to adjust your [workspace retention settings](/azure/azure-monitor/logs/data-retention-archive?tabs=portal-1%2Cportal-2#set-retention-and-archive-policy-by-table) from the default 90 days to the desired longer retention period.
> - If you've selected data retention greater than 90 days on data ingested into the Classic Application Insights resource prior to migration, data retention will continue to be billed through that Application Insights resource until that data exceeds the retention period.
> - If the retention setting for your Application Insights instance under **Configure** > **Usage and estimated costs** > **Data Retention** is enabled, then use that setting to control the retention days for the telemetry data still saved in your classic resource's storage.
azure-monitor Correlation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/correlation.md
Title: Azure Application Insights telemetry correlation | Microsoft Docs
-description: Application Insights telemetry correlation
+description: This article explains Application Insights telemetry correlation.
Last updated 06/07/2019 ms.devlang: csharp, java, javascript, python
This article explains the data model used by Application Insights to correlate t
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] - ## Data model for telemetry correlation Application Insights defines a [data model](../../azure-monitor/app/data-model.md) for distributed telemetry correlation. To associate telemetry with a logical operation, every telemetry item has a context field called `operation_Id`. This identifier is shared by every telemetry item in the distributed trace. So even if you lose telemetry from a single layer, you can still associate telemetry reported by other components.
Every outgoing operation, such as an HTTP call to another component, is represen
You can build a view of the distributed logical operation by using `operation_Id`, `operation_parentId`, and `request.id` with `dependency.id`. These fields also define the causality order of telemetry calls.
-In a microservices environment, traces from components can go to different storage items. Every component can have its own connection string in Application Insights. To get telemetry for the logical operation, Application Insights queries data from every storage item. When the number of storage items is large, you'll need a hint about where to look next. The Application Insights data model defines two fields to solve this problem: `request.source` and `dependency.target`. The first field identifies the component that initiated the dependency request. The second field identifies which component returned the response of the dependency call.
+In a microservices environment, traces from components can go to different storage items. Every component can have its own connection string in Application Insights. To get telemetry for the logical operation, Application Insights queries data from every storage item.
+
+When the number of storage items is large, you'll need a hint about where to look next. The Application Insights data model defines two fields to solve this problem: `request.source` and `dependency.target`. The first field identifies the component that initiated the dependency request. The second field identifies which component returned the response of the dependency call.
-For information on querying from multiple disparate instances using the `app` query expression, see [app() expression in Azure Monitor query](../logs/app-expression.md#app-expression-in-azure-monitor-query).
+For information on querying from multiple disparate instances by using the `app` query expression, see [app() expression in Azure Monitor query](../logs/app-expression.md#app-expression-in-azure-monitor-query).
## Example
In the results, all telemetry items share the root `operation_Id`. When an Ajax
| request | GET Home/Stock | KqKwlrSt9PA= | qJSXU | STYz | | dependency | GET /api/stock/value | bBrf2L7mm2g= | KqKwlrSt9PA= | STYz |
-When the call `GET /api/stock/value` is made to an external service, you need to know the identity of that server so you can set the `dependency.target` field appropriately. When the external service doesn't support monitoring, `target` is set to the host name of the service (for example, `stock-prices-api.com`). But if the service identifies itself by returning a predefined HTTP header, `target` contains the service identity that allows Application Insights to build a distributed trace by querying telemetry from that service.
+When the call `GET /api/stock/value` is made to an external service, you need to know the identity of that server so you can set the `dependency.target` field appropriately. When the external service doesn't support monitoring, `target` is set to the host name of the service. An example is `stock-prices-api.com`. But if the service identifies itself by returning a predefined HTTP header, `target` contains the service identity that allows Application Insights to build a distributed trace by querying telemetry from that service.
## Correlation headers using W3C TraceContext
For more information, see [Application Insights telemetry data model](../../azur
### Enable W3C distributed tracing support for .NET apps
-W3C TraceContext based distributed tracing is enabled by default in all recent
+W3C TraceContext-based distributed tracing is enabled by default in all recent
.NET Framework/.NET Core SDKs, along with backward compatibility with legacy Request-Id protocol. ### Enable W3C distributed tracing support for Java apps #### Java 3.0 agent
- Java 3.0 agent supports W3C out of the box and no more configuration is needed.
+ The Java 3.0 agent supports W3C out of the box, and no further configuration is needed.
#### Java SDK+ - **Incoming configuration**
- - For Java EE apps, add the following to the `<TelemetryModules>` tag in ApplicationInsights.xml:
+ For Java EE apps, add the following code to the `<TelemetryModules>` tag in *ApplicationInsights.xml*:
- ```xml
- <Add type="com.microsoft.applicationinsights.web.extensibility.modules.WebRequestTrackingTelemetryModule>
- <Param name = "W3CEnabled" value ="true"/>
- <Param name ="enableW3CBackCompat" value = "true" />
- </Add>
- ```
+ ```xml
+ <Add type="com.microsoft.applicationinsights.web.extensibility.modules.WebRequestTrackingTelemetryModule">
+   <Param name="W3CEnabled" value="true"/>
+   <Param name="enableW3CBackCompat" value="true"/>
+ </Add>
+ ```
- - For Spring Boot apps, add these properties:
+ For Spring Boot apps, add these properties:
- - `azure.application-insights.web.enable-W3C=true`
- - `azure.application-insights.web.enable-W3C-backcompat-mode=true`
+ - `azure.application-insights.web.enable-W3C=true`
+ - `azure.application-insights.web.enable-W3C-backcompat-mode=true`
- **Outgoing configuration**
- Add the following to AI-Agent.xml:
+ Add the following code to *AI-Agent.xml*:
```xml <Instrumentation>
W3C TraceContext based distributed tracing is enabled by default in all recent
> [!NOTE] > Backward compatibility mode is enabled by default, and the `enableW3CBackCompat` parameter is optional. Use it only when you want to turn backward compatibility off. >
- > Ideally, you would turn this off when all your services have been updated to newer versions of SDKs that support the W3C protocol. We highly recommend that you move to these newer SDKs as soon as possible.
> Ideally, you'll turn off this mode when all your services are updated to newer versions of SDKs that support the W3C protocol. We highly recommend that you move to these newer SDKs as soon as possible.
-> [!IMPORTANT]
-> Make sure the incoming and outgoing configurations are exactly the same.
+It's important to make sure the incoming and outgoing configurations are exactly the same.
-### Enable W3C distributed tracing support for Web apps
+### Enable W3C distributed tracing support for web apps
This feature is in `Microsoft.ApplicationInsights.JavaScript`. It's disabled by default. To enable it, use `distributedTracingMode` config. AI_AND_W3C is provided for backward compatibility with any legacy services instrumented by Application Insights. -- **[npm based setup](./javascript.md#npm-based-setup)**
+- **[npm-based setup](./javascript.md#npm-based-setup)**
-Add the following configuration:
+ Add the following configuration:
```JavaScript distributedTracingMode: DistributedTracingModes.W3C ``` -- **[Snippet based setup](./javascript.md#snippet-based-setup)**
+- **[Snippet-based setup](./javascript.md#snippet-based-setup)**
-Add the following configuration:
+ Add the following configuration:
``` distributedTracingMode: 2 // DistributedTracingModes.W3C ```
Add the following configuration:
OpenCensus Python supports [W3C Trace-Context](https://w3c.github.io/trace-context/) without requiring extra configuration.
-As a reference, the OpenCensus data model can be found [here](https://github.com/census-instrumentation/opencensus-specs/tree/master/trace).
+For reference, you can find the OpenCensus data model on [this GitHub page](https://github.com/census-instrumentation/opencensus-specs/tree/master/trace).
### Incoming request correlation
if __name__ == '__main__':
``` This code runs a sample Flask application on your local machine, listening to port `8080`. To correlate trace context, you send a request to the endpoint. In this example, you can use a `curl` command:+ ``` curl --header "traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01" localhost:8080 ```+ By looking at the [Trace-Context header format](https://www.w3.org/TR/trace-context/#trace-context-http-headers-format), you can derive the following information: `version`: `00`
By looking at the [Trace-Context header format](https://www.w3.org/TR/trace-cont
`trace-flags`: `01`
-If you look at the request entry that was sent to Azure Monitor, you can see fields populated with the trace header information. You can find the data under Logs (Analytics) in the Azure Monitor Application Insights resource.
+If you look at the request entry that was sent to Azure Monitor, you can see fields populated with the trace header information. You can find the data under **Logs (Analytics)** in the Azure Monitor Application Insights resource.
-![Request telemetry in Logs (Analytics)](./media/opencensus-python/0011-correlation.png)
+![Screenshot that shows Request telemetry in Logs (Analytics).](./media/opencensus-python/0011-correlation.png)
-The `id` field is in the format `<trace-id>.<span-id>`, where the `trace-id` is taken from the trace header that was passed in the request and the `span-id` is a generated 8-byte array for this span.
+The `id` field is in the format `<trace-id>.<span-id>`, where `trace-id` is taken from the trace header that was passed in the request and `span-id` is a generated 8-byte array for this span.
-The `operation_ParentId` field is in the format `<trace-id>.<parent-id>`, where both the `trace-id` and the `parent-id` are taken from the trace header that was passed in the request.
+The `operation_ParentId` field is in the format `<trace-id>.<parent-id>`, where both `trace-id` and `parent-id` are taken from the trace header that was passed in the request.
### Log correlation
-OpenCensus Python enables you to correlate logs by adding a trace ID, a span ID, and a sampling flag to log records. You add these attributes by installing OpenCensus [logging integration](https://pypi.org/project/opencensus-ext-logging/). The following attributes will be added to Python `LogRecord` objects: `traceId`, `spanId`, and `traceSampled`. (applicable only for loggers that are created after the integration)
+OpenCensus Python enables you to correlate logs by adding a trace ID, a span ID, and a sampling flag to log records. You add these attributes by installing OpenCensus [logging integration](https://pypi.org/project/opencensus-ext-logging/). The following attributes will be added to Python `LogRecord` objects: `traceId`, `spanId`, and `traceSampled` (applicable only for loggers that are created after the integration).
Install the OpenCensus logging integration:
When this code runs, the following prints in the console:
2019-10-17 11:25:59,384 traceId=c54cb1d4bbbec5864bf0917c64aeacdc spanId=70da28f5a4831014 In the span 2019-10-17 11:25:59,385 traceId=c54cb1d4bbbec5864bf0917c64aeacdc spanId=0000000000000000 After the span ```+ Notice that there's a `spanId` present for the log message that's within the span. The `spanId` is the same as that which belongs to the span named `hello`.
-You can export the log data by using `AzureLogHandler`. For more information, see [this article](./opencensus-python.md#logs).
+You can export the log data by using `AzureLogHandler`. For more information, see [Set up Azure Monitor for your Python application](./opencensus-python.md#logs).
-We can also pass trace information from one component to another for proper correlation. For example, consider a scenario where there are two components `module1` and `module2`. Module1 calls functions in Module2 and to get logs from both `module1` and `module2` in a single trace we can use following approach:
+We can also pass trace information from one component to another for proper correlation. For example, consider a scenario where there are two components, `module1` and `module2`. Module1 calls functions in Module2. To get logs from both `module1` and `module2` in a single trace, we can use the following approach:
```python
# module1.py
```
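The bodies of the two modules are elided in this excerpt. The blocks below are a sketch of the pattern rather than the original sample; the helper name `function_1` and the span names are illustrative. `module1` starts a span and passes its tracer into `module2`, which builds a tracer from the caller's `span_context` so that both modules log under the same trace:

```python
# module1.py (sketch)
import logging

from opencensus.trace import config_integration
from opencensus.trace.samplers import AlwaysOnSampler
from opencensus.trace.tracer import Tracer

from module2 import function_1  # hypothetical helper defined in module2.py

config_integration.trace_integrations(['logging'])
logging.basicConfig(
    format='%(asctime)s traceId=%(traceId)s spanId=%(spanId)s %(message)s')
logger = logging.getLogger(__name__)

tracer = Tracer(sampler=AlwaysOnSampler())

# Start a span here and hand the tracer to module2 so that log records
# from both modules carry the same traceId.
with tracer.span(name='span1'):
    logger.warning('In module1')
    function_1(tracer)
```

```python
# module2.py (sketch)
import logging

from opencensus.trace.samplers import AlwaysOnSampler
from opencensus.trace.tracer import Tracer

logger = logging.getLogger(__name__)

def function_1(parent_tracer):
    # Create a tracer from the caller's span context so spans and log
    # records produced here join the caller's trace.
    tracer = Tracer(span_context=parent_tracer.span_context,
                    sampler=AlwaysOnSampler())
    with tracer.span(name='span2'):
        logger.warning('In module2')
```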
The Application Insights .NET SDK uses `DiagnosticSource` and `Activity` to coll
<a name="java-correlation"></a>

## Telemetry correlation in Java
-[Application Insights Java](./java-in-process-agent.md) supports automatic correlation of telemetry.
-It automatically populates `operation_id` for all telemetry (like traces, exceptions, and custom events) issued within the scope of a request. It also propagates the correlation headers (described earlier) for service-to-service calls via HTTP, RPC, and messaging. See the list of Application Insights Java's
-[autocollected dependencies which support distributed trace propagation](java-in-process-agent.md#autocollected-dependencies).
+The [Java agent](./java-in-process-agent.md) supports automatic correlation of telemetry. It automatically populates `operation_id` for all telemetry (like traces, exceptions, and custom events) issued within the scope of a request. It also propagates the correlation headers that were described earlier for service-to-service calls via HTTP, if the [Java SDK agent](java-2x-agent.md) is configured.
> [!NOTE]
-> See [custom telemetry](./java-in-process-agent.md#custom-telemetry) if the auto-instrumentation does not cover all
-> of your needs.
+> Application Insights Java agent autocollects requests and dependencies for JMS, Kafka, Netty/Webflux, and more. For Java SDK, only calls made via Apache HttpClient are supported for the correlation feature. Automatic context propagation across messaging technologies like Kafka, RabbitMQ, and Azure Service Bus isn't supported in the SDK.
+
+To collect custom telemetry, you need to instrument the application with Java 2.6 SDK.
### Role names
-You might want to customize the way component names are displayed in the [Application Map](../../azure-monitor/app/app-map.md). To do so, you can manually set the `cloud_RoleName` by taking one of the following actions:
+You might want to customize the way component names are displayed in [Application Map](../../azure-monitor/app/app-map.md). To do so, you can manually set `cloud_RoleName` by taking one of the following actions:
- For Application Insights Java, set the cloud role name as follows:
You might want to customize the way component names are displayed in the [Applic
}
```
- You can also set the cloud role name using via environment variable or system property,
- see [configuring cloud role name](./java-standalone-config.md#cloud-role-name) for details.
+ You can also set the cloud role name by using the environment variable `APPLICATIONINSIGHTS_ROLE_NAME`.
+
+- With Application Insights Java SDK 2.5.0 and later, you can specify `cloud_RoleName`
+ by adding `<RoleName>` to your *ApplicationInsights.xml* file:
+
+ :::image type="content" source="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png" alt-text="Screenshot that shows Application Insights overview and connection string." lightbox="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png":::
+
+ ```xml
+ <?xml version="1.0" encoding="utf-8"?>
+ <ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings" schemaVersion="2014-05-30">
+ <ConnectionString>InstrumentationKey=00000000-0000-0000-0000-000000000000</ConnectionString>
+ <RoleName>** Your role name **</RoleName>
+ ...
+ </ApplicationInsights>
+ ```
+
+- If you use Spring Boot with the Application Insights Spring Boot Starter, set your custom name for the application in the *application.properties* file:
+
+ `spring.application.name=<name-of-app>`
+
+You can also set the cloud role name via environment variable or system property. See [Configuring cloud role name](./java-standalone-config.md#cloud-role-name) for details.
## Next steps
azure-monitor Diagnostic Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/diagnostic-search.md
Title: Using Search in Azure Application Insights | Microsoft Docs
+ Title: Use Search in Azure Application Insights | Microsoft Docs
description: Search and filter raw telemetry sent by your web app. Last updated 07/30/2019
-# Using Search in Application Insights
+# Use Search in Application Insights
-Transaction search is a feature of [Application Insights](./app-insights-overview.md) that you use to find and explore individual telemetry items, such as page views, exceptions, or web requests. And you can view log traces and events that you have coded.
+Transaction search is a feature of [Application Insights](./app-insights-overview.md) that you use to find and explore individual telemetry items, such as page views, exceptions, or web requests. You can also view log traces and events that you've coded.
-(For more complex queries over your data, use [Analytics](../logs/log-analytics-tutorial.md).)
+For more complex queries over your data, use [Log Analytics](../logs/log-analytics-tutorial.md).
## Where do you see Search?
+You can find **Search** in the Azure portal or Visual Studio.
+ ### In the Azure portal
-You can open transaction search from the Application Insights Overview tab of your application (located at in the top bar) or under investigate on the left.
+You can open transaction search from the Application Insights **Overview** tab of your application. You can also select **Search** under **Investigate** on the left menu.
-![Search tab](./media/diagnostic-search/view-custom-events.png)
+![Screenshot that shows the Search tab.](./media/diagnostic-search/view-custom-events.png)
-Go to the Event types' drop-down menu to see a list of telemetry items- server requests, page views, custom events that you have coded, and so on. At the top of the results' list, is a summary chart showing counts of events over time.
+Go to the **Event types** dropdown menu to see a list of telemetry items such as server requests, page views, and custom events that you've coded. At the top of the **Results** list is a summary chart showing counts of events over time.
-Click out of the drop-down menu or Refresh to get new events.
+Back out of the dropdown menu or select **Refresh** to get new events.
### In Visual Studio
-In Visual Studio, there's also an Application Insights Search window. It's most useful for displaying telemetry events generated by the application that you're debugging. But it can also show the events collected from your published app at the Azure portal.
+In Visual Studio, there's also an **Application Insights Search** window. It's most useful for displaying telemetry events generated by the application that you're debugging. But it can also show the events collected from your published app at the Azure portal.
-Open the Search window in Visual Studio:
+Open the **Application Insights Search** window in Visual Studio:
-![Visual Studio open Application Insights search](./media/diagnostic-search/32.png)
+![Screenshot that shows Visual Studio open to Application Insights Search.](./media/diagnostic-search/32.png)
-The Search window has features similar to the web portal:
+The **Application Insights Search** window has features similar to the web portal:
-![Visual Studio Application Insights search window](./media/diagnostic-search/34.png)
+![Screenshot that shows Visual Studio Application Insights Search window.](./media/diagnostic-search/34.png)
-The Track Operation tab is available when you open a request or a page view. An 'operation' is a sequence of events that is associated with to a single request or page view. For example, dependency calls, exceptions, trace logs, and custom events might be part of a single operation. The Track Operation tab shows graphically the timing and duration of these events in relation to the request or page view.
+The **Track Operation** tab is available when you open a request or a page view. An "operation" is a sequence of events that's associated with a single request or page view. For example, dependency calls, exceptions, trace logs, and custom events might be part of a single operation. The **Track Operation** tab shows graphically the timing and duration of these events in relation to the request or page view.
## Inspect individual items

Select any telemetry item to see key fields and related items.
-![Screenshot of an individual dependency request](./media/diagnostic-search/telemetry-item.png)
+![Screenshot that shows an individual dependency request.](./media/diagnostic-search/telemetry-item.png)
-This will launch the end-to-end transaction details view.
+The end-to-end transaction details view opens.
## Filter event types
-Open the Event types' drop-down menu and choose the event types you want to see. (If, later, you want to restore the filters, click Reset.)
+Open the **Event types** dropdown menu and choose the event types you want to see. If you want to restore the filters later, select **Reset**.
The event types are:
-* **Trace** - [Diagnostic logs](./asp-net-trace-logs.md) including TrackTrace, log4Net, NLog, and System.Diagnostic.Trace calls.
-* **Request** - HTTP requests received by your server application, including pages, scripts, images, style files, and data. These events are used to create the request and response overview charts.
-* **Page View** - [Telemetry sent by the web client](./javascript.md), used to create page view reports.
-* **Custom Event** - If you inserted calls to TrackEvent() in order to [monitor usage](./api-custom-events-metrics.md), you can search them here.
-* **Exception** - Uncaught [exceptions in the server](./asp-net-exceptions.md), and those that you log by using TrackException().
-* **Dependency** - [Calls from your server application](./asp-net-dependencies.md) to other services such as REST APIs or databases, and AJAX calls from your [client code](./javascript.md).
-* **Availability** - Results of [availability tests](./monitor-web-app-availability.md).
+* **Trace**: [Diagnostic logs](./asp-net-trace-logs.md) including TrackTrace, log4Net, NLog, and System.Diagnostic.Trace calls.
+* **Request**: HTTP requests received by your server application including pages, scripts, images, style files, and data. These events are used to create the request and response overview charts.
+* **Page View**: [Telemetry sent by the web client](./javascript.md) used to create page view reports.
+* **Custom Event**: If you inserted calls to `TrackEvent()` to [monitor usage](./api-custom-events-metrics.md), you can search them here.
+* **Exception**: Uncaught [exceptions in the server](./asp-net-exceptions.md), and the exceptions that you log by using `TrackException()`.
+* **Dependency**: [Calls from your server application](./asp-net-dependencies.md) to other services such as REST APIs or databases, and AJAX calls from your [client code](./javascript.md).
+* **Availability**: Results of [availability tests](./monitor-web-app-availability.md).
## Filter on property values
-You can filter events on the values of their properties. The available properties depend on the event types you selected. Click on the filter icon ![Filter icon](./media/diagnostic-search/filter-icon.png) to start.
+You can filter events on the values of their properties. The available properties depend on the event types you selected. Select **Filter** ![Filter icon](./media/diagnostic-search/filter-icon.png) to start.
Choosing no values of a particular property has the same effect as choosing all values. It switches off filtering on that property.
Notice that the counts to the right of the filter values show how many occurrenc
## Find events with the same property
-To find all the items with the same property value, either type it into the search bar or click the checkbox when looking through properties in the filter tab.
+To find all the items with the same property value, either enter it in the **Search** box or select the checkbox when you look through properties on the **Filter** tab.
-![Click the checkbox of a property in the filter tab](./media/diagnostic-search/filter-property.png)
+![Screenshot that shows selecting the checkbox of a property on the Filter tab.](./media/diagnostic-search/filter-property.png)
## Search the data

> [!NOTE]
-> To write more complex queries, open [**Logs (Analytics)**](../logs/log-analytics-tutorial.md) from the top of the Search blade.
+> To write more complex queries, open [Logs (Analytics)](../logs/log-analytics-tutorial.md) at the top of the **Search** pane.
>
-You can search for terms in any of the property values. This is useful if you have written [custom events](./api-custom-events-metrics.md) with property values.
+You can search for terms in any of the property values. This capability is useful if you've written [custom events](./api-custom-events-metrics.md) with property values.
-You might want to set a time range, as searches over a shorter range are faster.
+You might want to set a time range because searches over a shorter range are faster.
-![Open diagnostic search](./media/diagnostic-search/search-property.png)
+![Screenshot that shows opening a diagnostic search.](./media/diagnostic-search/search-property.png)
Search for complete words, not substrings. Use quotation marks to enclose special characters.
Search for complete words, not substrings. Use quotation marks to enclose specia
| HomeController.About |`home`<br/>`controller`<br/>`out` | `homecontroller`<br/>`about`<br/>`"homecontroller.about"`|
|United States|`Uni`<br/>`ted`|`united`<br/>`states`<br/>`united AND states`<br/>`"united states"`|
-Here are the search expressions you can use:
+You can use the following search expressions:
| Sample query | Effect |
| | |
-| `apple` |Find all events in the time range whose fields include the word "apple" |
+| `apple` |Find all events in the time range whose fields include the word "apple". |
| `apple AND banana` <br/>`apple banana` |Find events that contain both words. Use capital "AND", not "and". <br/>Short form. |
| `apple OR banana` |Find events that contain either word. Use "OR", not "or". |
| `apple NOT banana` |Find events that contain one word but not the other. |

## Sampling
-If your app generates a large amount of telemetry (and you are using the ASP.NET SDK version 2.0.0-beta3 or later), the adaptive sampling module automatically reduces the volume that is sent to the portal by sending only a representative fraction of events. However, events that are related to the same request are selected or deselected as a group, so that you can navigate between related events.
+If your app generates a large amount of telemetry, and you're using the ASP.NET SDK version 2.0.0-beta3 or later, the adaptive sampling module automatically reduces the volume that's sent to the portal by sending only a representative fraction of events. Events that are related to the same request are selected or deselected as a group so that you can navigate between related events.
-[Learn about sampling](./sampling.md).
+Learn about [sampling](./sampling.md).
## Create work item

You can create a bug in GitHub or Azure DevOps with the details from any telemetry item.
-Go to the end-to-end transaction detail view by clicking on any telemetry item then select **Create work item**.
-
-![Click New Work Item, edit the fields, and then click OK.](./media/diagnostic-search/work-item.png)
+Go to the end-to-end transaction detail view by selecting any telemetry item. Then select **Create work item**.
-The first time you do this, you are asked to configure a link to your Azure DevOps organization and project.
+![Screenshot that shows Create work item.](./media/diagnostic-search/work-item.png)
-(You can also configure the link on the Work Items tab.)
+The first time you do this step, you're asked to configure a link to your Azure DevOps organization and project. You can also configure the link on the **Work Items** tab.
## Send more telemetry to Application Insights

In addition to the out-of-the-box telemetry sent by Application Insights SDK, you can:

* Capture log traces from your favorite logging framework in [.NET](./asp-net-trace-logs.md) or [Java](./java-in-process-agent.md#autocollected-logs). This means you can search through your log traces and correlate them with page views, exceptions, and other events.
* [Write code](./api-custom-events-metrics.md) to send custom events, page views, and exceptions.
-[Learn how to send logs and custom telemetry to Application Insights](./asp-net-trace-logs.md).
+Learn how to [send logs and custom telemetry to Application Insights](./asp-net-trace-logs.md).
## <a name="questions"></a>Q & A
+Find answers to common questions.
### <a name="limits"></a>How much data is retained?

See the [Limits summary](../service-limits.md#application-insights).
azure-monitor Container Insights Enable Existing Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-existing-clusters.md
To enable monitoring of your AKS cluster in the Azure portal from Azure Monitor,
3. On the **Monitor - containers** page, select **Unmonitored clusters**.
-4. From the list of unmonitored clusters, find the container in the list and click **Enable**.
+4. From the list of unmonitored clusters, find the cluster in the list and click **Enable**.
5. On the **Onboarding to Container insights** page, if you have an existing Log Analytics workspace in the same subscription as the cluster, select it from the drop-down list. The list preselects the default workspace and location that the AKS container is deployed to in the subscription.
azure-monitor Data Collection Rule Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-overview.md
The following resources describe different scenarios for creating data collectio
| Scenario | Resources | Description |
|:|:|:|
| Azure Monitor agent | [Configure data collection for the Azure Monitor agent](../agents/data-collection-rule-azure-monitor-agent.md) | Use the Azure portal to create a data collection rule that specifies events and performance counters to collect from a machine with the Azure Monitor agent and then apply that rule to one or more virtual machines. The Azure Monitor agent will be installed on any machines that don't currently have it. |
-| | [Use Azure Policy to install Azure Monitor agent and associate with DCR](../agents/azure-monitor-agent-manage.md#using-azure-policy) | Use Azure Policy to install the Azure Monitor agent and associate one or more data collection rules with any virtual machines or virtual machine scale sets as they're created in your subscription.
+| | [Use Azure Policy to install Azure Monitor agent and associate with DCR](../agents/azure-monitor-agent-manage.md#use-azure-policy) | Use Azure Policy to install the Azure Monitor agent and associate one or more data collection rules with any virtual machines or virtual machine scale sets as they're created in your subscription.
| Custom logs | [Configure custom logs using the Azure portal](../logs/tutorial-logs-ingestion-portal.md)<br>[Configure custom logs using Resource Manager templates and REST API](../logs/tutorial-logs-ingestion-api.md) | Send custom data using a REST API. The API call connects to a DCE and specifies a DCR to use. The DCR specifies the target table and potentially includes a transformation that filters and modifies the data before it's stored in a Log Analytics workspace. |
| Workspace transformation | [Configure ingestion-time transformations using the Azure portal](../logs/tutorial-workspace-transformations-portal.md)<br>[Configure ingestion-time transformations using Resource Manager templates and REST API](../logs/tutorial-workspace-transformations-api.md) | Create a transformation for any supported table in a Log Analytics workspace. The transformation is defined in a DCR that's then associated with the workspace and applied to any data sent to that table from a legacy workload that doesn't use a DCR. |
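The custom logs row above describes a REST call against a DCE. As a hedged sketch of what that call might look like, here's a minimal Python example; the DCE URL, DCR immutable ID, stream name, and record columns are placeholders for your own values, and the `api-version` shown is the preview version at the time of writing:

```python
import json

import requests
from azure.identity import DefaultAzureCredential

# Acquire a token for the Azure Monitor ingestion scope.
credential = DefaultAzureCredential()
token = credential.get_token("https://monitor.azure.com//.default").token

dce = "https://my-dce.eastus-1.ingest.monitor.azure.com"  # placeholder DCE URL
dcr_immutable_id = "dcr-00000000000000000000000000000000"  # placeholder DCR ID
stream = "Custom-MyTable_CL"                               # placeholder stream name

url = (f"{dce}/dataCollectionRules/{dcr_immutable_id}/streams/{stream}"
       "?api-version=2021-11-01-preview")

# The record shape must match the stream declared in the DCR; these
# columns are illustrative only.
body = [{"TimeGenerated": "2022-08-30T12:00:00Z",
         "Computer": "web01",
         "AdditionalContext": "test record"}]

resp = requests.post(url,
                     headers={"Authorization": f"Bearer {token}",
                              "Content-Type": "application/json"},
                     data=json.dumps(body))
resp.raise_for_status()
```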
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
This latest update adds a new column and reorders the metrics to be alphabetical
||||||||
|AddRegion|Yes|Region Added|Count|Count|Region Added|Region|
|AutoscaleMaxThroughput|No|Autoscale Max Throughput|Count|Maximum|Autoscale Max Throughput|DatabaseName, CollectionName|
-|AvailableStorage|No|(deprecated) Available Storage|Bytes|Total|"Available Storage"will be removed from Azure Monitor at the end of September 2023. Cosmos DB collection storage size is now unlimited. The only restriction is that the storage size for each logical partition key is 20GB. You can enable PartitionKeyStatistics in Diagnostic Log to know the storage consumption for top partition keys. For more info about Cosmos DB storage quota, please check this doc https://docs.microsoft.com/azure/cosmos-db/concepts-limits. After deprecation, the remaining alert rules still defined on the deprecated metric will be automatically disabled post the deprecation date.|CollectionName, DatabaseName, Region|
+|AvailableStorage|No|(deprecated) Available Storage|Bytes|Total|"Available Storage" will be removed from Azure Monitor at the end of September 2023. Cosmos DB collection storage size is now unlimited. The only restriction is that the storage size for each logical partition key is 20 GB. You can enable PartitionKeyStatistics in Diagnostic Log to know the storage consumption for top partition keys. For more info about Cosmos DB storage quota, see [Azure Cosmos DB service quotas](/azure/cosmos-db/concepts-limits). After deprecation, the remaining alert rules still defined on the deprecated metric will be automatically disabled post the deprecation date.|CollectionName, DatabaseName, Region|
|CassandraConnectionClosures|No|Cassandra Connection Closures|Count|Total|Number of Cassandra connections that were closed, reported at a 1 minute granularity|APIType, Region, ClosureReason|
|CassandraConnectorAvgReplicationLatency|No|Cassandra Connector Average ReplicationLatency|MilliSeconds|Average|Cassandra Connector Average ReplicationLatency|No Dimensions|
|CassandraConnectorReplicationHealthStatus|No|Cassandra Connector Replication Health Status|Count|Count|Cassandra Connector Replication Health Status|NotStarted, ReplicationInProgress, Error|
azure-monitor Workspace Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/workspace-design.md
While a single [Log Analytics workspace](log-analytics-workspace-overview.md) ma
## Design strategy

Your design should always start with a single workspace, since this reduces the complexity of managing multiple workspaces and of querying data from them. There are no performance limitations from the amount of data in your workspace, and multiple services and data sources can send data to the same workspace. As you identify criteria to create additional workspaces, your design should use the fewest number that will match your particular requirements.
-Designing a workspace configuration includes evaluation of multiple criteria, some of which may in conflict. For example, you may be able to reduce egress charges by creating a separate workspace in each Azure region, but consolidating into a single workspace might allow you to reduce charges even more with a commitment tier. Evaluate each of the criteria below independently and consider your particular requirements and priorities in determining which design will be most effective for your particular environment.
+Designing a workspace configuration includes evaluation of multiple criteria, some of which may be in conflict. For example, you may be able to reduce egress charges by creating a separate workspace in each Azure region, but consolidating into a single workspace might allow you to reduce charges even more with a commitment tier. Evaluate each of the criteria below independently and consider your particular requirements and priorities in determining which design will be most effective for your particular environment.
## Design criteria
azure-signalr Signalr Howto Reverse Proxy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-reverse-proxy-overview.md
+
+ Title: How to integrate Azure SignalR with reverse proxies
+description: This article provides information about the general practice of integrating Azure SignalR with reverse proxies.
+Last updated: 08/16/2022
+# How to integrate Azure SignalR with reverse proxies
+
+A reverse proxy server can be used in front of Azure SignalR Service. Reverse proxy servers sit between the clients and the Azure SignalR service, and they can help in various scenarios. For example, a reverse proxy can load balance different client requests to different backend services, apply different routing rules for different client requests to give users a seamless experience across backend services, and protect your backend servers from common exploits with centralized protection control. Services such as [Azure Application Gateway](/azure/application-gateway/overview), [Azure API Management](/azure/api-management/api-management-key-concepts) or [Akamai](https://www.akamai.com) can act as reverse proxy servers.
+
+A common architecture using a reverse proxy server with Azure SignalR is as below:
+## General practices
+There are several general practices to follow when using a reverse proxy in front of SignalR Service.
+
+* Make sure to rewrite the incoming HTTP [HOST header](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Host) with the Azure SignalR service URL, e.g. `https://demo.service.signalr.net`. Azure SignalR is a multi-tenant service, and it relies on the `HOST` header to resolve to the correct endpoint. For example, when [configuring Application Gateway](./signalr-howto-work-with-app-gateway.md#create-an-application-gateway-instance) for Azure SignalR, select **Yes** for the option *Override with new host name*.
+
+* When your client goes through your reverse proxy to Azure SignalR, set `ClientEndpoint` as your reverse proxy URL. When your client negotiates with your hub server, the hub server will return the URL defined in `ClientEndpoint` for your client to connect to. For more details, see [client and server endpoints](./concept-connection-string.md#client-and-server-endpoints).
+
+ There are two ways to configure `ClientEndpoint`:
+ * Add a `ClientEndpoint` section to your ConnectionString: `Endpoint=...;AccessKey=...;ClientEndpoint=<reverse-proxy-URL>`
+ * Configure `ClientEndpoint` when calling `AddAzureSignalR`:
+
+ ```cs
+ services.AddSignalR().AddAzureSignalR(o =>
+ {
+ o.Endpoints = new Microsoft.Azure.SignalR.ServiceEndpoint[1]
+ {
+ new Microsoft.Azure.SignalR.ServiceEndpoint("<azure-signalr-connection-string>")
+ {
+ ClientEndpoint = new Uri("<reverse-proxy-URL>")
+ }
+ };
+ });
+ ```
+
+* When a client goes through your reverse proxy to Azure SignalR, there are two types of requests:
+ * An HTTP POST request to `<reverse-proxy-URL>/client/negotiate`, which we call the **negotiate request**
+ * A WebSocket/SSE/LongPolling connection request to `<reverse-proxy-URL>/client`, depending on your transport type, which we call the **connect request**.
+
+ Make sure that your reverse proxy supports both transport types for the `/client` subpath. For example, when your transport type is WebSocket, make sure your reverse proxy supports both HTTP and WebSocket for the `/client` subpath.
+
+ If you have configured multiple SignalR services behind your reverse proxy, make sure the `negotiate` request and the `connect` request with the same `asrs_request_id` query parameter (meaning they belong to the same connection) are routed to the same SignalR service instance.
+
+* When reverse proxy is used, you can further secure your SignalR service by [disabling public network access](./howto-network-access-control.md) and using [private endpoints](howto-private-endpoints.md) to allow only private access from your reverse proxy to your SignalR service through VNet.
+
+## Next steps
+
+- Learn [how to work with Application Gateway](./signalr-howto-work-with-app-gateway.md).
+
+- Learn more about [the internals of Azure SignalR](./signalr-concept-internals.md).
azure-signalr Signalr Howto Work With App Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-work-with-app-gateway.md
+
+ Title: How to use SignalR Service with Azure Application Gateway
+description: This article provides information about using Azure SignalR Service with Azure Application Gateway.
+Last updated: 08/16/2022
+# How to use Azure SignalR Service with Azure Application Gateway
+
+Application Gateway is a web traffic load balancer that enables you to manage traffic to your web applications. Using Application Gateway with SignalR Service enables you to do the following:
+
+* Protect your applications from common web vulnerabilities.
+* Get application-level load-balancing for your scalable and highly available applications.
+* Set up end-to-end security.
+* Customize the domain name.
+
+This article contains two parts:
+* [The first part](#set-up-and-configure-application-gateway) shows how to configure Application Gateway so that the clients can access SignalR through Application Gateway.
+* [The second part](#secure-signalr-service) shows how to secure SignalR Service by adding access control to SignalR Service and only allow traffic from Application Gateway.
+## Set up and configure Application Gateway
+
+### Create a SignalR Service instance
+* Follow [this article](./signalr-quickstart-azure-signalr-service-arm-template.md) to create a SignalR Service instance **_ASRS1_**.
+
+### Create an Application Gateway instance
+Create from the portal an Application Gateway instance **_AG1_**:
+* On the [Azure portal](https://portal.azure.com/), search for **Application Gateway** and **Create**.
+* On the **Basics** tab, use these values for the following application gateway settings:
+ - **Subscription** and **Resource group** and **Region**: the same as what you choose for SignalR Service
+ - **Application gateway name**: **_AG1_**
+ - **Virtual network**, select **Create new**, and in the **Create virtual network** window that opens, enter the following values to create the virtual network and two subnets, one for the application gateway, and another for the backend servers.
+ - **Name**: Enter **_VN1_** for the name of the virtual network.
+ - **Subnets**: Update the **Subnets** grid with the following two subnets:
+
+ | Subnet name | Address range| Note|
+ |--|--|--|
+ | *myAGSubnet* | (address range) | Subnet for the application gateway. The application gateway subnet can contain only application gateways. No other resources are allowed.
+ | *myBackendSubnet* | (another address range) | Subnet for the Azure SignalR instance.
+
+ - Accept the default values for the other settings and then select **Next: Frontends**
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/basics.png" alt-text="Screenshot of creating Application Gateway instance with Basics tab.":::
+
+* On the **Frontends** tab:
+ - **Frontend IP address type**: **Public**.
+ - Select **Add new** for the **Public IP address** and enter *myAGPublicIPAddress* for the public IP address name, and then select **OK**.
+ - Select **Next: Backends**
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-frontends.png" alt-text="Screenshot of creating Application Gateway instance with Frontends tab.":::
+
+* On the **Backends** tab, select **Add a backend pool**:
+ - **Name**: Enter **_signalr_** for the SignalR Service resource backend pool.
+ - Backend targets **Target**: the **host name** of your SignalR Service instance **_ASRS1_**, for example `asrs1.service.signalr.net`
+ - Select **Next: Configuration**
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-backends.png" alt-text="Screenshot of setting up the application gateway backend pool for the SignalR Service.":::
+
+* On the **Configuration** tab, select **Add a routing rule** in the **Routing rules** column:
+ - **Rule name**: **_myRoutingRule_**
+ - **Priority**: 1
+ - On the **Listener** tab within the **Add a routing rule** window, enter the following values for the listener:
+ - **Listener name**: Enter *myListener* for the name of the listener.
+ - **Frontend IP**: Select **Public** to choose the public IP you created for the frontend.
+ - **Protocol**: HTTP
+ * We use the HTTP frontend protocol on Application Gateway in this article to simplify the demo and help you get started. In a production scenario, you'll likely need to enable HTTPS and a custom domain on it.
+ - Accept the default values for the other settings on the **Listener** tab
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-rule-listener.png" alt-text="Screenshot of setting up the application gateway routing rule listener tab for the SignalR Service.":::
+ - On the **Backend targets** tab, use the following values:
+ * **Target type**: Backend pool
+ * **Backend target**: select **signalr** we previously created
+ * **Backend settings**: select **Add new** to add a new setting.
+ * **Backend settings name**: *mySetting*
+ * **Backend protocol**: **HTTPS**
+ * **Use well known CA certificate**: **Yes**
+ * **Override with new host name**: **Yes**
+ * **Host name override**: **Pick host name from backend target**
+ * Others keep the default values
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-setup-backend.png" alt-text="Screenshot of setting up the application gateway backend setting for the SignalR Service.":::
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-rule-backends.png" alt-text="Screenshot of creating backend targets for application gateway.":::
+
+* Review and create the **_AG1_**
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-review.png" alt-text="Screenshot of reviewing and creating the application gateway instance.":::
+
+### Configure Application Gateway health probe
+
+When **_AG1_** is created, go to **Health probes** tab under **Settings** section in the portal, change the health probe path to `/api/health`
+### Quick test
+
+* Try an invalid client request such as https://asrs1.service.signalr.net/client. It returns *400* with the error message *'hub' query parameter is required*, which means the request arrived at the SignalR Service and went through request validation.
+ ```bash
+ curl -v https://asrs1.service.signalr.net/client
+ ```
+ returns
+ ```
+ < HTTP/1.1 400 Bad Request
+ < ...
+ <
+ 'hub' query parameter is required.
+ ```
+* Go to the **Overview** tab of **_AG1_** and find the frontend public IP address.
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/quick-test.png" alt-text="Screenshot of quick testing SignalR Service health endpoint through Application Gateway.":::
+
+* Visit the health endpoint through **_AG1_** `http://<frontend-public-IP-address>/client`, and it also returns *400* with error message *'hub' query parameter is required.* It means the request successfully went through Application Gateway to SignalR Service and did the request validation.
+
+ ```bash
+ curl -I http://<frontend-public-IP-address>/client
+ ```
+ returns
+ ```
+ < HTTP/1.1 400 Bad Request
+ < ...
+ <
+ 'hub' query parameter is required.
+ ```
+
+### Run chat through Application Gateway
+
+Now the traffic can reach SignalR Service through Application Gateway. You can use the Application Gateway public IP address or a custom domain name to access the resource. Let's use [this chat application](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/ChatRoom) as an example, starting by running it locally.
+
+* First let's get the connection string of **_ASRS1_**
+ * On the **Connection strings** tab of **_ASRS1_**
+ * **Client endpoint**: Enter the URL using the frontend public IP address of **_AG1_**, for example `http://20.88.8.8`. This field is a connection string generator for reverse proxy scenarios, and the value isn't preserved the next time you come back to this tab. When you enter a value, the generated connection string appends a `ClientEndpoint` section.
+ * Copy the Connection string
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/connection-string.png" alt-text="Screenshot of getting the connection string for SignalR Service with client endpoint.":::
+
+* Clone the GitHub repo https://github.com/aspnet/AzureSignalR-samples
+* Go to samples/Chatroom folder:
+* Set the copied connection string and run the application locally, you can see that there's a `ClientEndpoint` section in the ConnectionString.
+
+ ```bash
+ cd samples/Chatroom
+ dotnet restore
+ dotnet user-secrets set Azure:SignalR:ConnectionString "<copied-connection-string-with-client-endpoint>"
+ dotnet run
+ ```
+* Open http://localhost:5000 in the browser and use F12 to view the network traces. You can see that the WebSocket connection is established through **_AG1_**.
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/chat-local-run.png" alt-text="Screenshot of running chat application locally with App Gateway and SignalR Service.":::
+
+## Secure SignalR Service
+
+In the previous section, we successfully configured SignalR Service as the backend service of Application Gateway. We can call SignalR Service directly from the public network, or through Application Gateway.
+
+In this section, let's configure SignalR Service to deny all traffic from the public network and accept traffic only from Application Gateway.
+
+### Configure SignalR Service
+
+Let's configure SignalR Service to only allow private access. You can find more details in [use private endpoint for SignalR Service](howto-private-endpoints.md).
+
+* Go to the SignalR Service instance **_ASRS1_** in the portal.
+* Go to the **Networking** tab:
+ * On the **Public access** tab, change **Public network access** to **Disabled** and select **Save**. You're now no longer able to access SignalR Service from the public network.
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/disable-public-access.png" alt-text="Screenshot of disabling public access for SignalR Service.":::
+
+ * On **Private access** tab, select **+ Private endpoint**:
+ * On **Basics** tab:
+ * **Name**: **_PE1_**
+ * **Network Interface Name**: **_PE1-nic_**
+ * **Region**: make sure to choose the same region as your Application Gateway
+ * Select **Next: Resources**
+ * On **Resources** tab
+ * Keep default values
+ * Select **Next: Virtual Network**
+ * On **Virtual Network** tab
+ * **Virtual network**: Select previously created **_VN1_**
+ * **Subnet**: Select previously created **_VN1/myBackendSubnet_**
+ * Others keep the default settings
+ * Select **Next: DNS**
+ * On **DNS** tab
+ * **Integration with private DNS zone**: **Yes**
+ * Review and create the private endpoint
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-setup-private-endpoint.png" alt-text="Screenshot of setting up the private endpoint resource for the SignalR Service.":::
+
+### Refresh Application Gateway backend pool
+Since Application Gateway was set up before there was a private endpoint for it to use, we need to **refresh** the backend pool for it to look at the Private DNS Zone and figure out that it should route the traffic to the private endpoint instead of the public address. We do the **refresh** by setting the backend FQDN to some other value and then changing it back.
+
+Go to the **Backend pools** tab for **_AG1_**, and select **signalr**:
+* Step 1: Change **Target** from `asrs1.service.signalr.net` to some other value, for example `x.service.signalr.net`, and select **Save**.
+* Step 2: Change **Target** back to `asrs1.service.signalr.net`.
+
+### Quick test
+
+* Now let's visit https://asrs1.service.signalr.net/client again. With public access disabled, it returns *403* instead.
+ ```bash
+ curl -v https://asrs1.service.signalr.net/client
+ ```
+ returns
+ ```
+ < HTTP/1.1 403 Forbidden
+ ```
+* Visit the endpoint through **_AG1_** `http://<frontend-public-IP-address>/client`, and it returns *400* with error message *'hub' query parameter is required*. It means the request successfully went through the Application Gateway to SignalR Service.
+
+ ```bash
+ curl -I http://<frontend-public-IP-address>/client
+ ```
+ returns
+ ```
+ < HTTP/1.1 400 Bad Request
+ < ...
+ <
+ 'hub' query parameter is required.
+ ```
+
+Now if you run the Chat application locally again, you'll see the error message `Failed to connect to .... The server returned status code '403' when status code '101' was expected.`. This is because public access is disabled, so localhost server connections are no longer able to connect to the SignalR service.
+
+Let's deploy the Chat application into the same virtual network as **_ASRS1_** so that the chat app can talk to **_ASRS1_**.
+
+### Deploy the chat application to Azure
+* On the [Azure portal](https://portal.azure.com/), search for **App services** and **Create**.
+
+* On the **Basics** tab, use these values for the following App Service settings:
+ - **Subscription** and **Resource group** and **Region**: the same as what you choose for SignalR Service
+ - **Name**: **_WA1_**
+ * **Publish**: **Code**
+ * **Runtime stack**: **.NET 6 (LTS)**
+ * **Operating System**: **Linux**
+ * **Region**: Make sure it's the same as what you choose for SignalR Service
+ * Select **Next: Docker**
+* On the **Networking** tab
+ * **Enable network injection**: select **On**
+ * **Virtual Network**: select **_VN1_** we previously created
+ * **Enable VNet integration**: **On**
+ * **Outbound subnet**: create a new subnet
+ * Select **Review + create**
+
+Now let's deploy our chat application to Azure. Below, we use the Azure CLI to deploy the web app; you can also choose other deployment environments by following the [publish your web app section](/azure/app-service/quickstart-dotnetcore#publish-your-web-app).
+
+Under folder samples/Chatroom, run the below commands:
+
+```bash
+# Build and publish the assemblies to publish folder
+dotnet publish --os linux -o publish
+# zip the publish folder as app.zip
+cd publish
+zip -r app.zip .
+# use az CLI to deploy app.zip to our webapp
+az login
+az account set -s <your-subscription-name-used-to-create-WA1>
+az webapp deployment source config-zip -n WA1 -g <resource-group-of-WA1> --src app.zip
+```
+
+Now that the web app is deployed, let's go to the portal for **_WA1_** and make the following updates:
+* On the **Configuration** tab:
+ * New application settings:
+
+ | Name | Value |
+ | --| |
+ |**WEBSITE_DNS_SERVER**| **168.63.129.16** |
+ |**WEBSITE_VNET_ROUTE_ALL**| **1**|
+
+ * New connection string:
+
+ | Name | Value | Type|
+ | --| ||
+ |**Azure__SignalR__ConnectionString**| The copied connection string with ClientEndpoint value| select **Custom**|
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/web-app-settings.png" alt-text="Screenshot of configuring web app connection string.":::
+
+* On the **TLS/SSL settings** tab:
+ * **HTTPS Only**: **Off**. To simplify the demo, we used the HTTP frontend protocol on Application Gateway. Therefore, we need to turn off this option to avoid changing the HTTP URL to HTTPS automatically.
+
+* Go to the **Overview** tab and get the URL of **_WA1_**.
+* In the URL, replace the scheme https with http, for example http://wa1.azurewebsites.net, and open it in the browser. Now you can start chatting! Use F12 to open network traces, and you can see that the SignalR connection is established through **_AG1_**.
+ > [!NOTE]
+ >
+ > Sometimes you need to disable browser's auto https redirection and browser cache to prevent the URL from redirecting to HTTPS automatically.
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/web-app-run.png" alt-text="Screenshot of running chat application in Azure with App Gateway and SignalR Service.":::
+
+## Next steps
+
+Now, you have successfully built a real-time chat application with SignalR Service and used Application Gateway to protect your applications and set up end-to-end security. [Learn more about SignalR Service](./signalr-overview.md).
azure-video-indexer Switch Tenants Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/switch-tenants-portal.md
+
+ Title: Switch between tenants on the Azure Video Indexer website
+description: This article shows how to switch between tenants in the Azure Video Indexer website.
+Last updated: 08/26/2022
+# Switch between multiple tenants
+
+This article shows how to switch between multiple tenants on the Azure Video Indexer website. When you create an Azure Resource Manager (ARM)-based account, the new account may not show up on the Azure Video Indexer website. So you need to make sure to sign in with the correct domain.
+
+The article shows how to sign in with the correct domain name into the Azure Video Indexer website:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) with the same subscription where your Video Indexer ARM account was created.
+1. Get the domain name of the current Azure subscription tenant.
+1. Sign in with the correct domain name on the [Azure Video Indexer](https://www.videoindexer.ai/) website.
+
+## Get the domain name from the Azure portal
+
+1. In the [Azure portal](https://portal.azure.com/), sign in with the same subscription tenant in which your Azure Video Indexer Azure Resource Manager (ARM) account was created.
+1. Hover over your account name (in the right-top corner).
+
+ > [!div class="mx-imgBorder"]
+ > ![Hover over your account name.](./media/switch-directory/account-attributes.png)
+1. Get the domain name of the current Azure subscription; you'll need it for the last step of the following section.
+
+If you want to see domains for all of your directories and switch between them, see [Switch and manage directories with the Azure portal](../azure-portal/set-preferences.md#switch-and-manage-directories).
+
+## Sign in with the correct domain name on the AVI website
+
+1. Go to the [Azure Video Indexer](https://www.videoindexer.ai/) website.
+1. Select the account button in the top-right corner, and then press **Sign out**.
+1. On the AVI website, press **Sign in** and choose the Azure AD account.
+
+ > [!div class="mx-imgBorder"]
+ > ![Sign in with the AAD account.](./media/switch-directory/choose-account.png)
+1. Press **Use another account**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Choose another account.](./media/switch-directory/use-another-account.png)
+1. Choose **Sign-in with other options**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Sign in with other options.](./media/switch-directory/sign-in-options.png)
+1. Press **Sign in to an organization**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Sign in to an organization.](./media/switch-directory/sign-in-organization.png)
+1. Enter the domain name you copied in the [Get the domain name from the Azure portal](#get-the-domain-name-from-the-azure-portal) section.
+
+ > [!div class="mx-imgBorder"]
+ > ![Find the organization.](./media/switch-directory/find-your-organization.png)
+
+## Next steps
+
+[FAQ](faq.yml)
azure-vmware Move Azure Vmware Solution Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/move-azure-vmware-solution-across-regions.md
Azure VMware Solution supports all backup solutions. You'll need CloudAdmin priv
- VM workload backup using [Veritas NetBackup solution](https://vrt.as/nb4avs). >[!TIP]
->You can use [Azure Resource Mover](../resource-mover/move-region-within-resource-group.md?toc=%2fazure%2fazure-resource-manager%2fmanagement%2ftoc.json) to verify and migrate the list of supported resources to move across regions, which are dependent on Azure VMware Solution.
+>You can use [Azure Resource Mover](../resource-mover/move-region-within-resource-group.md?toc=/azure/azure-resource-manager/management/toc.json) to verify and migrate the list of supported resources to move across regions, which are dependent on Azure VMware Solution.
### Locate the source ExpressRoute circuit ID
azure-vmware Tutorial Configure Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-configure-networking.md
Before selecting an existing vNet, there are specific requirements that must be
1. In the same region as Azure VMware Solution private cloud. 1. In the same resource group as Azure VMware Solution private cloud. 1. vNet must contain an address space that doesn't overlap with Azure VMware Solution.
-1. Validate solution design is within Azure VMware Solution limits (https://docs.microsoft.com/azure/azure-resource-manager/management/azure-subscription-service-limits).
+1. Validate that the solution design is within [Azure VMware Solution limits](/azure/azure-resource-manager/management/azure-subscription-service-limits).
### Select an existing vNet
cloud-services Cloud Services Nodejs Develop Deploy App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-nodejs-develop-deploy-app.md
For more information, see the [Node.js Developer Center].
[Azure SDK for .NET 3.0]: https://www.microsoft.com/download/details.aspx?id=54917
[Connect PowerShell]: /powershell/azure/
[nodejs.org]: https://nodejs.org/
-[Overview of Creating a Hosted Service for Azure]: https://azure.microsoft.com/documentation/services/cloud-services/
+[Overview of Creating a Hosted Service for Azure]: /azure/cloud-services/
[Node.js Developer Center]: https://azure.microsoft.com/develop/nodejs/

<!-- IMG List -->
For more information, see the [Node.js Developer Center].
[A browser window displaying the hello world page; the URL indicates the page is hosted on Azure.]: ./media/cloud-services-nodejs-develop-deploy-app/node21.png
[The status of the Stop-AzureService command]: ./media/cloud-services-nodejs-develop-deploy-app/node48.png
[The status of the Remove-AzureService command]: ./media/cloud-services-nodejs-develop-deploy-app/node49.png
cognitive-services Multivariate Anomaly Detection Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/tutorials/multivariate-anomaly-detection-synapse.md
If you have the need to run training code and inference code in separate noteboo
* Learn about [what is Multivariate Anomaly Detector](../overview-multivariate.md). * SynapseML documentation with [Multivariate Anomaly Detector feature](https://microsoft.github.io/SynapseML/docs/documentation/estimators/estimators_cognitive/#fitmultivariateanomaly).
-* Recipe: [Cognitive Services - Multivariate Anomaly Detector](https://microsoft.github.io/SynapseML/docs/next/features/cognitive_services/CognitiveServices).
+* Recipe: [Cognitive Services - Multivariate Anomaly Detector](https://microsoft.github.io/SynapseML/docs/features/cognitive_services/CognitiveServices%20-%20Multivariate%20Anomaly%20Detection/).
* Need support? [Join the Anomaly Detector Community](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR2Ci-wb6-iNDoBoNxrnEk9VURjNXUU1VREpOT0U1UEdURkc0OVRLSkZBNC4u). ### About Synapse
If you have the need to run training code and inference code in separate noteboo
* Quick start: [Configure prerequisites for using Cognitive Services in Azure Synapse Analytics](/azure/synapse-analytics/machine-learning/tutorial-configure-cognitive-services-synapse#create-a-key-vault-and-configure-secrets-and-access). * Visit [SynpaseML new website](https://microsoft.github.io/SynapseML/) for the latest docs, demos, and examples. * Learn more about [Synapse Analytics](/azure/synapse-analytics/).
-* Read about the [SynapseML v0.9.5 release](https://github.com/microsoft/SynapseML/releases/tag/v0.9.5) on GitHub.
+* Read about the [SynapseML v0.9.5 release](https://github.com/microsoft/SynapseML/releases/tag/v0.9.5) on GitHub.
cognitive-services Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/quickstarts-sdk/client-library.md
Last updated 06/13/2022
ms.devlang: csharp, golang, java, javascript, python
-zone_pivot_groups: programming-languages-computer-vision
+zone_pivot_groups: programming-languages-ocr
keywords: computer vision, computer vision service
Get started with the Computer Vision Read REST API or client libraries. The Read
::: zone-end

::: zone pivot="programming-language-javascript"

[!INCLUDE [NodeJS SDK quickstart](../includes/quickstarts-sdk/node-sdk.md)]
cognitive-services How To Select Audio Input Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-select-audio-input-devices.md
void enumerateDeviceIds()
promise.Completed( [](winrt::Windows::Foundation::IAsyncOperation<DeviceInformationCollection> const& sender,
- winrt::Windows::Foundation::AsyncStatus /* asyncStatus */ ) {
+ winrt::Windows::Foundation::AsyncStatus /* asyncStatus */) {
auto info = sender.GetResults(); auto num_devices = info.Size();
cognitive-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md
Title: Azure OpenAI Models
+ Title: Azure OpenAI models
-description: Learn about the different AI models that are available.
+description: Learn about the different models that are available in Azure OpenAI.
Last updated 06/24/2022
recommendations: false
keywords:
-# Azure OpenAI Models
+# Azure OpenAI models
-The service provides access to many different models. Models describe a family of models and are broken out as follows:
+The service provides access to many different models, grouped by family and capability. A model family typically associates models by their intended task. The following table describes model families currently available in Azure OpenAI.
-|Modes | Description|
+| Model family | Description |
|--|--|
-| GPT-3 series | A set of GPT-3 models that can understand and generate natural language |
-| Codex Series | A set of models that can understand and generate code, including translating natural language to code |
-| Embeddings Series | An embedding is a special format of data representation that can be easily utilized by machine learning models and algorithms. The embedding is an information dense representation of the semantic meaning of a piece of text. Currently we offer three families of embedding models for different functionalities: text search, text similarity and code search |
+| [GPT-3](#gpt-3-models) | A series of models that can understand and generate natural language. |
+| [Codex](#codex-models) | A series of models that can understand and generate code, including translating natural language to code. |
+| [Embeddings](#embeddings-models) | A set of models that can understand and use embeddings. An embedding is a special format of data representation that can be easily utilized by machine learning models and algorithms. The embedding is an information dense representation of the semantic meaning of a piece of text. Currently, we offer three families of Embeddings models for different functionalities: similarity, text search, and code search. |
+
+## Model capabilities
+
+Each model family has a series of models that are further distinguished by capability. These capabilities are typically identified by names, and the alphabetical order of these names generally signifies the relative capability and cost of that model within a given model family. For example, GPT-3 models use names such as Ada, Babbage, Curie, and Davinci to indicate relative capability and cost. Davinci is more capable (at a higher cost) than Curie, which in turn is more capable (at a higher cost) than Babbage, and so on.
+
+> [!NOTE]
+> Any task that can be performed by a less capable model like Ada can be performed by a more capable model like Curie or Davinci.
## Naming convention
-Azure OpenAI's models follow a standard naming convention: `{task}-{model name}-{version #}`. For example, our most powerful natural language model is called `text-davinci-001` and a Codex series model would look like `code-cushman-001`.
+Azure OpenAI's model names typically correspond to the following standard naming convention:
+
+`{family}-{capability}[-{input-type}]-{identifier}`
+
+| Element | Description |
+| | |
+| `{family}` | The model family. For example, [GPT-3 models](#gpt-3-models) use `text`, while [Codex models](#codex-models) use `code`.|
+| `{capability}` | The relative capability of the model. For example, GPT-3 models include `ada`, `babbage`, `curie`, and `davinci`.|
+| `{input-type}` | ([Embeddings models](#embeddings-models) only) The input type of the embedding supported by the model. For example, text search embedding models support `doc` and `query`.|
+| `{identifier}` | The version identifier of the model. |
-> Older versions of the GPT-3 models are available as `ada`, `babbage`, `curie`, `davinci` and do not follow these conventions. These models are primarily intended to be used for fine-tuning and search.
+For example, our most powerful GPT-3 model is called `text-davinci-002`, while our most powerful Codex model is called `code-davinci-002`.
+
+> Older versions of the GPT-3 models are available, named `ada`, `babbage`, `curie`, and `davinci`. These older models do not follow the standard naming conventions, and they are primarily intended for fine tuning. For more information, see [Learn how to customize a model for your application](../how-to/fine-tuning.md).
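To make the convention concrete, the following is a small illustrative sketch, not part of any SDK: the `model_name` helper is hypothetical, and treating `text-search` as the family element for text search Embeddings models is an assumption based on the published model names.

```python
# Hypothetical helper (not part of any SDK): compose a model name from the
# elements of the naming convention described above.
def model_name(family, capability, identifier, input_type=None):
    parts = [family, capability]
    if input_type is not None:  # Embeddings models only
        parts.append(input_type)
    parts.append(identifier)
    return "-".join(parts)

# GPT-3 models use "text" as the family; Codex models use "code".
print(model_name("text", "davinci", "002"))                       # text-davinci-002
print(model_name("code", "cushman", "001"))                       # code-cushman-001
# Assumption: "text-search" stands in for the family element of the
# text search Embeddings models, which also carry an input type.
print(model_name("text-search", "ada", "001", input_type="doc"))  # text-search-ada-doc-001
```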
## Finding what models are available

You can easily see the models you have available for both inference and fine-tuning in your resource by using the [Models API](../reference.md#models).
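As a rough sketch of calling that API with Python's `requests` (the resource name and key are placeholders, and the `api-version` value is an assumption; check the API reference for currently supported versions):

```python
import requests

# Placeholders: substitute your resource name and key. The api-version shown
# is an assumption; check the API reference for currently supported versions.
resource = "YOUR-RESOURCE-NAME"
api_key = "YOUR-API-KEY"

response = requests.get(
    f"https://{resource}.openai.azure.com/openai/models",
    headers={"api-key": api_key},
    params={"api-version": "2022-06-01-preview"},
)
response.raise_for_status()

# Assuming an OpenAI-style list response, print each model's ID.
for model in response.json().get("data", []):
    print(model["id"])
```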
+## Finding the right model
+
+We recommend starting with the most capable model in a model family because it's the best way to understand what the service is capable of. After you have an idea of what you want to accomplish, you can either stay with that model or move to a model with lower capability and cost, optimizing around that model's capabilities.
-## GPT-3 Series
+## GPT-3 models
-The GPT-3 models can understand and generate natural language. The service offers four model types with different levels of power suitable for different tasks. Davinci is the most capable model, and Ada is the fastest. Going forward these models are named with the following convention: `text-{model name}-XXX` where `XXX` refers to a numerical value for different versions of the model. Currently the latest versions are:
+The GPT-3 models can understand and generate natural language. The service offers four model capabilities, each with different levels of power and speed suitable for different tasks. Davinci is the most capable model, while Ada is the fastest. The following list represents the latest versions of GPT-3 models, ordered by increasing capability.
-- text-ada-001
-- text-babbage-001
-- text-curie-001
-- text-davinci-001
+- `text-ada-001`
+- `text-babbage-001`
+- `text-curie-001`
+- `text-davinci-002`
-While Davinci is the most capable, the other models provide significant speed advantages. Our recommendation is for users to start with Davinci while experimenting since it will produce the best results and validate the value our service can provide. Once you have a prototype working, you can then optimize your model choice with the best latency - performance tradeoff for your application.
+While Davinci is the most capable, the other models provide significant speed advantages. Our recommendation is for users to start with Davinci while experimenting, because it will produce the best results and validate the value our service can provide. Once you have a prototype working, you can then optimize your model choice with the best latency/performance balance for your application.
-### Davinci
+### <a id="gpt-3-davinci"></a>Davinci
-Davinci is the most capable model and can perform any task the other models can perform and often with less instruction. For applications requiring deep understanding of the content, like summarization for a specific audience and creative content generation, Davinci is going to produce the best results. These increased capabilities require more compute resources, so Davinci costs more and isn't as fast as the other models.
+Davinci is the most capable model and can perform any task the other models can perform, often with less instruction. For applications requiring deep understanding of the content, like summarization for a specific audience and creative content generation, Davinci produces the best results. The increased capabilities provided by Davinci require more compute resources, so Davinci costs more and isn't as fast as other models.
Another area where Davinci excels is in understanding the intent of text. Davinci is excellent at solving many kinds of logic problems and explaining the motives of characters. Davinci has been able to solve some of the most challenging AI problems involving cause and effect.
Curie is powerful, yet fast. While Davinci is stronger when it comes to analyzin
### Babbage
-Babbage can perform straightforward tasks like simple classification. It's also capable when it comes to semantic search ranking how well documents match up with search queries.
+Babbage can perform straightforward tasks like simple classification. It's also capable when it comes to semantic search, ranking how well documents match up with search queries.
**Use for**: Moderate classification, semantic search classification
Babbage can perform straightforward tasks like simple classification. It's als
Ada is usually the fastest model and can perform tasks like parsing text, address correction and certain kinds of classification tasks that don't require too much nuance. Ada's performance can often be improved by providing more context.
-**Use For** Parsing text, simple classification, address correction, keywords
-
-> [!NOTE]
-> Any task performed by a faster model like Ada can be performed by a more powerful model like Curie or Davinci.
+**Use for**: Parsing text, simple classification, address correction, keywords
-## Codex Series
+## Codex models
The Codex models are descendants of our base GPT-3 models that can understand and generate code. Their training data contains both natural language and billions of lines of public code from GitHub.
-They're most capable in Python and proficient in over a dozen languages, including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.
+They're most capable in Python and proficient in over a dozen languages, including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell. The following list represents the latest versions of Codex models, ordered by increasing capability.
-Currently we only offer one Codex model: `code-cushman-001`.
+- `code-cushman-001`
+- `code-davinci-002`
-## Embeddings Models
+### <a id="codex-davinci"></a>Davinci
-Currently we offer three families of embedding models for different functionalities: text search, text similarity and code search. Each family includes up to four models across a spectrum of capabilities:
+Similar to GPT-3, Davinci is the most capable Codex model and can perform any task the other models can perform, often with less instruction. For applications requiring deep understanding of the content, Davinci produces the best results. These increased capabilities require more compute resources, so Davinci costs more and isn't as fast as other models.
-Ada (1024 dimensions),
-Babbage (2048 dimensions),
-Curie (4096 dimensions),
-Davinci (12,288 dimensions).
-Davinci is the most capable, but is slower and more expensive than the other models. Ada is the least capable, but is both faster and cheaper.
+### Cushman
-These embedding models are specifically created to be good at a particular task.
+Cushman is powerful, yet fast. While Davinci is stronger when it comes to analyzing complicated tasks, Cushman is a capable model for many code generation tasks. Cushman typically runs faster and cheaper than Davinci, as well.
-### Similarity embeddings
+## Embeddings models
-These models are good at capturing semantic similarity between two or more pieces of text.
+Currently, we offer three families of Embeddings models for different functionalities:
-| USE CASES | AVAILABLE MODELS |
-|||
-| Clustering, regression, anomaly detection, visualization |Text-similarity-ada-001, <br> text-similarity-babbage-001, <br> text-similarity-curie-001, <br> text-similarity-davinci-001 <br>|
+- [Similarity](#similarity-embedding)
+- [Text search](#text-search-embedding)
+- [Code search](#code-search-embedding)
-### Text search embeddings
+Each family includes models across a range of capability. The following list indicates the length of the numerical vector returned by the service, based on model capability:
+
+- Ada: 1024 dimensions
+- Babbage: 2048 dimensions
+- Curie: 4096 dimensions
+- Davinci: 12288 dimensions
+
+Davinci is the most capable, but is slower and more expensive than the other models. Ada is the least capable, but is both faster and cheaper.
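Whichever capability you pick, the returned embedding is just a numerical vector, and cosine similarity is the usual way to compare two of them. A minimal sketch with `numpy`, using made-up vectors in place of real service responses:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity: dot product divided by the product of the norms.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-ins for embedding vectors; a real response from, say,
# text-similarity-ada-001 would have 1024 dimensions.
doc_vector = np.array([0.1, 0.3, -0.2, 0.7])
query_vector = np.array([0.2, 0.1, -0.1, 0.6])

print(f"similarity: {cosine_similarity(doc_vector, query_vector):.3f}")
```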
-These models help measure whether long documents are relevant to a short search query. There are two types: one for embedding the documents to be retrieved, and one for embedding the search query.
+### Similarity embedding
-| USE CASES | AVAILABLE MODELS |
+These models are good at capturing semantic similarity between two or more pieces of text.
+
+| Use cases | Models |
|||
-| Search, context relevance, information retrieval | text-search-ada-doc-001, <br> text-search-ada-query-001 <br> text-search-babbage-doc-001, <br> text-search-babbage-query-001, <br> text-search-curie-doc-001, <br> text-search-curie-query-001, <br> text-search-davinci-doc-001, <br> text-search-davinci-query-001 <br> |
+| Clustering, regression, anomaly detection, visualization | `text-similarity-ada-001` <br> `text-similarity-babbage-001` <br> `text-similarity-curie-001` <br> `text-similarity-davinci-001` <br>|
-### Code search embeddings
+### Text search embedding
-Similar to text search embeddings, there are two types: one for embedding code snippets to be retrieved and one for embedding natural language search queries.
+These models help measure whether long documents are relevant to a short search query. There are two input types supported by this family: `doc`, for embedding the documents to be retrieved, and `query`, for embedding the search query.
-| USE CASES | AVAILABLE MODELS |
+| Use cases | Models |
|||
-| Code search and relevance | code-search-ada-code-001, <br> code-search-ada-text-001, <br> code-search-babbage-code-001, <br> code-search-babbage-text-001 |
+| Search, context relevance, information retrieval | `text-search-ada-doc-001` <br> `text-search-ada-query-001` <br> `text-search-babbage-doc-001` <br> `text-search-babbage-query-001` <br> `text-search-curie-doc-001` <br> `text-search-curie-query-001` <br> `text-search-davinci-doc-001` <br> `text-search-davinci-query-001` <br> |
-When using our embedding models, keep in mind their limitations and risks.
+### Code search embedding
-## Finding the right model
+Similar to text search embedding models, there are two input types supported by this family: `code`, for embedding code snippets to be retrieved, and `text`, for embedding natural language search queries.
+
+| Use cases | Models |
+|||
+| Code search and relevance | `code-search-ada-code-001` <br> `code-search-ada-text-001` <br> `code-search-babbage-code-001` <br> `code-search-babbage-text-001` |
-We recommend starting with our Davinci model since it will be the best way to understand what the service is capable of. After you have an idea of what you want to accomplish, you can either stay with Davinci if youΓÇÖre not concerned about cost and speed, or you can move onto Curie or another model and try to optimize around its capabilities.
+When using our Embeddings models, keep in mind their limitations and risks.
## Next steps
cognitive-services Completions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/completions.md
write a tagline for an ice cream shop
we serve up smiles with every scoop!
```
-The actual completion results you see may differ because the API is stochastic by default which means that you might get a slightly different completion every time you call it, even if your prompt stays the same. You can control this behavior with the temperature setting.
+The actual completion results you see may differ because the API is stochastic by default. In other words, you might get a slightly different completion every time you call it, even if your prompt stays the same. You can control this behavior with the temperature setting.
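As a hedged sketch of what that looks like with the `openai` Python package's Azure support (the endpoint, key, deployment name, and api-version are placeholders to replace with your own values):

```python
import openai

# Placeholder Azure OpenAI settings; substitute your own resource values.
openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE-NAME.openai.azure.com/"
openai.api_version = "2022-06-01-preview"  # assumption; check the reference for current versions
openai.api_key = "YOUR-API-KEY"

# temperature=0 keeps the completion close to deterministic; higher values
# make repeated calls with the same prompt diverge more.
response = openai.Completion.create(
    engine="YOUR-DEPLOYMENT-NAME",  # your deployment of, say, text-davinci-002
    prompt="Write a tagline for an ice cream shop.",
    temperature=0,
    max_tokens=32,
)
print(response["choices"][0]["text"])
```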
-This simple text-in, text-out interface means you can "program" the model by providing instructions or just a few examples of what you'd like it to do. Its success generally depends on the complexity of the task and quality of your prompt. A general rule is to think about how you would write a word problem for a middle school student to solve. A well-written prompt provides enough information for the model to know what you want and how it should respond.
+This simple, "text in, text out" interface means you can "program" the model by providing instructions or just a few examples of what you'd like it to do. Its success generally depends on the complexity of the task and quality of your prompt. A general rule is to think about how you would write a word problem for a middle school student to solve. A well-written prompt provides enough information for the model to know what you want and how it should respond.
> [!NOTE]
> Keep in mind that the models' training data cuts off in October 2019, so they may not have knowledge of current events. We plan to add more continuous training in the future.
This simple text-in, text-out interface means you can "program" the model by pro
OpenAI's models can do everything from generating original stories to performing complex text analysis. Because they can do so many things, you have to be explicit in showing what you want. Showing, not just telling, is often the secret to a good prompt.
-The models try to predict what you want from the prompt. If you send the words "Give me a list of cat breeds," the model wouldn't automatically assume that you're asking for a list of cat breeds. You could just as easily be asking the model to continue a conversation where the first words are "Give me a list of cat breeds" and the next ones are "and I'll tell you which ones I like." If the model only assumed that you wanted a list of cats, it wouldn't be as good at content creation, classification, or other tasks.
+The models try to predict what you want from the prompt. If you send the words "Give me a list of cat breeds," the model wouldn't automatically assume that you're asking for a list of cat breeds. You could as easily be asking the model to continue a conversation where the first words are "Give me a list of cat breeds" and the next ones are "and I'll tell you which ones I like." If the model only assumed that you wanted a list of cats, it wouldn't be as good at content creation, classification, or other tasks.
There are three basic guidelines to creating prompts:

**Show and tell.** Make it clear what you want either through instructions, examples, or a combination of the two. If you want the model to rank a list of items in alphabetical order or to classify a paragraph by sentiment, show it that's what you want.
-**Provide quality data.** If you're trying to build a classifier or get the model to follow a pattern, make sure that there are enough examples. Be sure to proofread your examples; the model is usually smart enough to see through basic spelling mistakes and give you a response, but it also might assume this is intentional and it can affect the response.
+**Provide quality data.** If you're trying to build a classifier or get the model to follow a pattern, make sure that there are enough examples. Be sure to proofread your examples; the model is usually smart enough to see through basic spelling mistakes and give you a response, but it also might assume that the mistakes are intentional and it can affect the response.
-**Check your settings.** The temperature and top_p settings control how deterministic the model is in generating a response. If you're asking it for a response where there's only one right answer, then you'd want to set these lower. If you're looking for a response that's not obvious, then you might want to set them higher. The number one mistake people use with these settings is assuming that they're "cleverness" or "creativity" controls.
+**Check your settings.** The temperature and top_p settings control how deterministic the model is in generating a response. If you're asking it for a response where there's only one right answer, then you'd want to set these settings to lower values. If you're looking for a response that's not obvious, then you might want to set them to higher values. The number one mistake people make with these settings is assuming that they're "cleverness" or "creativity" controls.
### Troubleshooting
While all prompts result in completions, it can be helpful to think of text comp
Vertical farming provides a novel solution for producing food locally, reducing transportation costs and
```
-This next prompt shows how you can use completion to help write React components. We send some code to the API, and it's able to continue the rest because it has an understanding of the React library. We recommend using OpenAI's Codex models for tasks that involve understanding or generating code. Currently only `code-cushman-001` is supported.
+This next prompt shows how you can use completion to help write React components. We send some code to the API, and it's able to continue the rest because it has an understanding of the React library. We recommend using models from our Codex series for tasks that involve understanding or generating code. Currently, we support two Codex models: `code-davinci-002` and `code-cushman-001`. For more information about Codex models, see the [Codex models](../concepts/models.md#codex-models) section in [Models](../concepts/models.md).
```
import React from 'react';
Q:
## Working with code
-The Codex model series is a descendant of OpenAI's base GPT-3 series that's been trained on both natural language and billions of lines of code. It's most capable in Python and proficient in over a dozen languages including JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.
+The Codex model series is a descendant of OpenAI's base GPT-3 series that's been trained on both natural language and billions of lines of code. It's most capable in Python and proficient in over a dozen languages including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.
You can use Codex for a variety of tasks including:
Create an array of users and email addresses
""" ```
-**Put comments inside of functions can be helpful.** Recommended coding standards usually suggest placing the description of a function inside the function. Using this format helps Codex more clearly understand what you want the function to do.
+**Putting comments inside of functions can be helpful.** Recommended coding standards suggest placing the description of a function inside the function. Using this format helps Codex more clearly understand what you want the function to do.
```
def getUserBalance(id):
Create a list of random animals and species
animals = [ {"name": "Chomper", "species": "Hamster"}, {"name":
```
-**Lower temperatures give more precise results.** Setting the API temperature to 0, or close to zero (such as 0.1 or 0.2) tends to give better results in most cases. Unlike GPT-3, where a higher temperature can provide useful creative and random results, higher temperatures with Codex may give you really random or erratic responses.
+**Lower temperatures give more precise results.** Setting the API temperature to 0, or close to zero (such as 0.1 or 0.2) tends to give better results in most cases. Unlike GPT-3 models, where a higher temperature can provide useful creative and random results, higher temperatures with Codex models may give you really random or erratic responses.
In cases where you need Codex to provide different potential results, start at zero and then increment upwards by 0.1 until you find suitable variation.
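For instance, a sketch of that sweep, assuming the same `openai` client configuration as the earlier temperature example and a placeholder Codex deployment name:

```python
# Assumes the openai package is already configured for your Azure resource
# (api_type, api_base, api_version, api_key), as in the earlier completion example.
import openai

prompt = "# Python 3\n# Return a list of the first n square numbers\ndef squares(n):"

# Start at 0 and step up by 0.1 until the variation between results is useful.
for temperature in (0.0, 0.1, 0.2, 0.3):
    response = openai.Completion.create(
        engine="YOUR-CODEX-DEPLOYMENT",  # placeholder deployment of a Codex model
        prompt=prompt,
        temperature=temperature,
        max_tokens=64,
        stop=["#"],  # stop before the model starts a new comment block
    )
    print(f"--- temperature={temperature} ---")
    print(response["choices"][0]["text"])
```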
Use the lists to generate stories about what I saw at the zoo in each city
*/
```
-**Use Codex to explain code.** Codex's ability to create and understand code allows us to use it to perform tasks like explaining what the code in a file does. One way to accomplish this is by putting a comment after a function that starts with "This function" or "This application is." Codex will usually interpret this as the start of an explanation and complete the rest of the text.
+**Use Codex to explain code.** Codex's ability to create and understand code allows us to use it to perform tasks like explaining what the code in a file does. One way to accomplish this is by putting a comment after a function that starts with "This function" or "This application is." Codex typically interprets this comment as the start of an explanation and completes the rest of the text.
```
/* Explain what the previous function is doing: It
cognitive-services Fine Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/fine-tuning.md
The Azure OpenAI Service lets you tailor our models to your personal datasets us
## Prerequisites

-- An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
-- Access granted to service in the desired Azure subscription. This service is currently invite only. You can fill out a new use case request here: <https://aka.ms/oai/access>. Please open an issue on this repo to contact us if you have an issue
+- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>
+- Access granted to the Azure OpenAI service in the desired Azure subscription
+
+ Currently, access to this service is granted only by application. You can apply for access to the Azure OpenAI service by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue.
- The following Python libraries: os, requests, json
-- An Azure OpenAI Service resource with a model deployed. If you don't have a resource/model the process is documented in our [resource deployment guide](../how-to/create-resource.md)
+- An Azure OpenAI Service resource with a model deployed
+
+ If you don't have a resource/model, the process is documented in our [resource deployment guide](../how-to/create-resource.md)
## Fine-tuning workflow
cognitive-services Integrate Synapseml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/integrate-synapseml.md
The Azure OpenAI service can be used to solve a large number of natural language
## Prerequisites

-- An Azure OpenAI resource - request access [here](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUOFA5Qk1UWDRBMjg0WFhPMkIzTzhKQ1dWNyQlQCN0PWcu) before [creating a resource](create-resource.md?pivots=web-portal#create-a-resource)
+- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>
+- Access granted to the Azure OpenAI service in the desired Azure subscription
+
+ Currently, access to this service is granted only by application. You can apply for access to the Azure OpenAI service by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue.
+- An Azure OpenAI resource - [create a resource](create-resource.md?pivots=web-portal#create-a-resource)
- An Apache Spark cluster with SynapseML installed - create a serverless Apache Spark pool [here](../../../synapse-analytics/get-started-analyze-spark.md#create-a-serverless-apache-spark-pool)

We recommend [creating a Synapse workspace](../../../synapse-analytics/get-started-create-workspace.md), but an Azure Databricks, HDInsight, or Spark on Kubernetes cluster, or even a Python environment with the `pyspark` package, will also work.
display(completed_autobatch_df)
### Prompt engineering for translation
-The Azure OpenAI service can solve many different natural language tasks through [prompt engineering](completions.md). Here we show an example of prompting for language translation:
+The Azure OpenAI service can solve many different natural language tasks through [prompt engineering](completions.md). Here, we show an example of prompting for language translation:
```python
translate_df = spark.createDataFrame(
display(completion.transform(translate_df))
### Prompt for question answering
-Here, we prompt GPT-3 for general-knowledge question answering:
+Here, we prompt the GPT-3 model for general-knowledge question answering:
```python
qa_df = spark.createDataFrame(
cognitive-services Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/managed-identity.md
In the following sections, you'll use the Azure CLI to assign roles, and obtain
## Prerequisites

-- An Azure subscription
-- Access granted to service in the desired Azure subscription.
-- Azure CLI. [Installation Guide](/cli/azure/install-azure-cli)
+- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>
+- Access granted to the Azure OpenAI service in the desired Azure subscription
+
+ Currently, access to this service is granted only by application. You can apply for access to the Azure OpenAI service by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue.
+- Azure CLI - [Installation Guide](/cli/azure/install-azure-cli)
- The following Python libraries: os, requests, json

## Sign into the Azure CLI
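Once you're signed in and the role assignment is in place, the call itself just swaps the key header for a bearer token. A hedged sketch with the `azure-identity` package (the resource, deployment name, and api-version are placeholders):

```python
import requests
from azure.identity import DefaultAzureCredential

# Acquire an Azure AD token for the Cognitive Services scope.
credential = DefaultAzureCredential()
token = credential.get_token("https://cognitiveservices.azure.com/.default")

# Placeholders: your resource, deployment, and a supported api-version.
url = (
    "https://YOUR-RESOURCE-NAME.openai.azure.com/openai/deployments/"
    "YOUR-DEPLOYMENT-NAME/completions"
)
response = requests.post(
    url,
    params={"api-version": "2022-06-01-preview"},  # assumption; check the reference
    headers={"Authorization": f"Bearer {token.token}"},
    json={"prompt": "Hello", "max_tokens": 5},
)
print(response.json())
```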
cognitive-services Work With Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/work-with-code.md
keywords:
# Codex models and Azure OpenAI
-The Codex model series is a descendant of our GPT-3 series that's been trained on both natural language and billions of lines of code. It's most capable in Python and proficient in over a dozen languages including JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.
+The Codex model series is a descendant of our GPT-3 series that's been trained on both natural language and billions of lines of code. It's most capable in Python and proficient in over a dozen languages including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.
You can use Codex for a variety of tasks including:
You can use Codex for a variety of tasks including:
## How to use the Codex models
-Here are a few examples of using Codex that can be tested in the [Azure OpenAI Studio's](https://oai.azure.com) playground with a deployment of a code series model such as `code-cushman-001`.
+Here are a few examples of using Codex that can be tested in the [Azure OpenAI Studio's](https://oai.azure.com) playground with a deployment of a Codex series model, such as `code-davinci-002`.
### Saying "Hello" (Python)
animals = [ {"name": "Chomper", "species": "Hamster"}, {"name":
### Lower temperatures give more precise results
-Setting the API temperature to 0, or close to zero (such as 0.1 or 0.2) tends to give better results in most cases. Unlike GPT-3, where a higher temperature can provide useful creative and random results, higher temperatures with Codex may give you really random or erratic responses.
+Setting the API temperature to 0, or close to zero (such as 0.1 or 0.2) tends to give better results in most cases. Unlike GPT-3 models, where a higher temperature can provide useful creative and random results, higher temperatures with Codex models may give you really random or erratic responses.
In cases where you need Codex to provide different potential results, start at zero and then increment upwards by 0.1 until you find suitable variation.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/overview.md
The Azure OpenAI service provides REST API access to OpenAI's powerful language
### Features overview

| Feature | Azure OpenAI |
-| | |
-| Models available | GPT-3 base series <br> Codex Series <br> Embeddings Series <br> Learn more in our [Models](./concepts/models.md) page.|
-| Fine-tuning | Ada, <br>Babbage, <br> Curie,<br>Code-cushman-001* <br> Davinci*<br> \* available by request|
+| | |
+| Models available | GPT-3 base series <br> Codex series <br> Embeddings series <br> Learn more in our [Models](./concepts/models.md) page.|
+| Fine-tuning | Ada <br> Babbage <br> Curie <br> Cushman* <br> Davinci* <br> \* available by request |
| Billing Model| Coming Soon |
| Virtual network support | Yes |
| Managed Identity| Yes, via Azure Active Directory |
| UI experience | **Azure Portal** for account & resource management, <br> **Azure OpenAI Service Studio** for model exploration and fine tuning |
-| Regional availability | South Central US, <br> West Europe |
+| Regional availability | South Central US <br> West Europe |
| Content filtering | Prompts and completions are evaluated against our content policy with automated systems. High severity content will be filtered. |

## Responsible AI
The number of examples typically range from 0 to 100 depending on how many can f
### Models
-The service provides users access to several different models. Each model provides a different capability and price point. The base GPT-3 models are known as Davinci, Curie, Babbage and Ada in decreasing order of intelligence and speed.
+The service provides users access to several different models. Each model provides a different capability and price point. The GPT-3 base models are known as Davinci, Curie, Babbage, and Ada in decreasing order of capability and speed.
-The Codex series of models are a descendant of GPT-3 and have been trained on both natural language and code to power natural language to code use cases. Learn more about each model on our [models concept page](./concepts/models.md).
+The Codex series of models is a descendant of GPT-3 and has been trained on both natural language and code to power natural language to code use cases. Learn more about each model on our [models concept page](./concepts/models.md).
## Next steps
cognitive-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/quotas-limits.md
The following sections provide you with a quick guide to the quotas and limits t
| Max Files per resource | 50 |
| Total size of all files per resource | 1 GB|
| Max training job time (job will fail if exceeded) | 120 hours |
-| Max training job size (tokens in training file * # of epochs) | **Ada**: 4-M tokens <br> **Babbage**: 4-M tokens <br> **Curie**: 4-M tokens <br> **Cushman**: 4-M tokens <br> **DaVinci**: 500 K |
+| Max training job size (tokens in training file * # of epochs) | **Ada**: 4-M tokens <br> **Babbage**: 4-M tokens <br> **Curie**: 4-M tokens <br> **Cushman**: 4-M tokens <br> **Davinci**: 500 K |
### General best practices to mitigate throttling during autoscaling
cognitive-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/reference.md
POST https://{your-resource-name}.openai.azure.com/openai/fine-tunes?api-version
| validation_file| string | no | null | The ID of an uploaded file that contains validation data. <br> If you provide this file, the data is used to generate validation metrics periodically during fine-tuning. These metrics can be viewed in the fine-tuning results file. Your train and validation data should be mutually exclusive. <br><br> Your dataset must be formatted as a JSONL file, where each validation example is a JSON object with the keys "prompt" and "completion". Additionally, you must upload your file with the purpose fine-tune. |
| batch_size | integer | no | null | The batch size to use for training. The batch size is the number of training examples used to train a single forward and backward pass. <br><br> By default, the batch size will be dynamically configured to be ~0.2% of the number of examples in the training set, capped at 256 - in general, we've found that larger batch sizes tend to work better for larger datasets. |
| learning_rate_multiplier | number (double) | no | null | The learning rate multiplier to use for training. The fine-tuning learning rate is the original learning rate used for pre-training multiplied by this value.<br><br> We recommend experimenting with values in the range 0.02 to 0.2 to see what produces the best results. |
-| n_epochs | integer | no | 4 for `ada`, `babbage`, `curie`. 1 for `DaVinci` | The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset. |
+| n_epochs | integer | no | 4 for `ada`, `babbage`, `curie`. 1 for `davinci` | The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset. |
| prompt_loss_weight | number (double) | no | 0.1 | The weight to use for loss on the prompt tokens. This controls how much the model tries to learn to generate the prompt (as compared to the completion, which always has a weight of 1.0), and can add a stabilizing effect to training when completions are short. <br><br> |
| compute_classification_metrics | boolean | no | false | If set, we calculate classification-specific metrics such as accuracy and F-1 score using the validation set at the end of every epoch. |
| classification_n_classes | integer | no | null | The number of classes in a classification task. This parameter is required for multiclass classification |
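To show how these parameters fit together, here's a hedged `requests` sketch of creating a fine-tune job (the file IDs, key, and api-version are placeholders to replace with your own values):

```python
import requests

resource = "YOUR-RESOURCE-NAME"
api_key = "YOUR-API-KEY"

body = {
    "model": "curie",                     # base model to fine-tune
    "training_file": "file-TRAINING-ID",  # placeholder uploaded-file IDs
    "validation_file": "file-VALIDATION-ID",
    "n_epochs": 4,
    "learning_rate_multiplier": 0.1,      # within the recommended 0.02 to 0.2 range
}

response = requests.post(
    f"https://{resource}.openai.azure.com/openai/fine-tunes",
    params={"api-version": "2022-06-01-preview"},  # assumption; check the reference
    headers={"api-key": api_key},
    json=body,
)
print(response.json())
```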
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/chat/concepts.md
Some SDKs (like the JavaScript Chat SDK) support real-time notifications. This f
## Push notifications

To send push notifications for messages missed by your users while they were away, Communication Services provides two different ways to integrate:
+ - Use an Event Grid resource to subscribe to chat related events (post operation) which can be plugged into your custom app notification service. For more details, see [Server Events](../../../event-grid/event-schema-communication-services.md?bc=/azure/bread/toc.json&toc=/azure/communication-services/toc.json).
- Connect a Notification Hub resource with your Communication Services resource to send push notifications and notify your application users about incoming chats and messages when the mobile app is not running in the foreground. The iOS and Android SDKs support the following event:
communication-services Add Chat Push Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/add-chat-push-notifications.md
Access the sample code for this tutorial on [GitHub](https://github.com/Azure-Sa
## Prerequisites
-1. Finish all the prerequisite steps in [Chat Quickstart](https://docs.microsoft.com/azure/communication-services/quickstarts/chat/get-started?pivots=programming-language-swift)
+1. Finish all the prerequisite steps in [Chat Quickstart](/azure/communication-services/quickstarts/chat/get-started?pivots=programming-language-swift)
2. ANH setup: Create an Azure Notification Hub within the same subscription as your Communication Services resource and link the Notification Hub to your Communication Services resource. See [Notification Hub provisioning](../concepts/notifications.md#notification-hub-provisioning).
In protocol extension, chat SDK provides the implementation of `decryptPayload(n
5. Plug the iOS device into your Mac, run the program, and click "Allow" when asked to authorize push notifications on the device.
6. As User B, send a chat message. You (User A) should be able to receive a push notification on your iOS device.
--
connectors Connectors Native Reqres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-reqres.md
ms.suite: integration ms.reviewers: estfan, azla Previously updated : 08/16/2022 Last updated : 08/29/2022 tags: connectors
When you use the Request trigger to receive inbound requests, you can model the
> * If you have one or more Response actions in a complex workflow with branches, make sure that the workflow
> processes at least one Response action during runtime. Otherwise, if all Response actions are skipped,
> the caller receives a **502 Bad Gateway** error, even if the workflow finishes successfully.
+>
+> * In a Standard logic app *stateless* workflow, the Response action must appear last in your workflow. If the action appears
+> anywhere else, Azure Logic Apps still won't run the action until all other actions finish running.
+
## [Consumption](#tab/consumption)
cosmos-db Dedicated Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/dedicated-gateway.md
Previously updated : 11/08/2021 Last updated : 08/29/2022
-# Azure Cosmos DB dedicated gateway - Overview (Preview)
+# Azure Cosmos DB dedicated gateway - Overview
[!INCLUDE[appliesto-sql-api](includes/appliesto-sql-api.md)]

A dedicated gateway is server-side compute that is a front-end to your Azure Cosmos DB account. When you connect to the dedicated gateway, it both routes requests and caches data. Like provisioned throughput, the dedicated gateway is billed hourly.

## Overview
-You can provision a dedicated gateway to improve performance at scale. The most common reason that you would want to provision a dedicated gateway would be for caching. When you provision a dedicated gateway, an [integrated cache](integrated-cache.md) is automatically configured within the dedicated gateway. Point reads and queries that hit the integrated cache do not use any of your RUs. Provisioning a dedicated gateway with an integrated cache can help read-heavy workloads lower costs on Azure Cosmos DB.
+You can provision a dedicated gateway to improve performance at scale. The most common reason that you would want to provision a dedicated gateway is for caching. When you provision a dedicated gateway, an [integrated cache](integrated-cache.md) is automatically configured within the dedicated gateway. Point reads and queries that hit the integrated cache do not use any of your RUs. Provisioning a dedicated gateway with an integrated cache can help read-heavy workloads lower costs on Azure Cosmos DB.
-The dedicated gateway is built into Azure Cosmos DB. When you [provision a dedicated gateway](how-to-configure-integrated-cache.md), you have a fully-managed node that routes requests to backend partitions. Connecting to Azure Cosmos DB with the dedicated gateway provides lower and more predictable latency than connecting to Azure Cosmos DB with the standard gateway. Even cache misses will see latency improvements when comparing the dedicated gateway and standard gateway.
+The dedicated gateway is built into Azure Cosmos DB. When you [provision a dedicated gateway](how-to-configure-integrated-cache.md), you have a fully managed node that routes requests to backend partitions. Connecting to Azure Cosmos DB with the dedicated gateway provides lower and more predictable latency than connecting to Azure Cosmos DB with the standard gateway. Even cache misses will see latency improvements when comparing the dedicated gateway and standard gateway.
There are only minimal code changes required in order for your application to use a dedicated gateway. Both new and existing Azure Cosmos DB accounts can provision a dedicated gateway for improved read performance.
cosmoscachefeedback@microsoft.com
## Connection modes
-There are three ways to connect to an Azure Cosmos DB account:
+There are two [connectivity modes](./sql/sql-sdk-connection-modes.md) for Azure Cosmos DB, Direct mode and Gateway mode. With Gateway mode you can connect to either the standard gateway or the dedicated gateway depending on the endpoint you configure.
-- [Direct mode](#connect-to-azure-cosmos-db-using-direct-mode)
-- [Gateway mode using the standard gateway](#connect-to-azure-cosmos-db-using-gateway-mode)
-- [Gateway mode using the dedicated gateway](#connect-to-azure-cosmos-db-using-the-dedicated-gateway) (only available for SQL API accounts)

### Connect to Azure Cosmos DB using direct mode
-When you connect to Azure Cosmos DB using direct mode, your application connects directly to the Azure Cosmos DB backend. Even if you have many physical partitions, request routing is handled entirely client-side. Direct mode offers low latency because your application can communicate directly with the Azure Cosmos DB backend and doesn't need an intermediate network hop.
-
-Graphical representation of direct mode connection:
-
+When you connect to Azure Cosmos DB using direct mode, your application connects directly to the Azure Cosmos DB backend. Even if you have many physical partitions, request routing is handled entirely client-side. Direct mode offers low latency because your application can communicate directly with the Azure Cosmos DB backend and doesn't need an intermediate network hop. If you choose to connect with direct mode your requests will not use the dedicated gateway or the integrated cache.
### Connect to Azure Cosmos DB using gateway mode
If you connect to Azure Cosmos DB using gateway mode, your application will conn
When connecting to Azure Cosmos DB with gateway mode, you can connect with either of the following options:
-* **Standard gateway** - While the backend, which includes your provisioned throughput and storage, has dedicated capacity per container, the standard gateway is shared between many Azure Cosmos accounts. It is practical for many customers to share a standard gateway since the compute resources consumed by each individual customer is small.
+* **Standard gateway** - While the backend, which includes your provisioned throughput and storage, has dedicated capacity per container, the standard gateway is shared between many Azure Cosmos DB accounts. It is practical for many customers to share a standard gateway since the compute resources consumed by each individual customer are small.
* **Dedicated gateway** - In this gateway, the backend and gateway both have dedicated capacity. The integrated cache requires a dedicated gateway because it requires significant CPU and memory that is specific to your Azure Cosmos account.
-### Connect to Azure Cosmos DB using the dedicated gateway
-
-You must connect to Azure Cosmos DB using the dedicated gateway in order to use the integrated cache. The dedicated gateway has a different endpoint from the standard one provided with your Azure Cosmos DB account. When you connect to your dedicated gateway endpoint, your application sends a request to the dedicated gateway, which then routes the request to different backend nodes. If possible, the integrated cache will serve the result.
+You must connect to Azure Cosmos DB using the dedicated gateway in order to use the integrated cache. The dedicated gateway has a different endpoint from the standard one provided with your Azure Cosmos DB account, but requests are routed in the same way. When you connect to your dedicated gateway endpoint, your application sends a request to the dedicated gateway, which then routes the request to different backend nodes. If possible, the integrated cache will serve the result.
Diagram of gateway mode connection with a dedicated gateway:

## Provisioning the dedicated gateway
-A dedicated gateway cluster can be provisioned in Core (SQL) API accounts. A dedicated gateway cluster can have up to five nodes and you can add or remove nodes at any time. All dedicated gateway nodes within your account [share the same connection string](how-to-configure-integrated-cache.md#configuring-the-integrated-cache).
+A dedicated gateway cluster can be provisioned in Core (SQL) API accounts. A dedicated gateway cluster can have up to five nodes by default and you can add or remove nodes at any time. All dedicated gateway nodes within your account [share the same connection string](how-to-configure-integrated-cache.md#configuring-the-integrated-cache).
-Dedicated gateway nodes are independent from one another. When you provision multiple dedicated gateway nodes, any single node can route any given request. In addition, each node has a separate integrated cache from the others. The cached data within each node depends on the data that was recently [written or read](integrated-cache.md#item-cache) through that specific node. In other words, if an item or query is cached on one node, it isn't necessarily cached on the others.
+Dedicated gateway nodes are independent from one another. When you provision multiple dedicated gateway nodes, any single node can route any given request. In addition, each node has a separate integrated cache from the others. The cached data within each node depends on the data that was recently [written or read](integrated-cache.md#item-cache) through that specific node. If an item or query is cached on one node, it isn't necessarily cached on the others.
For development, we recommend starting with one node but for production, you should provision three or more nodes for high availability. [Learn how to provision a dedicated gateway cluster with an integrated cache](how-to-configure-integrated-cache.md). Provisioning multiple dedicated gateway nodes allows the dedicated gateway cluster to continue to route requests and serve cached data, even when one of the dedicated gateway nodes is unavailable.
-Because it is in public preview, the dedicated gateway does not have an availability SLA. However, you should generally expect comparable availability to the rest of your Azure Cosmos DB account.
-
-The dedicated gateway is available in the following sizes:
+The dedicated gateway is available in the following sizes. The integrated cache uses approximately 50% of the memory and the rest is reserved for metadata and routing requests to backend partitions.
| **Sku Name** | **vCPU** | **Memory** |
| -- | -- | -- |
The dedicated gateway is available in the following sizes:
| **D16s** | **16** | **64 GB** |

> [!NOTE]
-> Once created, you can't modify the size of the dedicated gateway nodes. However, you can add or remove nodes.
+> Once created, you can add or remove dedicated gateway nodes, but you can't modify the size of the nodes. To change the size of your dedicated gateway nodes you can deprovision the cluster and provision it again in a different size. This will result in a short period of downtime unless you change the connection string in your application to use the standard gateway during reprovisioning.
There are many different ways to provision a dedicated gateway:

-- [Provision a dedicated gateway using the Azure Portal](how-to-configure-integrated-cache.md#provision-a-dedicated-gateway-cluster)
-- [Use Azure Cosmos DB's REAT API](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/service/create)
+- [Provision a dedicated gateway using the Azure Portal](how-to-configure-integrated-cache.md#provision-the-dedicated-gateway)
+- [Use Azure Cosmos DB's REST API](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/service/create)
- [Azure CLI](/cli/azure/cosmosdb/service?view=azure-cli-latest&preserve-view=true#az-cosmosdb-service-create)
- [ARM template](/azure/templates/microsoft.documentdb/databaseaccounts/services?tabs=bicep)
  - Note: You cannot deprovision a dedicated gateway using ARM templates
There are many different ways to provision a dedicated gateway:
When you provision a dedicated gateway cluster in multi-region accounts, identical dedicated gateway clusters are provisioned in each region. For example, consider an Azure Cosmos DB account in East US and North Europe. If you provision a dedicated gateway cluster with two D8 nodes in this account, you'd have four D8 nodes in total - two in East US and two in North Europe. You don't need to explicitly configure dedicated gateways in each region and your connection string remains the same. There are also no changes to best practices for performing failovers.
-> [!NOTE]
-> You cannot provision a dedicated gateway cluster in accounts with availability zones enabled
-
Like nodes within a cluster, dedicated gateway nodes across regions are independent. It's possible that the cached data in each region will be different, depending on the recent reads or writes to that region.

## Limitations
-The dedicated gateway has the following limitations during the public preview:
+The dedicated gateway has the following limitations:
-- Dedicated gateways are only supported on SQL API accounts.
-- You can't provision a dedicated gateway in Azure Cosmos DB accounts with [IP firewalls](how-to-configure-firewall.md) or [Private Link](how-to-configure-private-endpoints.md) configured.
-- You can't provision a dedicated gateway in an Azure Cosmos DB account in a [Virtual Network (Vnet)](how-to-configure-vnet-service-endpoint.md)
+- Dedicated gateways are only supported on SQL API accounts
- You can't provision a dedicated gateway in Azure Cosmos DB accounts with [availability zones](../availability-zones/az-region.md).
- You can't use [role-based access control (RBAC)](how-to-setup-rbac.md) to authenticate data plane requests routed through the dedicated gateway
-The dedicated gateway blade is hidden on Azure Cosmos DB accounts with IP firewalls, Vnet, Private Link, or availability zones.
-
-## Supported regions
-
-The dedicated gateway is in public preview and isn't supported in every Azure region yet. Throughout the public preview, we'll be adding new capacity. We won't have region restrictions when the dedicated gateway is generally available.
-
-Current list of supported Azure regions:
-
-| **Americas** | **Europe and Africa** | **Asia Pacific** |
-| | -- | -- |
-| Brazil South | France Central | Australia Central |
-| Canada Central | France South | Australia Central 2 |
-| Canada East | Germany North | Australia Southeast |
-| Central US | Germany West Central | Central India |
-| East US | North Europe | East Asia |
-| East US 2 | Switzerland North | Japan West |
-| North Central US | UK South | Korea Central |
-| South Central US | UK West | Korea South |
-| West Central US | West Europe | Southeast Asia |
-| West US | | UAE Central |
-| West US 2 | | West India |
-
## Next steps
Read more about dedicated gateway usage in the following articles:
- [Configure the integrated cache](how-to-configure-integrated-cache.md)
- [Integrated cache FAQ](integrated-cache-faq.md)
- Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- - If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
+ - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
- If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db How To Configure Integrated Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-configure-integrated-cache.md
Previously updated : 09/28/2021 Last updated : 08/29/2022
-# How to configure the Azure Cosmos DB integrated cache (Preview)
+# How to configure the Azure Cosmos DB integrated cache
[!INCLUDE[appliesto-sql-api](includes/appliesto-sql-api.md)]

This article describes how to provision a dedicated gateway, configure the integrated cache, and connect your application.
This article describes how to provision a dedicated gateway, configure the integ
- An existing application that uses Azure Cosmos DB. If you don't have one, [here are some examples](https://github.com/AzureCosmosDB/labs).
- An existing [Azure Cosmos DB SQL (core) API account](create-cosmosdb-resources-portal.md).
-## Provision a dedicated gateway cluster
+## Provision the dedicated gateway
1. Navigate to an Azure Cosmos DB account in the Azure portal and select the **Dedicated Gateway** tab.
- :::image type="content" source="./media/how-to-configure-integrated-cache/dedicated-gateway-tab.png" alt-text="An image that shows how to navigate to the dedicated gateway tab" lightbox="./media/how-to-configure-integrated-cache/dedicated-gateway-tab.png" border="false":::
+ :::image type="content" source="./media/how-to-configure-integrated-cache/dedicated-gateway-tab.png" alt-text="Screenshot of the Azure Portal that shows how to navigate to the Azure Cosmos DB dedicated gateway tab." lightbox="./media/how-to-configure-integrated-cache/dedicated-gateway-tab.png" :::
2. Fill out the **Dedicated gateway** form with the following details:

 * **Dedicated Gateway** - Turn on the toggle to **Provisioned**.
- * **SKU** - Select a SKU with the required compute and memory size.
- * **Number of instances** - Number of nodes. For development purpose, we recommend starting with one node of the D4 size. Based on the amount of data you need to cache, you can increase the node size after initial testing.
+ * **SKU** - Select a SKU with the required compute and memory size. The integrated cache will use approximately 50% of the memory, and the remaining memory is used for metadata and routing requests to the backend partitions.
+ * **Number of instances** - Number of nodes. For development purpose, we recommend starting with one node of the D4 size. Based on the amount of data you need to cache and to achieve high availability, you can increase the node size after initial testing.
- :::image type="content" source="./media/how-to-configure-integrated-cache/dedicated-gateway-input.png" alt-text="An image that shows sample input settings for creating a dedicated gateway cluster" lightbox="./media/how-to-configure-integrated-cache/dedicated-gateway-input.png" border="false":::
+ :::image type="content" source="./media/how-to-configure-integrated-cache/dedicated-gateway-input.png" alt-text="Screenshot of the Azure Portal dedicated gateway tab that shows sample input settings for creating a dedicated gateway cluster." lightbox="./media/how-to-configure-integrated-cache/dedicated-gateway-input.png" :::
3. Select **Save** and wait about 5-10 minutes for the dedicated gateway provisioning to complete. When the provisioning is done, you'll see the following notification:
- :::image type="content" source="./media/how-to-configure-integrated-cache/dedicated-gateway-notification.png" alt-text="An image that shows how to check if dedicated gateway provisioning is complete" lightbox="./media/how-to-configure-integrated-cache/dedicated-gateway-notification.png" border="false":::
+ :::image type="content" source="./media/how-to-configure-integrated-cache/dedicated-gateway-notification.png" alt-text="Screenshot of a notification in the Azure Portal that shows how to check if dedicated gateway provisioning is complete." lightbox="./media/how-to-configure-integrated-cache/dedicated-gateway-notification.png" :::
## Configuring the integrated cache
-1. When you create a dedicated gateway, an integrated cache is automatically provisioned. The integrated cache will use approximately 70% of the memory in the dedicated gateway. The remaining 30% of memory in the dedicated gateway is used for routing requests to the backend partitions.
+When you create a dedicated gateway, an integrated cache is automatically provisioned.
-2. Modify your application's connection string to use the new dedicated gateway endpoint.
+1. Modify your application's connection string to use the new dedicated gateway endpoint.
The updated dedicated gateway connection string is in the **Keys** blade:
- :::image type="content" source="./media/how-to-configure-integrated-cache/dedicated-gateway-connection-string.png" alt-text="An image that shows the dedicated gateway connection string" lightbox="./media/how-to-configure-integrated-cache/dedicated-gateway-connection-string.png" border="false":::
+ :::image type="content" source="./media/how-to-configure-integrated-cache/dedicated-gateway-connection-string.png" alt-text="Screenshot of the Azure portal keys tab with the dedicated gateway connection string." lightbox="./media/how-to-configure-integrated-cache/dedicated-gateway-connection-string.png" :::
All dedicated gateway connection strings follow the same pattern. Remove `documents.azure.com` from your original connection string and replace it with `sqlx.cosmos.azure.com`. A dedicated gateway will always have the same connection string, even if you remove and reprovision it. You don't need to modify the connection string in all applications using the same Azure Cosmos DB account. For example, you could have one `CosmosClient` connect using gateway mode and the dedicated gateway endpoint while another `CosmosClient` uses direct mode. In other words, adding a dedicated gateway doesn't impact the existing ways of connecting to Azure Cosmos DB.
-3. If you're using the .NET or Java SDK, set the connection mode to [gateway mode](sql-sdk-connection-modes.md#available-connectivity-modes). This step isn't necessary for the Python and Node.js SDKs since they don't have additional options of connecting besides gateway mode.
+2. If you're using the .NET or Java SDK, set the connection mode to [gateway mode](sql-sdk-connection-modes.md#available-connectivity-modes). This step isn't necessary for the Python and Node.js SDKs since gateway mode is their only connectivity option.
> [!NOTE]
> If you are using the latest .NET or Java SDK version, the default connection mode is direct mode. In order to use the integrated cache, you must override this default.
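For illustration, here's a minimal .NET sketch of a `CosmosClient` configured for the dedicated gateway (the account name and key are placeholders; the endpoint follows the `sqlx.cosmos.azure.com` pattern described above):

```csharp
using Microsoft.Azure.Cosmos;

// A sketch only: the account name and key below are placeholders.
// The dedicated gateway endpoint replaces documents.azure.com with sqlx.cosmos.azure.com.
CosmosClient client = new CosmosClient(
    "AccountEndpoint=https://mycosmosaccount.sqlx.cosmos.azure.com/;AccountKey=<your-account-key>;",
    new CosmosClientOptions
    {
        // Override the direct-mode default so requests flow through the dedicated gateway.
        ConnectionMode = ConnectionMode.Gateway
    });
```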
-If you're using the Java SDK, you must also manually set [contentResponseOnWriteEnabled](/java/api/com.azure.cosmos.cosmosclientbuilder.contentresponseonwriteenabled?view=azure-java-stable&preserve-view=true) to `true` within the `CosmosClientBuilder`. If you're using any other SDK, this value already defaults to `true`, so you don't need to make any changes.
- ## Adjust request consistency
-You must adjust the request consistency to session or eventual. If not, the request will always bypass the integrated cache. The easiest way to configure a specific consistency for all read operations is to [set it at the account-level](consistency-levels.md#configure-the-default-consistency-level). You can also configure consistency at the [request-level](how-to-manage-consistency.md#override-the-default-consistency-level), which is recommended if you only want a subset of your reads to utilize the integrated cache.
+You must ensure the request consistency is session or eventual. If not, the request will always bypass the integrated cache. The easiest way to configure a specific consistency for all read operations is to [set it at the account-level](consistency-levels.md#configure-the-default-consistency-level). You can also configure consistency at the [request-level](how-to-manage-consistency.md#override-the-default-consistency-level), which is recommended if you only want a subset of your reads to utilize the integrated cache.
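For example, here's a hedged .NET sketch of a request-level consistency override on a point read (the item ID, partition key value, and `MyClass` type are illustrative):

```csharp
// A sketch only: request eventual consistency on a single point read so it is
// eligible to be served from the integrated cache.
ItemResponse<MyClass> response = await container.ReadItemAsync<MyClass>(
    id: "item-id",
    partitionKey: new PartitionKey("partition-key-value"),
    requestOptions: new ItemRequestOptions
    {
        ConsistencyLevel = ConsistencyLevel.Eventual
    });
```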
> [!NOTE]
> If you are using the Python SDK, you **must** explicitly set the consistency level for each request. The default account-level setting will not automatically apply.

## Adjust MaxIntegratedCacheStaleness
-Configure `MaxIntegratedCacheStaleness`, which is the maximum time in which you are willing to tolerate stale cached data. We recommend setting the `MaxIntegratedCacheStaleness` as high as possible because it will increase the likelihood that repeated point reads and queries can be cache hits. If you set `MaxIntegratedCacheStaleness` to 0, your read request will **never** use the integrated cache, regardless of the consistency level. When not configured, the default `MaxIntegratedCacheStaleness` is 5 minutes.
+Configure `MaxIntegratedCacheStaleness`, which is the maximum time you're willing to tolerate stale cached data. We recommend setting `MaxIntegratedCacheStaleness` as high as possible because it increases the likelihood that repeated point reads and queries can be cache hits. If you set `MaxIntegratedCacheStaleness` to 0, your read request will **never** use the integrated cache, regardless of the consistency level. When not configured, the default `MaxIntegratedCacheStaleness` is 5 minutes.
+
+Adjusting the `MaxIntegratedCacheStaleness` is supported in these versions of each SDK:
-**.NET**
+| SDK | Supported versions |
+| --- | --- |
+| **.NET SDK v3** | *>= 3.30.0* |
+| **Java SDK v4** | *>= 4.34.0* |
+| **Node.js SDK** | *>= 3.17.0* |
+| **Python SDK** | *>= 4.3.1* |
+
+### [.NET](#tab/dotnet)
```csharp
-FeedIterator<Food> myQuery = container.GetItemQueryIterator<Food>(new QueryDefinition("SELECT * FROM c"), requestOptions: new QueryRequestOptions
+FeedIterator<MyClass> myQuery = container.GetItemQueryIterator<MyClass>(new QueryDefinition("SELECT * FROM c"), requestOptions: new QueryRequestOptions
{
-    ConsistencyLevel = ConsistencyLevel.Eventual,
    DedicatedGatewayRequestOptions = new DedicatedGatewayRequestOptions
    {
        MaxIntegratedCacheStaleness = TimeSpan.FromMinutes(30)
    }
}
);
```
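Point reads can carry the same staleness window, assuming `ItemRequestOptions` exposes `DedicatedGatewayRequestOptions` the same way the query options do; a sketch under the same assumptions as the query example (the item ID and partition key value are illustrative):

```csharp
// A sketch only: apply a 30-minute staleness window to a point read.
ItemResponse<MyClass> item = await container.ReadItemAsync<MyClass>(
    id: "item-id",
    partitionKey: new PartitionKey("partition-key-value"),
    requestOptions: new ItemRequestOptions
    {
        DedicatedGatewayRequestOptions = new DedicatedGatewayRequestOptions
        {
            MaxIntegratedCacheStaleness = TimeSpan.FromMinutes(30)
        }
    });
```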
-> [!NOTE]
-> Currently, you can only adjust the MaxIntegratedCacheStaleness using the latest [.NET](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.17.0-preview) and [Java](https://mvnrepository.com/artifact/com.azure/azure-cosmos/4.16.0-beta.1) preview SDK's.
+### [Java](#tab/java)
+
+```java
+DedicatedGatewayRequestOptions dgOptions = new DedicatedGatewayRequestOptions()
+ .setMaxIntegratedCacheStaleness(Duration.ofMinutes(30));
+CosmosQueryRequestOptions queryOptions = new CosmosQueryRequestOptions()
+ .setDedicatedGatewayRequestOptions(dgOptions);
+
+CosmosPagedFlux<MyClass> pagedFluxResponse = container.queryItems(
+ "SELECT * FROM c", queryOptions, MyClass.class);
+```
+
+### [Node.js](#tab/nodejs)
+
+```javascript
+const queryRequestOptions = {
+  maxIntegratedCacheStalenessInMs: 1800000
+};
+const querySpec = {
+  query: "SELECT * from c"
+};
+const { resources: items } = await container.items
+  .query(querySpec, queryRequestOptions)
+  .fetchAll();
+```
+
+### [Python](#tab/python)
+
+```python
+query = "SELECT * FROM c"
+container.query_items(
+ query=query,
+ max_integrated_cache_staleness_in_ms=1800000
+)
+```
+---

## Verify cache hits
-Finally, you can restart your application and verify integrated cache hits for repeated point reads or queries. Once you've modified your `CosmosClient` to use the dedicated gateway endpoint, all requests will be routed through the dedicated gateway.
+Finally, you can restart your application and verify integrated cache hits for repeated point reads or queries by checking whether the request charge is 0. Once you've modified your `CosmosClient` to use the dedicated gateway endpoint, all requests will be routed through the dedicated gateway.
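For example, a minimal .NET sketch that surfaces the request charge (the item ID, partition key value, and `MyClass` type are placeholders):

```csharp
// A sketch only: a repeated point read served by the integrated cache
// reports a request charge of 0 RUs.
ItemResponse<MyClass> response = await container.ReadItemAsync<MyClass>(
    "item-id", new PartitionKey("partition-key-value"));
Console.WriteLine($"Request charge: {response.RequestCharge}");
```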
For a read request (point read or query) to utilize the integrated cache, **all** of the following criteria must be true:

- Your client connects to the dedicated gateway endpoint
-- Your client uses gateway mode (Python and Node.js SDK's always use gateway mode)
+- Your client uses gateway mode (Python and Node.js SDKs always use gateway mode)
- The consistency for the request must be set to session or eventual

> [!NOTE]
cosmos-db How To Configure Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-configure-private-endpoints.md
Use the following steps to create a private endpoint for an existing Azure Cosmo
| Subscription | Select your subscription. |
| Resource type | Select **Microsoft.AzureCosmosDB/databaseAccounts**. |
| Resource | Select your Azure Cosmos account. |
- |Target sub-resource |Select the Azure Cosmos DB API type that you want to map. This defaults to only one choice for the SQL, MongoDB, and Cassandra APIs. For the Gremlin and Table APIs, you can also choose **Sql** because these APIs are interoperable with the SQL API. |
+ |Target sub-resource |Select the Azure Cosmos DB API type that you want to map. This defaults to only one choice for the SQL, MongoDB, and Cassandra APIs. For the Gremlin and Table APIs, you can also choose **Sql** because these APIs are interoperable with the SQL API. If you have a [dedicated gateway](./dedicated-gateway.md) provisioned for a SQL API account, you will also see an option for **SqlDedicated**. |
|||

1. Select **Next: Configuration**.
Use the following steps to create a private endpoint for an existing Azure Cosmo
| Virtual network | Select your virtual network. |
| Subnet | Select your subnet. |
|**Private DNS Integration**||
- |Integrate with private DNS zone |Select **Yes**. <br><br/> To connect privately with your private endpoint, you need a DNS record. We recommend that you integrate your private endpoint with a private DNS zone. You can also use your own DNS servers or create DNS records by using the host files on your virtual machines. <br><br/> When you select yes for this option, a private DNS zone group is also created. DNS zone group is a link between the private DNS zone and the private endpoint. This link helps you to auto update the private DNS Zone when there is an update to the private endpoint. For example, when you add or remove regions,the private DNS zone is automatically updated. |
+ |Integrate with private DNS zone |Select **Yes**. <br><br/> To connect privately with your private endpoint, you need a DNS record. We recommend that you integrate your private endpoint with a private DNS zone. You can also use your own DNS servers or create DNS records by using the host files on your virtual machines. <br><br/> When you select yes for this option, a private DNS zone group is also created. A DNS zone group is a link between the private DNS zone and the private endpoint. This link helps you automatically update the private DNS zone when there is an update to the private endpoint. For example, when you add or remove regions, the private DNS zone is automatically updated. |
|Private DNS Zone |Select **privatelink.documents.azure.com**. <br><br/> The private DNS zone is determined automatically. You can't change it by using the Azure portal.|
|||
When you have approved Private Link for an Azure Cosmos account, in the Azure po
## <a id="private-zone-name-mapping"></a>API types and private zone names
-The following table shows the mapping between different Azure Cosmos account API types, supported sub-resources, and the corresponding private zone names. You can also access the Gremlin and Table API accounts through the SQL API, so there are two entries for these APIs.
+The following table shows the mapping between different Azure Cosmos account API types, supported sub-resources, and the corresponding private zone names. You can also access the Gremlin and Table API accounts through the SQL API, so there are two entries for these APIs. There is also an extra entry for the SQL API for accounts using the [dedicated gateway](./dedicated-gateway.md).
|Azure Cosmos account API type |Supported sub-resources (or group IDs) |Private zone name |
|---|---|---|
|Sql | Sql | privatelink.documents.azure.com |
+|Sql | SqlDedicated | privatelink.sqlx.cosmos.azure.com |
|Cassandra | Cassandra | privatelink.cassandra.cosmos.azure.com |
|Mongo | MongoDB | privatelink.mongo.cosmos.azure.com |
|Gremlin | Gremlin | privatelink.gremlin.cosmos.azure.com |
$ResourceGroupName = "myResourceGroup"
# Name of the Azure Cosmos account
$CosmosDbAccountName = "mycosmosaccount"
-# API type of the Azure Cosmos account: Sql, MongoDB, Cassandra, Gremlin, or Table
-$CosmosDbApiType = "Sql"
+# Sub-resource type for the Azure Cosmos account: Sql, SqlDedicated, MongoDB, Cassandra, Gremlin, or Table
+$CosmosDbSubResourceType = "Sql"
# Name of the existing virtual network
$VNetName = "myVnet"
# Name of the target subnet in the virtual network
$Location = "westcentralus"
$cosmosDbResourceId = "/subscriptions/$($SubscriptionId)/resourceGroups/$($ResourceGroupName)/providers/Microsoft.DocumentDB/databaseAccounts/$($CosmosDbAccountName)"
-$privateEndpointConnection = New-AzPrivateLinkServiceConnection -Name "myConnectionPS" -PrivateLinkServiceId $cosmosDbResourceId -GroupId $CosmosDbApiType
+$privateEndpointConnection = New-AzPrivateLinkServiceConnection -Name "myConnectionPS" -PrivateLinkServiceId $cosmosDbResourceId -GroupId $CosmosDbSubResourceType
$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName $ResourceGroupName -Name $VNetName
SubscriptionId="<your Azure subscription ID>"
# Name of the existing Azure Cosmos account
CosmosDbAccountName="mycosmosaccount"
-# API type of your Azure Cosmos account: Sql, MongoDB, Cassandra, Gremlin, or Table
-CosmosDbApiType="Sql"
+# API type of your Azure Cosmos account: Sql, SqlDedicated, MongoDB, Cassandra, Gremlin, or Table
+CosmosDbSubResourceType="Sql"
# Name of the virtual network to create
VNetName="myVnet"
az network private-endpoint create \
--vnet-name $VNetName \ --subnet $SubnetName \ --private-connection-resource-id "/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.DocumentDB/databaseAccounts/$CosmosDbAccountName" \
- --group-ids $CosmosDbApiType \
+ --group-ids $CosmosDbSubResourceType \
--connection-name $PrivateConnectionName ```
$SubscriptionId = "<your Azure subscription ID>"
$ResourceGroupName = "myResourceGroup"
# Name of the Azure Cosmos account
$CosmosDbAccountName = "mycosmosaccount"
-# API type of the Azure Cosmos account. It can be one of the following: "Sql", "MongoDB", "Cassandra", "Gremlin", "Table"
-$CosmosDbApiType = "Sql"
+# API type of the Azure Cosmos account. It can be one of the following: "Sql", "SqlDedicated", "MongoDB", "Cassandra", "Gremlin", "Table"
+$CosmosDbSubResourceType = "Sql"
# Name of the existing virtual network
$VNetName = "myVnet"
# Name of the target subnet in the virtual network
$deploymentOutput = New-AzResourceGroupDeployment -Name "PrivateCosmosDbEndpoint
   -TemplateParameterFile $PrivateEndpointParametersFilePath `
   -SubnetId $SubnetResourceId `
   -ResourceId $CosmosDbResourceId `
- -GroupId $CosmosDbApiType `
+ -GroupId $CosmosDbSubResourceType `
   -PrivateEndpointName $PrivateEndpointName

$deploymentOutput
```
-In the PowerShell script, the `GroupId` variable can contain only one value. That value is the API type of the account. Allowed values are: `Sql`, `MongoDB`, `Cassandra`, `Gremlin`, and `Table`. Some Azure Cosmos account types are accessible through multiple APIs. For example:
+In the PowerShell script, the `GroupId` variable can contain only one value. That value is the API type of the account. Allowed values are: `Sql`, `SqlDedicated`, `MongoDB`, `Cassandra`, `Gremlin`, and `Table`. Some Azure Cosmos account types are accessible through multiple APIs. For example:
+* A SQL API account has an added option for accounts configured to use the [Dedicated Gateway](./dedicated-gateway.md).
* A Gremlin API account can be accessed from both Gremlin and SQL API accounts. * A Table API account can be accessed from both Table and SQL API accounts.
-For those accounts, you must create one private endpoint for each API type. The corresponding API type is specified in the `GroupId` array.
+For those accounts, you must create one private endpoint for each API type. If you are creating a private endpoint for `SqlDedicated`, you only need to add a second endpoint for `Sql` if you want to also connect to your account using the standard gateway. The corresponding API type is specified in the `GroupId` array.
After the template is deployed successfully, you can see an output similar to what the following image shows. The `provisioningState` value is `Succeeded` if the private endpoints are set up correctly.
$SubscriptionId = "<your Azure subscription ID>"
$ResourceGroupName = "myResourceGroup" # Name of the Azure Cosmos account $CosmosDbAccountName = "mycosmosaccount"
-# API type of the Azure Cosmos account. It can be one of the following: "Sql", "MongoDB", "Cassandra", "Gremlin", "Table"
-$CosmosDbApiType = "Sql"
+# API type of the Azure Cosmos account. It can be one of the following: "Sql", "SqlDedicated", "MongoDB", "Cassandra", "Gremlin", "Table"
+$CosmosDbSubResourceType = "Sql"
# Name of the existing virtual network
$VNetName = "myVnet"
# Name of the target subnet in the virtual network
$deploymentOutput = New-AzResourceGroupDeployment -Name "PrivateCosmosDbEndpoint
   -TemplateParameterFile $PrivateEndpointParametersFilePath `
   -SubnetId $SubnetResourceId `
   -ResourceId $CosmosDbResourceId `
- -GroupId $CosmosDbApiType `
+ -GroupId $CosmosDbSubResourceType `
   -PrivateEndpointName $PrivateEndpointName

$deploymentOutput
cosmos-db Integrated Cache Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/integrated-cache-faq.md
Previously updated : 09/20/2021 Last updated : 08/29/2022
# Azure Cosmos DB integrated cache frequently asked questions [!INCLUDE[appliesto-sql-api](includes/appliesto-sql-api.md)]
-The Azure Cosmos DB integrated cache is an in-memory cache that is built-in to Azure Cosmos DB. This article answers commonly asked questions about the Azure Cosmos DB integrated cache.
+The Azure Cosmos DB integrated cache is an in-memory cache that is built into Azure Cosmos DB. This article answers commonly asked questions about the Azure Cosmos DB integrated cache.
## Frequently asked questions
In general, requests routed by the dedicated gateway will have a slightly lower
### What kind of latency should I expect from the integrated cache?
-A request served by the integrated cache is faster because the cached data is stored in-memory on the dedicated gateway, rather than on the backend. For cached point reads, you should expect latency of 2-4 ms.
+A request served by the integrated cache is fast because the cached data is stored in-memory on the dedicated gateway, rather than on the backend.
-For cached queries, latency depends on the query. The query cache works by caching the query engine's response for a particular query. This response is then sent back client-side to the SDK for processing. For simple queries, minimal work in the SDK is required and latencies of 2-4 ms are typical. However, more complex queries with `GROUP BY` or `DISTINCT` require more processing in the SDK so latency may be higher, even with the query cache.
+For cached point reads, you should expect a median latency of 2-4 ms. For cached queries, latency depends on the query. The query cache works by caching the query engine's response for a particular query. This response is then sent back client-side to the SDK for processing. For simple queries, minimal work in the SDK is required and median latencies of 2-4 ms are typical. More complex queries with `GROUP BY` or `DISTINCT` require more processing in the SDK so latency may be higher, even with the query cache.
-If you were previously connecting to Azure Cosmos DB with direct mode and switch to connecting with the dedicated gateway, you may observe a slight latency increase for some requests. Using gateway mode requires a request to be sent to the gateway (in this case the dedicated gateway) and then routed appropriately to the backend. Direct mode, as the name suggests, allows the client to communicate directly with the backend, removing an extra hop.
+If you were previously connecting to Azure Cosmos DB with direct mode and switch to connecting with the dedicated gateway, you may observe a slight latency increase for some requests. Using gateway mode requires a request to be sent to the gateway (in this case the dedicated gateway) and then routed appropriately to the backend. Direct mode, as the name suggests, allows the client to communicate directly with the backend, removing an extra hop. There is no latency SLA for requests using the dedicated gateway.
If your app previously used direct mode, the latency advantages of the integrated cache will be significant in only the following scenarios:
If your app previously used gateway mode with the standard gateway, the integrat
### Does the Azure Cosmos DB availability SLA extend to the dedicated gateway and integrated cache?
-We will have an availability SLA/SLO on the dedicated gateway (and therefore the integrated cache) once the feature is generally available. For scenarios that require high availability, you should provision 3x the number of dedicated gateway instances needed. For example, if one dedicated gateway node is needed in production, you should provision two additional dedicated gateway nodes to account for possible downtime or outages.
+For scenarios that require high availability, and in order to be covered by the Azure Cosmos DB availability SLA, you should provision at least 3 dedicated gateway nodes. For example, if one dedicated gateway node is needed in production, you should provision two additional dedicated gateway nodes to account for possible downtime, outages, and upgrades. If only one dedicated gateway node is provisioned, you will temporarily lose availability in these scenarios. Additionally, [ensure your dedicated gateway has enough nodes](./integrated-cache.md#i-want-to-understand-if-i-need-to-add-more-dedicated-gateway-nodes) to serve your workload.
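As a sketch, assuming the Azure CLI `az cosmosdb service` command group is available in your CLI version and using placeholder resource names, a three-node dedicated gateway could be provisioned like this:

```azurecli
# A sketch only: provision the dedicated gateway with three nodes so a single
# node outage or upgrade doesn't take the gateway offline.
az cosmosdb service create \
    --resource-group "myResourceGroup" \
    --account-name "mycosmosaccount" \
    --name "SqlDedicatedGateway" \
    --kind "SqlDedicatedGateway" \
    --count 3 \
    --size "Cosmos.D4s"
```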
### The integrated cache is only available for SQL (Core) API right now. Are you planning on releasing it for other APIs as well?
-Expanding the integrated cache beyond SQL API is planned on the long-term roadmap but beyond the initial public preview of the integrated cache.
+Expanding the integrated cache beyond SQL API is planned on the long-term roadmap but is beyond the initial scope of the integrated cache.
### What consistency does the integrated cache support?
The integrated cache supports both session and eventual consistency. You can als
- [Configure the integrated cache](how-to-configure-integrated-cache.md) - [Dedicated gateway](dedicated-gateway.md) - Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- - If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
+ - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
- If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Integrated Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/integrated-cache.md
Previously updated : 09/28/2021 Last updated : 08/29/2022
-# Azure Cosmos DB integrated cache - Overview (Preview)
+# Azure Cosmos DB integrated cache - Overview
[!INCLUDE[appliesto-sql-api](includes/appliesto-sql-api.md)] The Azure Cosmos DB integrated cache is an in-memory cache that helps you ensure manageable costs and low latency as your request volume grows. The integrated cache is easy to set up and you donΓÇÖt need to spend time writing custom code for cache invalidation or managing backend infrastructure. Your integrated cache uses a [dedicated gateway](dedicated-gateway.md) within your Azure Cosmos DB account. The integrated cache is the first of many Azure Cosmos DB features that will utilize a dedicated gateway for improved performance. You can choose from three possible dedicated gateway sizes based on the number of cores and memory needed for your workload.
cosmoscachefeedback@microsoft.com
The main goal of the integrated cache is to reduce costs for read-heavy workloads. Low latency, while helpful, is not the main benefit of the integrated cache because Azure Cosmos DB is already fast without caching.
-Point reads and queries that hit the integrated cache won't use any RUs. In other words, any cache hits will have an RU charge of 0. Cache hits will have a much lower per-operation cost than reads from the backend database.
+Point reads and queries that hit the integrated cache will have an RU charge of 0. Cache hits will have a much lower per-operation cost than reads from the backend database.
Workloads that fit the following characteristics should evaluate if the integrated cache will help lower costs:
The query cache can be used to cache queries. The query cache transforms a query
### Working with the query cache
-You don't need special code when working with the query cache, even if your queries have multiple pages of results. The best practices and code for query pagination are the same, whether your query hits the integrated cache or is executed on the backend query engine.
+You don't need special code when working with the query cache, even if your queries have multiple pages of results. The best practices and code for query pagination are the same whether your query hits the integrated cache or is executed on the backend query engine.
-The query cache will automatically cache query continuation tokens, where applicable. If you have a query with multiple pages of results, any pages that are stored in the integrated cache will have an RU charge of 0. If your subsequent pages of query results require backend execution, they'll have a continuation token from the previous page so they can avoid duplicating previous work.
+The query cache will automatically cache query continuation tokens where applicable. If you have a query with multiple pages of results, any pages that are stored in the integrated cache will have an RU charge of 0. If your subsequent pages of query results require backend execution, they'll have a continuation token from the previous page so they can avoid duplicating previous work.
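For example, a minimal .NET pagination sketch (assuming an existing `Container` named `container` and an illustrative `MyClass` type), as a concrete illustration of the point above:

```csharp
// A sketch only: drain all pages of a query. Pages served from the integrated
// cache report a request charge of 0; pages that require backend execution
// report normal RU charges.
FeedIterator<MyClass> iterator = container.GetItemQueryIterator<MyClass>(
    new QueryDefinition("SELECT * FROM c"));

while (iterator.HasMoreResults)
{
    FeedResponse<MyClass> page = await iterator.ReadNextAsync();
    Console.WriteLine($"Page request charge: {page.RequestCharge}");
}
```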
> [!NOTE]
-> Integrated cache instances within different dedicated gateway nodes have independent caches from one another. If data is cached within one node, it is not necessarily cached in the others.
+> Integrated cache instances within different dedicated gateway nodes have independent caches from one another. If data is cached within one node, it is not necessarily cached in the others. Multiple pages of the same query are not guaranteed to be routed to the same dedicated gateway node.
## Integrated cache consistency
-The integrated cache supports both session and eventual [consistency](consistency-levels.md) only. If a read has consistent prefix, bounded staleness, or strong consistency, it will always bypass the integrated cache.
+The integrated cache supports read requests with session and eventual [consistency](consistency-levels.md) only. If a read has consistent prefix, bounded staleness, or strong consistency, it will always bypass the integrated cache and be served from the backend.
The easiest way to configure either session or eventual consistency for all reads is to [set it at the account-level](consistency-levels.md#configure-the-default-consistency-level). However, if you would only like some of your reads to have a specific consistency, you can also configure consistency at the [request-level](how-to-manage-consistency.md#override-the-default-consistency-level).
+> [!NOTE]
+> Write requests with other consistencies will still populate the cache, but in order to read from the cache the request must have either session or eventual consistency.
+
### Session consistency

[Session consistency](consistency-levels.md#session-consistency) is the most widely used consistency level for both single region as well as globally distributed Azure Cosmos DB accounts. When using session consistency, single client sessions can read their own writes. When using the integrated cache, clients outside of the session performing writes will see eventual consistency.
It's important to understand that the `MaxIntegratedCacheStaleness`, when config
This is an improvement from how most caches work and allows the following additional customization:

- You can set different staleness requirements for each point read or query
-- Different clients, even if they run the same point read or query, can configure different `MaxIntegratedCacheStaleness` values.
-- If you wanted to modify read consistency when using cached data, changing `MaxIntegratedCacheStaleness` will have an immediate effect on read consistency.
+- Different clients, even if they run the same point read or query, can configure different `MaxIntegratedCacheStaleness` values
+- If you wanted to modify read consistency when using cached data, changing `MaxIntegratedCacheStaleness` will have an immediate effect on read consistency
> [!NOTE]
> When not explicitly configured, the MaxIntegratedCacheStaleness defaults to 5 minutes.
To better understand the `MaxIntegratedCacheStaleness` parameter, consider the f
| t = 40 sec | Run Query B with MaxIntegratedCacheStaleness = 60 seconds | Return results from integrated cache (0 RU charge) |
| t = 50 sec | Run Query B with MaxIntegratedCacheStaleness = 20 seconds | Return results from backend database (normal RU charges) and refresh cache |
-> [!NOTE]
-> Customizing `MaxIntegratedCacheStaleness` is only supported in the latest .NET and Java preview SDK's.
-
[Learn to configure the `MaxIntegratedCacheStaleness`.](how-to-configure-integrated-cache.md#adjust-maxintegratedcachestaleness)

## Metrics

When using the integrated cache, it is helpful to monitor some key metrics. The integrated cache metrics include:

-- `DedicatedGatewayAverageCpuUsage` - Average CPU usage across dedicated gateway nodes.
-- `DedicatedGatewayMaxCpuUsage` - Maximum CPU usage across dedicated gateway nodes.
-- `DedicatedGatewayAverageMemoryUsage` - Average memory usage across dedicated gateway nodes, which are used for both routing requests and caching data.
-- `DedicatedGatewayRequests` - Total number of dedicated gateway requests across all dedicated gateway instances.
-- `IntegratedCacheEvictedEntriesSize` – The average amount of data evicted due to LRU from the integrated cache across dedicated gateway nodes. This value does not include data that expired due to exceeding the `MaxIntegratedCacheStaleness` time.
-- `IntegratedCacheItemExpirationCount` - The number of items that are evicted from the integrated cache due to cached point reads exceeding the `MaxIntegratedCacheStaleness` time. This value is an average of integrated cache instances across all dedicated gateway nodes.
-- `IntegratedCacheQueryExpirationCount` - The number of queries that are evicted from the integrated cache due to cached queries exceeding the `MaxIntegratedCacheStaleness` time. This value is an average of integrated cache instances across all dedicated gateway nodes.
+- `DedicatedGatewayCPUUsage` - CPU usage with Avg, Max, or Min Aggregation types for data across all dedicated gateway nodes.
+- `DedicatedGatewayAverageCPUUsage` - (Deprecated) Average CPU usage across all dedicated gateway nodes.
+- `DedicatedGatewayMaximumCPUUsage` - (Deprecated) Maximum CPU usage across all dedicated gateway nodes.
+- `DedicatedGatewayMemoryUsage` - Memory usage with Avg, Max, or Min Aggregation types for data across all dedicated gateway nodes.
+- `DedicatedGatewayAverageMemoryUsage` - (Deprecated) Average memory usage across all dedicated gateway nodes.
+- `DedicatedGatewayRequests` - Total number of dedicated gateway requests across all dedicated gateway nodes.
+- `IntegratedCacheEvictedEntriesSize` – The average amount of data evicted from the integrated cache due to LRU across all dedicated gateway nodes. This value does not include data that expired due to exceeding the `MaxIntegratedCacheStaleness` time.
+- `IntegratedCacheItemExpirationCount` - The average number of items that are evicted from the integrated cache due to cached point reads exceeding the `MaxIntegratedCacheStaleness` time across all dedicated gateway nodes.
+- `IntegratedCacheQueryExpirationCount` - The average number of queries that are evicted from the integrated cache due to cached queries exceeding the `MaxIntegratedCacheStaleness` time across all dedicated gateway nodes.
- `IntegratedCacheItemHitRate` – The proportion of point reads that used the integrated cache (out of all point reads routed through the dedicated gateway with session or eventual consistency). This value is an average of integrated cache instances across all dedicated gateway nodes.
- `IntegratedCacheQueryHitRate` – The proportion of queries that used the integrated cache (out of all queries routed through the dedicated gateway with session or eventual consistency). This value is an average of integrated cache instances across all dedicated gateway nodes.

All existing metrics are available, by default, from the **Metrics** blade (not Metrics classic):
- :::image type="content" source="./media/integrated-cache/integrated-cache-metrics.png" alt-text="An image that shows the location of integrated cache metrics" border="false":::
+ :::image type="content" source="./media/integrated-cache/integrated-cache-metrics.png" alt-text="Screenshot of the Azure portal that shows the location of integrated cache metrics." border="false":::
-Metrics are either an average, maximum, or sum across all dedicated gateway nodes. For example, if you provision a dedicated gateway cluster with five nodes, the metrics reflect the aggregated value across all five nodes. It isn't possible to determine the metric values for each individual nodes.
+Metrics are either an average, maximum, or sum across all dedicated gateway nodes. For example, if you provision a dedicated gateway cluster with five nodes, the metrics reflect the aggregated value across all five nodes. It isn't possible to determine the metric values for each individual node.
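For example, a hedged Azure CLI sketch that pulls one of these metrics for an account (the subscription ID and resource names are placeholders):

```azurecli
# A sketch only: query the aggregated dedicated gateway request count.
az monitor metrics list \
    --resource "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.DocumentDB/databaseAccounts/mycosmosaccount" \
    --metric "DedicatedGatewayRequests" \
    --aggregation Total
```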
## Troubleshooting common issues
If most data is evicted from the cache due to exceeding the `MaxIntegratedCacheS
### I want to understand if I need to add more dedicated gateway nodes
-In some cases, if latency is unexpectedly high, you may need more dedicated gateway nodes rather than bigger nodes. Check the `DedicatedGatewayMaxCpuUsage` and `DedicatedGatewayAverageMemoryUsage` to determine if adding more dedicated gateway nodes would reduce latency. It's good to keep in mind that since all instances of the integrated cache are independent from one another, adding more dedicated gateway nodes won't reduce the `IntegratedCacheEvictedEntriesSize`. Adding more nodes will improve the request volume that your dedicated gateway cluster can handle, though.
+In some cases, if latency is unexpectedly high, you may need more dedicated gateway nodes rather than bigger nodes. Check the `DedicatedGatewayCPUUsage` and `DedicatedGatewayMemoryUsage` to determine if adding more dedicated gateway nodes would reduce latency. It's good to keep in mind that since all instances of the integrated cache are independent from one another, adding more dedicated gateway nodes won't reduce the `IntegratedCacheEvictedEntriesSize`. Adding more nodes will improve the request volume that your dedicated gateway cluster can handle, though.
## Next steps
In some cases, if latency is unexpectedly high, you may need more dedicated gate
- [Configure the integrated cache](how-to-configure-integrated-cache.md) - [Dedicated gateway](dedicated-gateway.md) - Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- - If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
+ - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
- If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Change Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/change-log.md
Last updated 06/22/2022--++ # Change log for Azure Cosmos DB API for MongoDB
cosmos-db How To Use Stored Procedures Triggers Udfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-use-stored-procedures-triggers-udfs.md
const container = client.database("myDatabase").container("myContainer");
const triggerId = "trgPreValidateToDoItemTimestamp";

await container.items.create({
    category: "Personal",
- name : "Groceries",
- description : "Pick up strawberries",
- isComplete : false
+ name: "Groceries",
+ description: "Pick up strawberries",
+ isComplete: false
}, {preTriggerInclude: [triggerId]});
```
cosmos-db Modeling Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/modeling-data.md
Just as there's no single way to represent a piece of data on a screen, there's
## Next steps
-* To learn more about Azure Cosmos DB, refer to the service's [documentation](https://azure.microsoft.com/documentation/services/cosmos-db/) page.
+* To learn more about Azure Cosmos DB, refer to the service's [documentation](/azure/cosmos-db/) page.
* To understand how to shard your data across multiple partitions, refer to [Partitioning Data in Azure Cosmos DB](../partitioning-overview.md).
cosmos-db Powerbi Visualize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/powerbi-visualize.md
To build a Power BI report/dashboard:
## Next steps * To learn more about Power BI, see [Get started with Power BI](https://powerbi.microsoft.com/documentation/powerbi-service-get-started/).
-* To learn more about Azure Cosmos DB, see the [Azure Cosmos DB documentation landing page](https://azure.microsoft.com/documentation/services/cosmos-db/).
+* To learn more about Azure Cosmos DB, see the [Azure Cosmos DB documentation landing page](/azure/cosmos-db/).
cosmos-db Sql Api Java Sdk Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-java-sdk-samples.md
The Query Samples files for [sync](https://github.com/Azure-Samples/azure-cosmos
<!-- | Query with ORDER BY for partitioned collections | CosmosContainer.queryItems <br> CosmosAsyncContainer.queryItems | --> ## Change feed examples
-The [Change Feed Processor Sample](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/jav) and [Change feed processor](https://docs.microsoft.com/azure/cosmos-db/sql/change-feed-processor?tabs=java).
+The [Change Feed Processor Sample](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/jav) and [Change feed processor](/azure/cosmos-db/sql/change-feed-processor?tabs=java).
| Task | API reference | | | |
The User Management Sample file shows how to do the following tasks:
Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Sql Query Index Of https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-index-of.md
Previously updated : 09/13/2019 Last updated : 08/30/2022
+
# INDEX_OF (Azure Cosmos DB)
+
[!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
- Returns the starting position of the first occurrence of the second string expression within the first specified string expression, or -1 if the string is not found.
-
+Returns the starting position of the first occurrence of the second string expression within the first specified string expression, or `-1` if the string isn't found.
+ ## Syntax
-
+ ```sql
-INDEX_OF(<str_expr1>, <str_expr2> [, <numeric_expr>])
-```
-
+INDEX_OF(<str_expr1>, <str_expr2> [, <numeric_expr>])
+```
+ ## Arguments
-
-*str_expr1*
- Is the string expression to be searched.
-
-*str_expr2*
- Is the string expression to search for.
+
+*str_expr1*
+ Is the string expression to be searched.
+
+*str_expr2*
+ Is the string expression to search for.
*numeric_expr*
- Optional numeric expression that sets the position the search will start. The first position in *str_expr1* is 0.
-
+ Optional numeric expression that sets the position where the search will start. The first position in *str_expr1* is 0.
+ ## Return types
-
- Returns a numeric expression.
-
+
+Returns a numeric expression.
+ ## Examples
-
- The following example returns the index of various substrings inside "abc".
-
+
+The following example returns the index of various substrings inside "abc".
+ ```sql
-SELECT INDEX_OF("abc", "ab") AS i1, INDEX_OF("abc", "b") AS i2, INDEX_OF("abc", "c") AS i3
-```
-
- Here is the result set.
-
+SELECT
+ INDEX_OF("abc", "ab") AS index_of_prefix,
+ INDEX_OF("abc", "b") AS index_of_middle,
+ INDEX_OF("abc", "c") AS index_of_last,
+ INDEX_OF("abc", "d") AS index_of_missing
+```
+
+Here's the result set.
+ ```json
-[{"i1": 0, "i2": 1, "i3": -1}]
-```
+[
+ {
+ "index_of_prefix": 0,
+ "index_of_middle": 1,
+ "index_of_last": 2,
+ "index_of_missing": -1
+ }
+]
+```
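The optional third argument offsets where the search begins; a small sketch of that behavior (the input values are illustrative):

```sql
SELECT
    INDEX_OF("abcabc", "a") AS first_match,
    INDEX_OF("abcabc", "a", 1) AS match_after_position_one
```

Here the second expression skips the match at position 0 and returns 3, the position of the next occurrence.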
## Next steps
cosmos-db How To Use C Plus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-c-plus.md
Follow these links to learn more about Azure Storage and the Table API in Azure
* [Introduction to the Table API](introduction.md) * [List Azure Storage resources in C++](../../storage/common/storage-c-plus-plus-enumeration.md) * [Storage Client Library for C++ reference](https://azure.github.io/azure-storage-cpp)
-* [Azure Storage documentation](https://azure.microsoft.com/documentation/services/storage/)
+* [Azure Storage documentation](/azure/storage/)
cost-management-billing Troubleshoot Declined Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/troubleshoot-declined-card.md
Previously updated : 04/22/2022 Last updated : 08/30/2022
When you choose a card, Azure displays the card options that are valid in the co
## You're using a virtual or prepaid card
-Prepaid and virtual cards aren't accepted as payment for Azure subscriptions.
+Prepaid and virtual cards are not accepted as payment for Azure subscriptions.
## Your credit information is inaccurate or incomplete
cost-management-billing Prepay App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepay-app-service.md
You can buy Isolated Stamp reserved capacity in the [Azure portal](https://porta
- **Management group** - Applies the reservation discount to the matching resource in the list of subscriptions that are a part of both the management group and billing scope. 1. Select a **Region** to choose an Azure region that's covered by the reserved capacity and add the reservation to the cart. 1. Select an Isolated Plan type and then select **Select**.
- ![Example ](./media/prepay-app-service/app-service-isolated-stamp-select.png)
+ ![Example](./media/prepay-app-service/app-service-isolated-stamp-select.png)
1. Enter the quantity of App Service Isolated stamps to reserve. For example, a quantity of three would give you three reserved stamps a region. Select **Next: Review + Buy**. 1. Review and select **Buy now**.
cost-management-billing Reservation Renew https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-renew.md
Previously updated : 06/17/2022 Last updated : 08/29/2022
Emails are sent to different people depending on your purchase method:
- Individual subscription customers with pay-as-you-go rates - Emails are sent to users who are set up as account administrators. - Cloud Solution Provider customers - Emails are sent to the partner notification contact. This notification isn't currently supported for Microsoft Customer Agreement subscriptions (CSP Azure Plan subscription).
+Renewal notifications are not sent to any Microsoft Customer Agreement (Azure Plan) users.
+ ## Next steps - To learn more about Azure Reservations, see [What are Azure Reservations?](save-compute-costs-reservations.md)
data-factory Ci Cd Github Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ci-cd-github-troubleshoot-guide.md
If you are using old default parameterization template, new way to include globa
Default parameterization template should include all values from global parameter list. #### Resolution
-Use updated [default parameterization template.](https://docs.microsoft.com/azure/data-factory/continuous-integration-delivery-resource-manager-custom-parameters#default-parameterization-template) as one time migration to new method of including global parameters. This template references to all values in global parameter list. You also have to update the deployment task in the **release pipeline** if you are already overriding the template parameters there.
+Use the updated [default parameterization template](/azure/data-factory/continuous-integration-delivery-resource-manager-custom-parameters#default-parameterization-template) as a one-time migration to the new method of including global parameters. This template references all values in the global parameter list. You also have to update the deployment task in the **release pipeline** if you are already overriding the template parameters there.
### Error code: InvalidTemplate
data-factory Transform Data Using Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-machine-learning.md
Last updated 09/09/2021
> [!NOTE] > Since Machine Learning Studio (classic) resources can no longer be created after 1 Dec, 2021, users are encouraged to use [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/) with the [Machine Learning Execute Pipeline activity](transform-data-machine-learning-service.md) rather than using the Batch Execution activity to execute Machine Learning Studio (classic) batches.
-[ML Studio (classic)](https://azure.microsoft.com/documentation/services/machine-learning/) enables you to build, test, and deploy predictive analytics solutions. From a high-level point of view, it is done in three steps:
+[ML Studio (classic)](/azure/machine-learning/) enables you to build, test, and deploy predictive analytics solutions. From a high-level point of view, it is done in three steps:
1. **Create a training experiment**. You do this step by using the ML Studio (classic). ML Studio (classic) is a collaborative visual development environment that you use to train and test a predictive analytics model using training data. 2. **Convert it to a predictive experiment**. Once your model has been trained with existing data and you are ready to use it to score new data, you prepare and streamline your experiment for scoring. 3. **Deploy it as a web service**. You can publish your scoring experiment as an Azure web service. You can send data to your model via this web service end point and receive result predictions from the model. ### Using Machine Learning Studio (classic) with Azure Data Factory or Synapse Analytics
-Azure Data Factory and Synapse Analytics enable you to easily create pipelines that use a published [Machine Learning Studio (classic)](https://azure.microsoft.com/documentation/services/machine-learning) web service for predictive analytics. Using the **Batch Execution Activity** in a pipeline, you can invoke Machine Learning Studio (classic) web service to make predictions on the data in batch.
+Azure Data Factory and Synapse Analytics enable you to easily create pipelines that use a published [Machine Learning Studio (classic)](/azure/machine-learning) web service for predictive analytics. Using the **Batch Execution Activity** in a pipeline, you can invoke Machine Learning Studio (classic) web service to make predictions on the data in batch.
Over time, the predictive models in the Machine Learning Studio (classic) scoring experiments need to be retrained using new input datasets. You can retrain a model from a pipeline by doing the following steps:
data-factory Data Factory Azure Ml Batch Execution Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-azure-ml-batch-execution-activity.md
Last updated 10/22/2021
> This article applies to version 1 of Data Factory. If you are using the current version of the Data Factory service, see [transform data using machine learning in Data Factory](../transform-data-using-machine-learning.md). ### Machine Learning Studio (classic)
-[ML Studio (classic)](https://azure.microsoft.com/documentation/services/machine-learning/) enables you to build, test, and deploy predictive analytics solutions. From a high-level point of view, it is done in three steps:
+[ML Studio (classic)](/azure/machine-learning/) enables you to build, test, and deploy predictive analytics solutions. From a high-level point of view, it is done in three steps:
1. **Create a training experiment**. You do this step by using ML Studio (classic). Studio (classic) is a collaborative visual development environment that you use to train and test a predictive analytics model using training data. 2. **Convert it to a predictive experiment**. Once your model has been trained with existing data and you are ready to use it to score new data, you prepare and streamline your experiment for scoring.
You can also use [Data Factory Functions](data-factory-functions-variables.md) i
[adf-build-1st-pipeline]: data-factory-build-your-first-pipeline.md
-[azure-machine-learning]: https://azure.microsoft.com/services/machine-learning/
+[azure-machine-learning]: https://azure.microsoft.com/services/machine-learning/
data-factory Data Factory Data Processing Using Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-data-processing-using-batch.md
After you process data, you can consume it with online tools such as Power BI. H
* [Azure and Power BI: Basic overview](https://powerbi.microsoft.com/documentation/powerbi-azure-and-power-bi/) ## References
-* [Azure Data Factory](https://azure.microsoft.com/documentation/services/data-factory/)
+* [Azure Data Factory](/azure/data-factory/)
* [Introduction to the Data Factory service](data-factory-introduction.md) * [Get started with Data Factory](data-factory-build-your-first-pipeline.md) * [Use custom activities in a Data Factory pipeline](data-factory-use-custom-activities.md)
-* [Azure Batch](https://azure.microsoft.com/documentation/services/batch/)
+* [Azure Batch](/azure/batch/)
* [Basics of Batch](/azure/azure-sql/database/sql-database-paas-overview) * [Overview of Batch features](../../batch/batch-service-workflow-features.md))
databox-online Azure Stack Edge Gpu Deploy Iot Edge Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md
Previously updated : 06/28/2022 Last updated : 08/30/2022
Deploying the IoT Edge runtime is part of VM creation, using the *cloud-init* sc
Here are the high-level steps to deploy the VM and IoT Edge runtime:
-1. In the [Azure portal](https://portal.azure.com), go to Azure Marketplace.
- 1. Connect to the Azure Cloud Shell or a client with Azure CLI installed. For detailed steps, see [Quickstart for Bash in Azure Cloud Shell](../cloud-shell/quickstart.md).
- 1. Use steps in [Search for Azure Marketplace images](azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md#search-for-azure-marketplace-images) to search the Azure Marketplace for the following Ubuntu 20.04 LTS image:
+1. Acquire the Ubuntu VM image from Azure Marketplace. For detailed steps, follow the instructions in [Use Azure Marketplace image to create VM image for your Azure Stack Edge](azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md).
+
+ 1. In the [Azure portal](https://portal.azure.com), go to Azure Marketplace.
+ 1. Connect to the Azure Cloud Shell or a client with Azure CLI installed. For detailed steps, see [Quickstart for Bash in Azure Cloud Shell](../cloud-shell/quickstart.md).
+
+ > [!NOTE]
+ > Closing the shell session will delete all variables created during the shell session. Reopening the session will require recreating the variables.
+
+ 1. Run the following command to set the subscription.
+
+ ```
+ az account set --subscription <subscription id>
+ ```
+
+2. Use steps in [Search for Azure Marketplace images](azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md#search-for-azure-marketplace-images) to search the Azure Marketplace for an Ubuntu 20.04 LTS image.
+
+ Example of an Ubuntu 20.04 LTS image:
- ```azurecli
- $urn = Canonical:0001-com-ubuntu-server-focal:20_04-lts:20.04.202007160
- ```
+ ```
+ $urn = Canonical:0001-com-ubuntu-server-focal:20_04-lts:20.04.202007160
+ ```
- 1. Create a new managed disk from the Marketplace image.
+3. Create a new managed disk from the Marketplace image. For detailed steps, see [Use Azure Marketplace image to create VM image for your Azure Stack Edge](azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md).
- 1. Export a VHD from the managed disk to an Azure Storage account.
-
- For detailed steps, follow the instructions in [Use Azure Marketplace image to create VM image for your Azure Stack Edge](azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md).
+4. Export a VHD from the managed disk to an Azure Storage account. For detailed steps, see [Export a VHD from the managed disk to Azure Storage](azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md#export-a-vhd-from-the-managed-disk-to-azure-storage).
-1. Follow these steps to create an Ubuntu VM using the VM image.
+5. Follow these steps to create an Ubuntu VM using the VM image.
1. Specify the *cloud-init* script on the **Advanced** tab. To create a VM, see [Deploy GPU VM via Azure portal](azure-stack-edge-gpu-deploy-gpu-virtual-machine.md?tabs=portal) or [Deploy VM via Azure portal](azure-stack-edge-gpu-deploy-virtual-machine-portal.md). ![Screenshot of the Advanced tab of VM configuration in the Azure portal.](media/azure-stack-edge-gpu-deploy-iot-edge-linux-vm/azure-portal-create-vm-advanced-page-2.png)
Use these steps to verify that your IoT Edge runtime is running.
![Screenshot of the IoT Edge runtime status in the Azure portal.](media/azure-stack-edge-gpu-deploy-iot-edge-linux-vm/azure-portal-iot-edge-runtime-status.png)
+ To troubleshoot your IoT Edge device configuration, see [Troubleshoot your IoT Edge device](../iot-edge/troubleshoot.md?view=iotedge-2020-11&tabs=linux&preserve-view=true).
+
+ <!-- Cannot get the link to render properly for version at https://docs.microsoft.com/azure/iot-edge/troubleshoot?view=iotedge-2020-11 -->
+ ## Update the IoT Edge runtime To update the VM, follow the instructions in [Update IoT Edge](../iot-edge/how-to-update-iot-edge.md?view=iotedge-2020-11&tabs=linux&preserve-view=true). To find the latest version of Azure IoT Edge, see [Azure IoT Edge releases](../iot-edge/how-to-update-iot-edge.md?view=iotedge-2020-11&tabs=linux&preserve-view=true).
To update the VM, follow the instructions in [Update IoT Edge](../iot-edge/how-t
To deploy and run an IoT Edge module on your Ubuntu VM, see the steps in [Deploy IoT Edge modules](../iot-edge/how-to-deploy-modules-portal.md?view=iotedge-2020-11&preserve-view=true). To deploy Nvidia's DeepStream module, see [Deploy the Nvidia DeepStream module on Ubuntu VM on Azure Stack Edge Pro with GPU](azure-stack-edge-deploy-nvidia-deepstream-module.md).
+
+To deploy NVIDIA DIGITS, see [Enable a GPU in a prefabricated NVIDIA module](/azure/iot-edge/configure-connect-verify-gpu?view=iotedge-2020-11&preserve-view=true#enable-a-gpu-in-a-prefabricated-nvidia-module).
ddos-protection Ddos Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-overview.md
The following table shows features and corresponding SKUs.
Simplified configuration immediately protects all resources on a virtual network as soon as DDoS Protection Standard is enabled. No intervention or user definition is required. ### Multi-Layered protection:
-When deployed with a web application firewall (WAF), DDoS Protection Standard protects both at the network layer (Layer 3 and 4, offered by Azure DDoS Protection Standard) and at the application layer (Layer 7, offered by a WAF). WAF offerings include Azure [Application Gateway WAF SKU](../web-application-firewall/ag/ag-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) and third-party web application firewall offerings available in the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?page=1&search=web%20application%20firewall).
+When deployed with a web application firewall (WAF), DDoS Protection Standard protects both at the network layer (Layer 3 and 4, offered by Azure DDoS Protection Standard) and at the application layer (Layer 7, offered by a WAF). WAF offerings include Azure [Application Gateway WAF SKU](../web-application-firewall/ag/ag-overview.md?toc=/azure/virtual-network/toc.json) and third-party web application firewall offerings available in the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?page=1&search=web%20application%20firewall).
### Extensive mitigation scale All L3/L4 attack vectors can be mitigated, with global capacity, to protect against the largest known DDoS attacks.
ddos-protection Ddos Protection Reference Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-reference-architectures.md
In this architecture, DDoS Protection Standard is enabled on the virtual network
### PaaS web application
-This reference architecture shows running an Azure App Service application in a single region. This architecture shows a set of proven practices for a web application that uses [Azure App Service](https://azure.microsoft.com/documentation/services/app-service/) and [Azure SQL Database](https://azure.microsoft.com/documentation/services/sql-database/).
+This reference architecture shows running an Azure App Service application in a single region. This architecture shows a set of proven practices for a web application that uses [Azure App Service](/azure/app-service/) and [Azure SQL Database](/azure/sql-database/).
A standby region is set up for failover scenarios. ![Diagram of the reference architecture for a PaaS web application](./media/ddos-best-practices/image-11.png)
This reference architecture shows configuring DDoS Protection Standard for an [A
In this architecture, traffic destined to the HDInsight cluster from the internet is routed to the public IP associated with the HDInsight gateway load balancer. The gateway load balancer then sends the traffic to the head nodes or the worker nodes directly. Because DDoS Protection Standard is enabled on the HDInsight virtual network, all public IPs in the virtual network get DDoS protection for Layer 3 and 4. This reference architecture can be combined with the N-Tier and multi-region reference architectures.
-For more information on this reference architecture, see the [Extend Azure HDInsight using an Azure Virtual Network](../hdinsight/hdinsight-plan-virtual-network-deployment.md?toc=%2fazure%2fvirtual-network%2ftoc.json)
+For more information on this reference architecture, see the [Extend Azure HDInsight using an Azure Virtual Network](../hdinsight/hdinsight-plan-virtual-network-deployment.md?toc=/azure/virtual-network/toc.json)
documentation.
defender-for-cloud Defender For Servers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-servers-introduction.md
Defender for Servers provides two plans you can choose from:
- **Licensing**: Charges Defender for Endpoint licenses per hour instead of per seat, lowering costs by protecting virtual machines only when they are in use.
- **Plan 2**
  - **Plan 1**: Includes everything in Defender for Servers Plan 1.
- - **Additional features**: All other enhanced Defender for Servers security capabilities for Windows and Linux machines running in Azure, AWS, GCP, and on-premises.
+ - **Additional features**: All other enhanced Defender for Servers security features.
## Plan features
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
You can also enable the MDE unified solution at scale through the supplied REST
This is an example request body for the PUT request to enable the MDE unified solution:
-URI: `https://management.microsoft.com/subscriptions/<subscriptionId>/providers/Microsoft.Security/settings&api-version=2022-05-01-preview`
+URI: `https://management.azure.com/subscriptions/<subscriptionId>/providers/Microsoft.Security/settings?api-version=2022-05-01-preview`
```json {
defender-for-cloud Multi Factor Authentication Enforcement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/multi-factor-authentication-enforcement.md
To see which accounts don't have MFA enabled, use the following Azure Resource G
```kusto securityresources | where type == "microsoft.security/assessments"
- | where properties.displayName == "MFA should be enabled on accounts with owner permissions on your subscription"
+ | where properties.displayName == "MFA should be enabled on accounts with owner permissions on subscriptions"
| where properties.status.code == "Unhealthy" ```
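As a convenience, the same query can be run from PowerShell with the `Search-AzGraph` cmdlet (assumes the Az.ResourceGraph module is installed):

```powershell
# Run the Azure Resource Graph query above and return the unhealthy assessments.
$query = @'
securityresources
| where type == "microsoft.security/assessments"
| where properties.displayName == "MFA should be enabled on accounts with owner permissions on subscriptions"
| where properties.status.code == "Unhealthy"
'@

Search-AzGraph -Query $query
```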
event-grid Partner Events Overview For Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/partner-events-overview-for-partners.md
A verified partner is a partner organization whose identity has been validated b
Customers authorize you to create partner topics or partner destinations on their Azure subscription. The authorization is granted for a given resource group in a customer Azure subscription and it's time bound. You must create the channel before the expiration date set by the customer. You should have documentation suggesting the customer an adequate window of time for configuring your system to send or receive events and to create the channel before the authorization expires. If you attempt to create a channel without authorization or after it has expired, the channel creation will fail and no resource will be created on the customer's Azure subscription. > [!NOTE]
-> Event Grid will start **requiring authorizations to create partner topics or partner destinations** around June 30th, 2022. You should update your documentation asking your customers to grant you the authorization before you attempt to create a channel or an event channel.
+> Event Grid started **enforcing authorization checks to create partner topics or partner destinations** around June 30th, 2022. You should update your documentation asking your customers to grant you the authorization before you attempt to create a channel or an event channel.
>[!IMPORTANT] > **A verified partner is not an authorized partner**. Even if a partner has been vetted by Microsoft, you still need to be authorized before you can create a partner topic or partner destination on the customer's Azure subscription.
event-grid Subscribe To Partner Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-partner-events.md
You must grant your consent to the partner to create partner topics in a resourc
> For a greater security stance, specify the minimum expiration time that offers the partner enough time to configure your events to flow to Event Grid and to provision your partner topic. > [!NOTE]
-> Event Grid will start requiring authorizations to create partner topics or partner destinations around June 30th, 2022. Meanwhile, requiring your (subscriber's) authorization for a partner to create resources on your Azure subscription is an **optional** feature. We encourage you to opt-in to use this feature and try to use it in non-production Azure subscriptions before it becomes a mandatory step around June 30th, 2022. To opt-in to this feature, reach out to [mailto:askgrid@microsoft.com](mailto:askgrid@microsoft.com) using the subject line **Request to enforce partner authorization on my Azure subscription(s)** and provide your Azure subscription(s) in the email.
+> Event Grid started enforcing authorization checks to create partner topics or partner destinations around June 30th, 2022.
The following example shows how to create a partner configuration resource that contains the partner authorization. You must identify the partner by providing either its **partner registration ID** or the **partner name**. Both can be obtained from your partner, but only one of them is required. For your convenience, the following examples include a sample expiration time in UTC format.
event-hubs Event Hubs Kafka Spark Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-kafka-spark-tutorial.md
In this tutorial, you learn how to:
Before you start this tutorial, make sure that you have: - Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/). - [Apache Spark v2.4](https://spark.apache.org/downloads.html)-- [Apache Kafka v2.0]( https://kafka.apache.org/20/documentation.html)
+- [Apache Kafka v2.0](https://kafka.apache.org/20/documentation.html)
- [Git](https://www.git-scm.com/downloads) > [!NOTE]
To learn more about Event Hubs and Event Hubs for Kafka, see the following artic
- [Explore samples on our GitHub](https://github.com/Azure/azure-event-hubs-for-kafka) - [Connect Akka Streams to an event hub](event-hubs-kafka-akka-streams-tutorial.md) - [Apache Kafka developer guide for Azure Event Hubs](apache-kafka-developer-guide.md)-
expressroute Expressroute For Cloud Solution Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-for-cloud-solution-providers.md
The choices between these two options are based on your customer's needs and y
* **Azure role-based access control (Azure RBAC)** – Azure RBAC is based on Azure Active Directory. For more information on Azure RBAC, see [here](../role-based-access-control/role-assignments-portal.md). * **Networking** – Covers the various topics of networking in Microsoft Azure.
-* **Azure Active Directory (Azure AD)** – Azure AD provides the identity management for Microsoft Azure and third-party SaaS applications. For more information about Azure AD, see [here](https://azure.microsoft.com/documentation/services/active-directory/).
+* **Azure Active Directory (Azure AD)** – Azure AD provides the identity management for Microsoft Azure and third-party SaaS applications. For more information about Azure AD, see [here](/azure/active-directory/).
## Network speeds ExpressRoute supports network speeds from 50 Mb/s to 10 Gb/s. This allows customers to purchase the amount of network bandwidth needed for their unique environment.
Additional Information can be found at the following links:
[Azure in Cloud Solution Provider program](/azure/cloud-solution-provider). [Get ready to transact as a Cloud Solution Provider](https://partner.microsoft.com/solutions/cloud-reseller-pre-launch).
-[Microsoft Cloud Solution Provider resources](https://partner.microsoft.com/solutions/cloud-reseller-resources).
+[Microsoft Cloud Solution Provider resources](https://partner.microsoft.com/solutions/cloud-reseller-resources).
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **China Mobile International** |Supported |Supported | Hong Kong, Hong Kong2, Singapore | | **China Telecom Global** |Supported |Supported | Hong Kong, Hong Kong2 | | **[China Unicom Global](https://cloudbond.chinaunicom.cn/home-en)** |Supported |Supported | Frankfurt, Hong Kong, Singapore2, Tokyo2 |
-| **[Chunghwa Telecom](https://www.cht.com.tw/en/home/cht/about-cht/products-and-services/International/Cloud-Service)** |Supported |Supported | Taipei |
+| **Chunghwa Telecom** |Supported |Supported | Taipei |
| **[Claro](https://www.usclaro.com/enterprise-mnc/connectivity/mpls/)** |Supported |Supported | Miami | | **[Cologix](https://cologix.com/connectivity/cloud/cloud-connect/microsoft-azure/)** |Supported |Supported | Chicago, Dallas, Minneapolis, Montreal, Toronto, Vancouver, Washington DC | | **[Colt](https://www.colt.net/direct-connect/azure/)** |Supported |Supported | Amsterdam, Amsterdam2, Berlin, Chicago, Dublin, Frankfurt, Geneva, Hong Kong, London, London2, Marseille, Milan, Munich, Newport, Osaka, Paris, Seoul, Silicon Valley, Singapore2, Tokyo, Tokyo2, Washington DC, Zurich |
The following table shows locations by service provider. If you want to view ava
| **[Transtelco](https://transtelco.net/enterprise-services/)** |Supported |Supported | Dallas, Queretaro(Mexico)| | **[T-Mobile/Sprint](https://www.t-mobile.com/business/solutions/networking/cloud-networking )** |Supported |Supported | Chicago, Silicon Valley, Washington DC | | **[T-Systems](https://geschaeftskunden.telekom.de/vernetzung-digitalisierung/produkt/intraselect)** |Supported |Supported | Frankfurt |
-| **[UOLDIVEO](https://www.uoldiveo.com.br/)** |Supported |Supported | Sao Paulo |
+| **UOLDIVEO** |Supported |Supported | Sao Paulo |
| **[UIH](https://www.uih.co.th/en/network-solutions/global-network/cloud-direct-for-microsoft-azure-expressroute)** | Supported | Supported | Bangkok | | **[Verizon](https://enterprise.verizon.com/products/network/application-enablement/secure-cloud-interconnect/)** |Supported |Supported | Amsterdam, Chicago, Dallas, Hong Kong SAR, London, Mumbai, Silicon Valley, Singapore, Sydney, Tokyo, Toronto, Washington DC | | **[Viasat](http://www.directcloud.viasatbusiness.com/)** | Supported | Supported | Washington DC2 |
expressroute How To Custom Route Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/how-to-custom-route-alert.md
In order to create an Automation account, you need privileges and permissions. F
### <a name="about"></a>1. Create an automation account
-Create an Automation account with run-as permissions. For instructions, see [Create an Azure Automation account](../automation/quickstarts/create-account-portal.md).
+Create an Automation account with run-as permissions. For instructions, see [Create an Azure Automation account](../automation/quickstarts/create-azure-automation-account-portal.md).
:::image type="content" source="./media/custom-route-alert-portal/create-account.png" alt-text="Add automation account" lightbox="./media/custom-route-alert-portal/create-account-expand.png":::
The final step is the workflow validation. In **Logic Apps Overview**, select **
## Next steps
-To learn more about how to customize the workflow, see [Azure Logic Apps](../logic-apps/logic-apps-overview.md).
+To learn more about how to customize the workflow, see [Azure Logic Apps](../logic-apps/logic-apps-overview.md).
expressroute Howto Routing Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/howto-routing-cli.md
This section helps you create, get, update, and delete the Microsoft peering con
> [!IMPORTANT] > Microsoft peering of ExpressRoute circuits that were configured prior to August 1, 2017 will have all service prefixes advertised through the Microsoft peering, even if route filters are not defined. Microsoft peering of ExpressRoute circuits that are configured on or after August 1, 2017 will not have any prefixes advertised until a route filter is attached to the circuit. For more information, see [Configure a route filter for Microsoft peering](how-to-routefilter-powershell.md).
->
- ### To create Microsoft peering
firewall Protect Azure Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/protect-azure-virtual-desktop.md
Azure Virtual Desktop is a desktop and app virtualization service that runs on A
[ ![Azure Virtual Desktop architecture](media/protect-windows-virtual-desktop/windows-virtual-desktop-architecture-diagram.png) ](media/protect-windows-virtual-desktop/windows-virtual-desktop-architecture-diagram.png#lightbox)
-Follow the guidelines in this article to provide additional protection for your Azure Virtual Desktop host pool using Azure Firewall.
+Follow the guidelines in this article to provide extra protection for your Azure Virtual Desktop host pool using Azure Firewall.
## Prerequisites - A deployed Azure Virtual Desktop environment and host pool.
+ - An Azure Firewall deployed with at least one Firewall Manager Policy.
+ - DNS and DNS Proxy enabled in the Firewall Policy to use [FQDN in Network Rules](../firewall/fqdn-filtering-network-rules.md).
- For more information, see [Tutorial: Create a host pool by using the Azure portal](../virtual-desktop/create-host-pools-azure-marketplace.md)
+For more information, see [Tutorial: Create a host pool by using the Azure portal](../virtual-desktop/create-host-pools-azure-marketplace.md)
To learn more about Azure Virtual Desktop environments see [Azure Virtual Desktop environment](../virtual-desktop/environment-setup.md).
To learn more about Azure Virtual Desktop environments see [Azure Virtual Deskto
The Azure virtual machines you create for Azure Virtual Desktop must have access to several Fully Qualified Domain Names (FQDNs) to function properly. Azure Firewall provides an Azure Virtual Desktop FQDN Tag to simplify this configuration. Use the following steps to allow outbound Azure Virtual Desktop platform traffic:
-You will need to create an Azure Firewall Policy and create Rule Collections for Network Rules and Applications Rules. Give the Rule Collection a priority and an allow or deny action.
+You'll need to create an Azure Firewall Policy and create Rule Collections for Network Rules and Applications Rules. Give the Rule Collection a priority and an allow or deny action.
+To identify a specific AVD host pool as the "Source" in the tables below, you can create an [IP Group](../firewall/ip-groups.md) to represent it.
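A minimal sketch of creating such an IP Group (hypothetical names; assumes the Az.Network module):

```powershell
# IP Group representing the AVD host pool subnet; reusable as "Source" in the rules below.
New-AzIpGroup -Name "ipgroup-avd-hostpool" -ResourceGroupName "myRg" `
    -Location "eastus" -IpAddress "10.0.0.0/24"
```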
### Create network rules
-| Name | Source type | Source | Protocol | Destination ports | Destination type | Destination |
-| | -- | - | -- | -- | - | |
-| Rule Name | IP Address | VNet or Subnet IP Address | TCP | 80 | IP Address | 169.254.169.254, 168.63.129.16 |
-| Rule Name | IP Address | VNet or Subnet IP Address | TCP | 443 | Service Tag | AzureCloud, WindowsVirtualDesktop, AzureFrontDoor.Frontend |
-| Rule Name | IP Address | VNet or Subnet IP Address | TCP, UDP | 53 | IP Address | * |
-|Rule name | IP Address | VNet or Subnet IP Address | TCP | 1688 | IP address | 20.118.99.224, 40.83.235.53 (azkms.core.windows.net)|
-|Rule name | IP Address | VNet or Subnet IP Address | TCP | 1688 | IP address | 23.102.135.246 (kms.core.windows.net)|
+Based on the Azure Virtual Desktop (AVD) [reference article](../virtual-desktop/safe-url-list.md), these are the ***mandatory*** rules to allow outbound access to the control plane and core dependent services:
+
+| Name | Source type | Source | Protocol | Destination ports | Destination type | Destination |
+| | -- | - | -- | -- | - | |
+| Rule Name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 443 | FQDN | login.microsoftonline.com |
+| Rule Name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 80 | IP Address | 169.254.169.254, 168.63.129.16 |
+| Rule Name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 443 | Service Tag | WindowsVirtualDesktop, AzureFrontDoor.Frontend, AzureMonitor |
+| Rule Name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP, UDP | 53 | IP Address | * |
+| Rule name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 1688 | IP address | 20.118.99.224, 40.83.235.53 (azkms.core.windows.net) |
+| Rule name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 1688 | IP address | 23.102.135.246 (kms.core.windows.net) |
+| Rule name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 443 | FQDN | mrsglobalsteus2prod.blob.core.windows.net |
+| Rule name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 443 | FQDN | wvdportalstorageblob.blob.core.windows.net |
+| Rule name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 80 | FQDN | oneocsp.microsoft.com |
+| Rule name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 80 | FQDN | www.microsoft.com |
> [!NOTE] > Some deployments might not need DNS rules. For example, Azure Active Directory Domain controllers forward DNS queries to Azure DNS at 168.63.129.16.
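As an illustration, here's a hedged PowerShell sketch (hypothetical names; assumes the Az.Network module, an existing firewall policy, and the DNS proxy prerequisite above, which FQDN-based network rules require) that creates the first mandatory rule from the table and attaches it to the policy:

```powershell
# Reuse the IP Group created earlier as the rule source.
$avdHosts = Get-AzIpGroup -ResourceGroupName "myRg" -Name "ipgroup-avd-hostpool"

# Network rule allowing the host pool to reach login.microsoftonline.com on TCP 443.
$rule = New-AzFirewallPolicyNetworkRule -Name "Allow-AVD-Login" `
    -SourceIpGroup $avdHosts.Id -Protocol TCP -DestinationPort 443 `
    -DestinationFqdn "login.microsoftonline.com"

# Wrap the rule in an allow collection and attach it to the firewall policy.
$collection = New-AzFirewallPolicyFilterRuleCollection -Name "AVD-Mandatory-Network" `
    -Priority 200 -ActionType Allow -Rule $rule

$policy = Get-AzFirewallPolicy -ResourceGroupName "myRg" -Name "myFirewallPolicy"
New-AzFirewallPolicyRuleCollectionGroup -Name "AVD-RuleCollectionGroup" -Priority 200 `
    -RuleCollection $collection -FirewallPolicyObject $policy
```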
+The Azure Virtual Desktop (AVD) official documentation lists the following network rules as **optional**, depending on usage and scenario:
+
+| Name | Source type | Source | Protocol | Destination ports | Destination type | Destination |
+| -| -- | - | -- | -- | - | |
+| Rule Name | IP Address or Group | IP Group or VNet or Subnet IP Address | UDP | 123 | FQDN | time.windows.com |
+| Rule Name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 443 | FQDN | login.windows.net |
+| Rule Name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 443 | FQDN | www.msftconnecttest.com |
++ ### Create application rules
-| Name | Source type | Source | Protocol | Destination type | Destination |
-| | -- | - | - | - | - |
-| Rule Name | IP Address | VNet or Subnet IP Address | Https:443 | FQDN Tag | WindowsVirtualDesktop, WindowsUpdate, Windows Diagnostics, MicrosoftActiveProtectionService |
+The Azure Virtual Desktop (AVD) official documentation lists the following application rules as **optional**, depending on usage and scenario:
+
+| Name | Source type | Source | Protocol | Destination type | Destination |
+| | -- | --| - | - | - |
+| Rule Name | IP Address or Group | VNet or Subnet IP Address | Https:443 | FQDN Tag | WindowsUpdate, Windows Diagnostics, MicrosoftActiveProtectionService |
+| Rule Name | IP Address or Group | VNet or Subnet IP Address | Https:443 | FQDN | *.events.data.microsoft.com |
+| Rule Name | IP Address or Group | VNet or Subnet IP Address | Https:443 | FQDN | *.sfx.ms |
+| Rule Name | IP Address or Group | VNet or Subnet IP Address | Https:443 | FQDN | *.digicert.com |
+| Rule Name | IP Address or Group | VNet or Subnet IP Address | Https:443 | FQDN | *.azure-dns.com, *.azure-dns.net |
> [!IMPORTANT] > We recommend that you don't use TLS inspection with Azure Virtual Desktop. For more information, see the [proxy server guidelines](../virtual-desktop/proxy-server-support.md#dont-use-ssl-termination-on-the-proxy-server).
+## Azure Firewall Policy Sample
+All the mandatory and optional rules mentioned above can be deployed as a single Azure Firewall Policy using the template published at [this link](https://github.com/Azure/RDS-Templates/tree/master/AzureFirewallPolicyForAVD).
+Before deploying into production, we highly recommend reviewing all the network and application rules defined, and ensuring alignment with the Azure Virtual Desktop official documentation and your security requirements.
+ ## Host pool outbound access to the Internet
-Depending on your organization needs, you might want to enable secure outbound internet access for your end users. If the list of allowed destinations is well-defined (for example, for [Microsoft 365 access](/microsoft-365/enterprise/microsoft-365-ip-web-service)), you can use Azure Firewall application and network rules to configure the required access. This routes end-user traffic directly to the internet for best performance. If you need to allow network connectivity for Windows 365 or Intune, see [Network requirments for Windows 365](/windows-365/requirements-network#allow-network-connectivity) and [Network endpoints for Intune](/mem/intune/fundamentals/intune-endpoints).
+Depending on your organization needs, you might want to enable secure outbound internet access for your end users. If the list of allowed destinations is well-defined (for example, for [Microsoft 365 access](/microsoft-365/enterprise/microsoft-365-ip-web-service)), you can use Azure Firewall application and network rules to configure the required access. This routes end-user traffic directly to the internet for best performance. If you need to allow network connectivity for Windows 365 or Intune, see [Network requirements for Windows 365](/windows-365/requirements-network#allow-network-connectivity) and [Network endpoints for Intune](/mem/intune/fundamentals/intune-endpoints).
If you want to filter outbound user internet traffic by using an existing on-premises secure web gateway, you can configure web browsers or other applications running on the Azure Virtual Desktop host pool with an explicit proxy configuration. For example, see [How to use Microsoft Edge command-line options to configure proxy settings](/deployedge/edge-learnmore-cmdline-options-proxy-settings). These proxy settings only influence your end-user internet access, allowing the Azure Virtual Desktop platform outbound traffic directly via Azure Firewall.
If you want to filter outbound user internet traffic by using an existing on-pre
Admins can allow or deny user access to different website categories. Add a rule to your application rule collection that allows or denies the web categories you choose, scoped to your source IP addresses. Review all the [web categories](web-categories.md).
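As a hedged sketch (hypothetical names and source range; assumes Az.Network), a web-category rule placed in a deny collection might look like this:

```powershell
# Deny a web category for the host pool subnet.
$categoryRule = New-AzFirewallPolicyApplicationRule -Name "Deny-Gambling" `
    -SourceAddress "10.0.0.0/24" -Protocol "http:80", "https:443" `
    -WebCategory "Gambling"

$denyCollection = New-AzFirewallPolicyFilterRuleCollection -Name "AVD-WebCategories" `
    -Priority 300 -ActionType Deny -Rule $categoryRule
```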
-## Additional considerations
-
-You might need to configure additional firewall rules, depending on your requirements:
--- NTP server access-
- By default, virtual machines running Windows connect to `time.windows.com` over UDP port 123 for time synchronization. Create a network rule to allow this access, or for a time server that you use in your environment.
## Next steps - Learn more about Azure Virtual Desktop: [What is Azure Virtual Desktop?](../virtual-desktop/overview.md)
frontdoor Front Door Wildcard Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-wildcard-domain.md
For accepting HTTPS traffic on your wildcard domain, you must enable HTTPS on th
## Adding wildcard domains
-You can add a wildcard domain under the section for front-end hosts or domains. Similar to subdomains, Azure Front Door (classic) validates that there's CNAME record mapping for your wildcard domain. This DNS mapping can be a direct CNAME record mapping like `*.contoso.com` mapped to `contoso.azurefd.net`. Or you can use afdverify temporary mapping. For example, `afdverify.contoso.com` mapped to `afdverify.contoso.azurefd.net` validates the CNAME record map for the wildcard.
+You can add a wildcard domain under the section for front-end hosts or domains. Similar to subdomains, Azure Front Door (classic) validates that there's CNAME record mapping for your wildcard domain. This DNS mapping can be a direct CNAME record mapping like `*.contoso.com` mapped to `endpoint.azurefd.net`. Or you can use afdverify temporary mapping. For example, `afdverify.contoso.com` mapped to `afdverify.endpoint.azurefd.net` validates the CNAME record map for the wildcard.
> [!NOTE] > Azure DNS supports wildcard records.
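If the zone is hosted in Azure DNS, a minimal sketch of the wildcard CNAME mapping (hypothetical zone and resource group names; assumes the Az.Dns module):

```powershell
# Map *.contoso.com to the Front Door endpoint with a wildcard CNAME record.
$cname = New-AzDnsRecordConfig -Cname "endpoint.azurefd.net"
New-AzDnsRecordSet -Name "*" -RecordType CNAME -ZoneName "contoso.com" `
    -ResourceGroupName "myRg" -Ttl 3600 -DnsRecords $cname
```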
governance Machine Configuration Create Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-create-definition.md
and the details about machine configuration policy effects
> configuration extension version **1.29.24** or later, > or Arc agent **1.10.0** or later, is required. >
-> Custom machine configuration policy definitions using **AuditIfNotExists** are
-> Generally Available, but definitions using **DeployIfNotExists** with guest
-> configuration are **in preview**.
+> Custom machine configuration policy definitions using either **AuditIfNotExists** or **DeployIfNotExists** are now
+> Generally Available.
Use the following steps to create your own policies that audit compliance or manage the state of Azure or Arc-enabled machines.
configuration package, in a specified path:
```powershell $PolicyConfig = @{ PolicyId = '_My GUID_'
- ContentUri = <_ContentUri output from the Publish command_>
+ ContentUri = $contenturi
DisplayName = 'My audit policy' Description = 'My audit policy'
- Path = './policies'
+ Path = './policies/auditIfNotExists.json'
Platform = 'Windows' PolicyVersion = '1.0.0' }
configuration package, in a specified path:
```powershell $PolicyConfig2 = @{ PolicyId = '_My GUID_'
- ContentUri = <_ContentUri output from the Publish command_>
+ ContentUri = $contenturi
DisplayName = 'My audit policy' Description = 'My audit policy'
- Path = './policies'
+ Path = './policies/deployIfNotExists.json'
Platform = 'Windows' PolicyVersion = '1.0.0' Mode = 'ApplyAndAutoCorrect'
$PolicyParameterInfo = @(
# ...and then passed into the `New-GuestConfigurationPolicy` cmdlet $PolicyParam = @{ PolicyId = 'My GUID'
- ContentUri = '<ContentUri output from the Publish command>'
+ ContentUri = $contenturi
DisplayName = 'Audit Windows Service.' Description = "Audit if a Windows Service isn't enabled on Windows machine."
- Path = '.\policies'
+ Path = '.\policies\auditIfNotExists.json'
Parameter = $PolicyParameterInfo PolicyVersion = '1.0.0' }
requirements are documented in the [Azure Policy Overview](./overview.md) page.
role is **Resource Policy Contributor**. ```powershell
-New-AzPolicyDefinition -Name 'mypolicydefinition' -Policy '.\policies'
+New-AzPolicyDefinition -Name 'mypolicydefinition' -Policy '.\policies\auditIfNotExists.json'
+```
+
+Or, if this is a DeployIfNotExists (DINE) policy, use:
+
+```powershell
+New-AzPolicyDefinition -Name 'mypolicydefinition' -Policy '.\policies\deployIfNotExists.json'
``` With the policy definition created in Azure, the last step is to assign the definition. See how to assign the
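As a hedged sketch of that assignment step (hypothetical assignment name and scope; assumes the Az.Resources module):

```powershell
# Look up the definition created above and assign it at subscription scope.
$definition = Get-AzPolicyDefinition -Name 'mypolicydefinition'
New-AzPolicyAssignment -Name 'mypolicyassignment' -PolicyDefinition $definition `
    -Scope "/subscriptions/<subscriptionId>"

# DeployIfNotExists (DINE) assignments also need a managed identity; on recent
# Az.Resources versions that is -IdentityType SystemAssigned plus -Location.
```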
governance Machine Configuration Create Publish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-create-publish.md
$Context = New-AzStorageContext -ConnectionString "DefaultEndpointsProtocol=http
Next, add the configuration package to the storage account. This example uploads the zip file ./MyConfig.zip to the container "guestconfiguration". ```powershell
-Set-AzStorageBlobContent -Container "guestconfiguration" -File ./MyConfig.zip -Blob "guestconfiguration" -Context $Context
+Set-AzStorageBlobContent -Container "guestconfiguration" -File ./MyConfig.zip -Context $Context
```
-Optionally, you can add a SAS token in the URL, this ensures that the content package will be accessed securely. The below example generates a blob SAS token with full blob permission and returns the full blob URI with the shared access signature token.
+Optionally, you can add a SAS token to the URL to ensure that the content package is accessed securely. The example below generates a blob SAS token with read, write, and delete permissions and returns the full blob URI with the shared access signature token. In this example, the token is valid for three years.
```powershell
-$contenturi = New-AzStorageBlobSASToken -Context $Context -FullUri -Container guestconfiguration -Blob "guestconfiguration" -Permission rwd
+$StartTime = Get-Date
+$EndTime = $StartTime.AddYears(3)
+$contenturi = New-AzStorageBlobSASToken -StartTime $StartTime -ExpiryTime $EndTime -Container "guestconfiguration" -Blob "MyConfig.zip" -Permission rwd -Context $Context -FullUri
``` ## Next steps
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/overview.md
-# Understand the machine configuration feature of Azure Policy
+# Understand the machine configuration feature of Azure Automanage
[!INCLUDE [Machine config rename banner](../includes/banner.md)]
hdinsight Enable Private Link On Kafka Rest Proxy Hdi Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/enable-private-link-on-kafka-rest-proxy-hdi-cluster.md
Title: Enable Private Link on an HDInsight Kafka Rest Proxy cluster
-description: Learn how to Enable Private Link on an HDInsight Kafka Rest Proxy cluster.
+ Title: Enable Private Link on an Azure HDInsight Kafka Rest Proxy cluster
+description: Learn how to Enable Private Link on an Azure HDInsight Kafka Rest Proxy cluster.
Follow these extra steps to enable private link for Kafka Rest Proxy HDI cluster
## Prerequisites
-As a prerequisite, complete the steps mentioned in [Enable Private Link on an HDInsight cluster document](./hdinsight-private-link.md), then perform the below steps.
+As a prerequisite, complete the steps in the [Enable Private Link on an Azure HDInsight cluster](./hdinsight-private-link.md) document, and then perform the steps below.
## Create private endpoints
hdinsight Apache Hadoop Use Hive Curl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-hive-curl.md
description: Learn how to remotely submit Apache Pig jobs to Azure HDInsight usi
Previously updated : 01/06/2020 Last updated : 08/30/2022 # Run Apache Hive queries with Apache Hadoop in HDInsight using REST
For information on other ways you can work with Hadoop on HDInsight:
* [Use Apache Hive with Apache Hadoop on HDInsight](hdinsight-use-hive.md) * [Use MapReduce with Apache Hadoop on HDInsight](hdinsight-use-mapreduce.md)
-For more information on the REST API used in this document, see the [WebHCat reference](https://cwiki.apache.org/confluence/display/Hive/WebHCat+Reference) document.
+For more information on the REST API used in this document, see the [WebHCat reference](https://cwiki.apache.org/confluence/display/Hive/WebHCat+Reference) document.
hdinsight Apache Hadoop Use Hive Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-hive-powershell.md
description: Use PowerShell to run Apache Hive queries in Apache Hadoop in Azure
Previously updated : 12/24/2019 Last updated : 08/30/2022 # Run Apache Hive queries using PowerShell
hdinsight Apache Hadoop Use Mapreduce Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-mapreduce-ssh.md
description: Learn how to use SSH to run MapReduce jobs using Apache Hadoop on H
Previously updated : 01/10/2020 Last updated : 08/30/2022 # Use MapReduce with Apache Hadoop on HDInsight with SSH
hdinsight Hbase Troubleshoot Unassigned Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/hbase-troubleshoot-unassigned-regions.md
Title: Issues with region servers in Azure HDInsight
description: Issues with region servers in Azure HDInsight Previously updated : 06/30/2020 Last updated : 08/30/2022 # Issues with region servers in Azure HDInsight
hdinsight Hdinsight Apache Storm With Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-apache-storm-with-kafka.md
Last updated 08/05/2022+ #Customer intent: As a developer, I want to learn how to build a streaming pipeline that uses Storm and Kafka to process streaming data.
hdinsight Hdinsight Config For Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-config-for-vscode.md
Title: Azure HDInsight configuration settings reference
description: Introduce the configuration of Azure HDInsight extension. Previously updated : 04/07/2021 Last updated : 08/30/2022
For general information about working with settings in VS Code, refer to [User a
## Next steps - For information about Azure HDInsight for VSCode, see [Spark & Hive for Visual Studio Code Tools](/sql/big-data-cluster/spark-hive-tools-vscode).-- For a video that demonstrates using Spark & Hive for Visual Studio Code, see [Spark & Hive for Visual Studio Code](https://go.microsoft.com/fwlink/?linkid=858706).
+- For a video that demonstrates using Spark & Hive for Visual Studio Code, see [Spark & Hive for Visual Studio Code](https://go.microsoft.com/fwlink/?linkid=858706).
hdinsight Hdinsight Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes-archive.md
HDI Hive 3.1 version is upgraded to OSS Hive 3.1.2. This version has all fixes a
| Support parallel load for HastTables - Interfaces|[HIVE-25583](https://issues.apache.org/jira/browse/HIVE-25583)| | Include MultiDelimitSerDe in HiveServer2 By Default|[HIVE-20619](https://issues.apache.org/jira/browse/HIVE-20619)| | Remove glassfish.jersey and mssql-jdbc classes from jdbc-standalone jar|[HIVE-22134](https://issues.apache.org/jira/browse/HIVE-22134)|
-| Null pointer exception on running compaction against an MM table.|[HIVE-21280 ](https://issues.apache.org/jira/browse/HIVE-21280)|
+| Null pointer exception on running compaction against an MM table.|[HIVE-21280](https://issues.apache.org/jira/browse/HIVE-21280)|
| Hive query with large size via knox fails with Broken pipe Write failed|[HIVE-22231](https://issues.apache.org/jira/browse/HIVE-22231)| | Adding ability for user to set bind user|[HIVE-21009](https://issues.apache.org/jira/browse/HIVE-21009)| | Implement UDF to interpret date/timestamp using its internal representation and Gregorian-Julian hybrid calendar|[HIVE-22241](https://issues.apache.org/jira/browse/HIVE-22241)| | Beeline option to show/not show execution report|[HIVE-22204](https://issues.apache.org/jira/browse/HIVE-22204)|
-| Tez: SplitGenerator tries to look for plan files, which won't exist for Tez|[HIVE-22169 ](https://issues.apache.org/jira/browse/HIVE-22169)|
+| Tez: SplitGenerator tries to look for plan files, which won't exist for Tez|[HIVE-22169](https://issues.apache.org/jira/browse/HIVE-22169)|
| Remove expensive logging from the LLAP cache hotpath|[HIVE-22168](https://issues.apache.org/jira/browse/HIVE-22168)| | UDF: FunctionRegistry synchronizes on org.apache.hadoop.hive.ql.udf.UDFType class|[HIVE-22161](https://issues.apache.org/jira/browse/HIVE-22161)| | Prevent the creation of query routing appender if property is set to false|[HIVE-22115](https://issues.apache.org/jira/browse/HIVE-22115)| | Remove cross-query synchronization for the partition-eval|[HIVE-22106](https://issues.apache.org/jira/browse/HIVE-22106)| | Skip setting up hive scratch dir during planning|[HIVE-21182](https://issues.apache.org/jira/browse/HIVE-21182)| | Skip creating scratch dirs for tez if RPC is on|[HIVE-21171](https://issues.apache.org/jira/browse/HIVE-21171)|
-| switch Hive UDFs to use Re2J regex engine|[HIVE-19661 ](https://issues.apache.org/jira/browse/HIVE-19661)|
+| switch Hive UDFs to use Re2J regex engine|[HIVE-19661](https://issues.apache.org/jira/browse/HIVE-19661)|
| Migrated clustered tables using bucketing_version 1 on hive 3 uses bucketing_version 2 for inserts|[HIVE-22429](https://issues.apache.org/jira/browse/HIVE-22429)|
-| Bucketing: Bucketing version 1 is incorrectly partitioning data|[HIVE-21167 ](https://issues.apache.org/jira/browse/HIVE-21167)|
+| Bucketing: Bucketing version 1 is incorrectly partitioning data|[HIVE-21167](https://issues.apache.org/jira/browse/HIVE-21167)|
| Adding ASF License header to the newly added file|[HIVE-22498](https://issues.apache.org/jira/browse/HIVE-22498)| | Schema tool enhancements to support mergeCatalog|[HIVE-22498](https://issues.apache.org/jira/browse/HIVE-22498)| | Hive with TEZ UNION ALL and UDTF results in data loss|[HIVE-21915](https://issues.apache.org/jira/browse/HIVE-21915)|
HDI Hive 3.1 version is upgraded to OSS Hive 3.1.2. This version has all fixes a
| Fix wrong results/ArrayOutOfBound exception in left outer map joins on specific boundary conditions|[HIVE-22120](https://issues.apache.org/jira/browse/HIVE-22120)| | Remove distribution management tag from pom.xml|[HIVE-19667](https://issues.apache.org/jira/browse/HIVE-19667)| | Parsing time can be high if there's deeply nested subqueries|[HIVE-21980](https://issues.apache.org/jira/browse/HIVE-21980)|
-| For ALTER TABLE t SET TBLPROPERTIES ('EXTERNAL'='TRUE'); `TBL_TYPE` attribute changes not reflecting for non-CAPS|[HIVE-20057 ](https://issues.apache.org/jira/browse/HIVE-20057)|
+| For ALTER TABLE t SET TBLPROPERTIES ('EXTERNAL'='TRUE'); `TBL_TYPE` attribute changes not reflecting for non-CAPS|[HIVE-20057](https://issues.apache.org/jira/browse/HIVE-20057)|
| JDBC: HiveConnection shades log4j interfaces|[HIVE-18874](https://issues.apache.org/jira/browse/HIVE-18874)| | Update repo URLs in poms - branh 3.1 version|[HIVE-21786](https://issues.apache.org/jira/browse/HIVE-21786)| | DBInstall tests broken on master and branch-3.1|[HIVE-21758](https://issues.apache.org/jira/browse/HIVE-21758)|
hdinsight Hdinsight Sdk Dotnet Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-sdk-dotnet-samples.md
Title: 'Azure HDInsight: .NET samples'
description: Find C# .NET examples on GitHub for common tasks using the HDInsight SDK for .NET. Previously updated : 12/06/2019 Last updated : 08/30/2022 # Azure HDInsight: .NET samples
hdinsight Network Virtual Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/network-virtual-appliance.md
Title: Configure network virtual appliance in Azure HDInsight
description: Learn how to configure a number of additional features for your network virtual appliance in Azure HDInsight. Previously updated : 06/30/2020 Last updated : 08/30/2022 # Configure network virtual appliance in Azure HDInsight
hdinsight Apache Spark Shell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-shell.md
description: An interactive Spark Shell provides a read-execute-print process fo
Previously updated : 02/10/2020 Last updated : 08/30/2022 # Run Apache Spark from the Spark Shell
hdinsight Use Scp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/use-scp.md
description: This document provides information on connecting to HDInsight using
Previously updated : 04/22/2020 Last updated : 08/30/2022 # Use SCP with Apache Hadoop in Azure HDInsight
healthcare-apis Convert Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/convert-data.md
# Converting your data to FHIR for Azure API for FHIR
-The `$convert-data` custom endpoint in the FHIR service is meant for data conversion from different data types to FHIR. It uses the Liquid template engine and the templates from the [FHIR Converter](https://github.com/microsoft/FHIR-Converter) project as the default templates. You can customize these conversion templates as needed. Currently it supports three types of data conversion: **C-CDA to FHIR**, **HL7v2 to FHIR**, **JSON to FHIR**.
+The `$convert-data` custom endpoint in the FHIR service is meant for data conversion from different data types to FHIR. It uses the Liquid template engine and the templates from the [FHIR Converter](https://github.com/microsoft/FHIR-Converter) project as the default templates. You can customize these conversion templates as needed. Currently it supports four types of data conversion: **C-CDA to FHIR**, **HL7v2 to FHIR**, **JSON to FHIR**, and **FHIR STU3 to FHIR R4** (new!).
> [!NOTE] > `$convert-data` endpoint can be used as a component within an ETL pipeline for the conversion of raw healthcare data from legacy formats into FHIR format. However, it is not an ETL pipeline in itself. We recommend you to use an ETL engine such as Logic Apps or Azure Data Factory for a complete workflow in preparing your FHIR data to be persisted into the FHIR server. The workflow might include: data reading and ingestion, data validation, making $convert-data API calls, data pre/post-processing, data enrichment, and data de-duplication.
$convert-data takes a [Parameter](http://hl7.org/fhir/parameters.html) resource
| Parameter Name | Description | Accepted values | | -- | -- | -- |
-| inputData | Data to be converted. | For `Hl7v2`: string <br> For `Ccda`: XML <br> For `Json`: JSON |
-| inputDataType | Data type of input. | ```HL7v2```, ``Ccda``, ``Json`` |
-| templateCollectionReference | Reference to an [OCI image ](https://github.com/opencontainers/image-spec) template collection on [Azure Container Registry (ACR)](https://azure.microsoft.com/services/container-registry/). It's the image containing Liquid templates to use for conversion. It can be a reference either to the default templates or a custom template image that is registered within the FHIR service. See below to learn about customizing the templates, hosting those on ACR, and registering to the FHIR service. | For ***default/sample*** templates: <br> **HL7v2** templates: <br>```microsofthealth/fhirconverter:default``` <br>``microsofthealth/hl7v2templates:default``<br> **C-CDA** templates: <br> ``microsofthealth/ccdatemplates:default`` <br> **JSON** templates: <br> ``microsofthealth/jsontemplates:default`` <br><br> For ***custom*** templates: <br> \<RegistryServer\>/\<imageName\>@\<imageDigest\>, \<RegistryServer\>/\<imageName\>:\<imageTag\> |
-| rootTemplate | The root template to use while transforming the data. | For **HL7v2**:<br> "ADT_A01", "ADT_A02", "ADT_A03", "ADT_A04", "ADT_A05", "ADT_A08", "ADT_A11", "ADT_A13", "ADT_A14", "ADT_A15", "ADT_A16", "ADT_A25", "ADT_A26", "ADT_A27", "ADT_A28", "ADT_A29", "ADT_A31", "ADT_A47", "ADT_A60", "OML_O21", "ORU_R01", "ORM_O01", "VXU_V04", "SIU_S12", "SIU_S13", "SIU_S14", "SIU_S15", "SIU_S16", "SIU_S17", "SIU_S26", "MDM_T01", "MDM_T02"<br><br> For **C-CDA**:<br> "CCD", "ConsultationNote", "DischargeSummary", "HistoryandPhysical", "OperativeNote", "ProcedureNote", "ProgressNote", "ReferralNote", "TransferSummary" <br><br> For **JSON**: <br> "ExamplePatient", "Stu3ChargeItem" <br> |
+| inputData | Data to be converted. | For `Hl7v2`: string <br> For `Ccda`: XML <br> For `Json`: JSON <br> For `FHIR STU3`: JSON|
+| inputDataType | Data type of input. | ```HL7v2```, ``Ccda``, ``Json``, ``Fhir``|
+| templateCollectionReference | Reference to an [OCI image ](https://github.com/opencontainers/image-spec) template collection on [Azure Container Registry (ACR)](https://azure.microsoft.com/services/container-registry/). It's the image containing Liquid templates to use for conversion. It can be a reference either to the default templates or a custom template image that is registered within the FHIR service. See below to learn about customizing the templates, hosting those on ACR, and registering to the FHIR service. | For ***default/sample*** templates: <br> **HL7v2** templates: <br>```microsofthealth/fhirconverter:default``` <br>``microsofthealth/hl7v2templates:default``<br> **C-CDA** templates: <br> ``microsofthealth/ccdatemplates:default`` <br> **JSON** templates: <br> ``microsofthealth/jsontemplates:default`` <br> **FHIR-STU3** templates: <br> ``microsofthealth/stu3tor4templates:default`` <br><br> For ***custom*** templates: <br> \<RegistryServer\>/\<imageName\>@\<imageDigest\>, \<RegistryServer\>/\<imageName\>:\<imageTag\> |
+| rootTemplate | The root template to use while transforming the data. | For **HL7v2**:<br> "ADT_A01", "ADT_A02", "ADT_A03", "ADT_A04", "ADT_A05", "ADT_A08", "ADT_A11", "ADT_A13", "ADT_A14", "ADT_A15", "ADT_A16", "ADT_A25", "ADT_A26", "ADT_A27", "ADT_A28", "ADT_A29", "ADT_A31", "ADT_A47", "ADT_A60", "OML_O21", "ORU_R01", "ORM_O01", "VXU_V04", "SIU_S12", "SIU_S13", "SIU_S14", "SIU_S15", "SIU_S16", "SIU_S17", "SIU_S26", "MDM_T01", "MDM_T02"<br><br> For **C-CDA**:<br> "CCD", "ConsultationNote", "DischargeSummary", "HistoryandPhysical", "OperativeNote", "ProcedureNote", "ProgressNote", "ReferralNote", "TransferSummary" <br><br> For **JSON**: <br> "ExamplePatient", "Stu3ChargeItem" <br><br> For **FHIR STU3 to R4**: <br>Name of the root template, which is the same as the STU3 resource name, e.g., "Patient", "Observation", "Organization". |
> [!NOTE] > JSON templates are sample templates for use, not "default" templates that adhere to any pre-defined JSON message types. JSON doesn't have any standardized message types, unlike HL7v2 messages or C-CDA documents. Therefore, instead of default templates we provide you with some sample templates that you can use as a starting guide for your own customized templates.
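As a hedged sketch of calling the endpoint (hypothetical service URL and a sample HL7v2 MSH segment; assumes the Az.Accounts module for token acquisition), a `$convert-data` request from PowerShell might look like this:

```powershell
# Hypothetical FHIR service URL; replace with your own.
$fhirUrl = "https://myfhirservice.azurehealthcareapis.com"
$token = (Get-AzAccessToken -ResourceUrl $fhirUrl).Token

# Parameters resource asking the service to convert an HL7v2 message
# using the default HL7v2 template collection.
$body = @'
{
  "resourceType": "Parameters",
  "parameter": [
    { "name": "inputData", "valueString": "MSH|^~\\&|SIMHOSP|SFAC|RAPP|RFAC|20200508131015||ADT^A01|5478|T|2.3|||AL||44|ASCII" },
    { "name": "inputDataType", "valueString": "Hl7v2" },
    { "name": "templateCollectionReference", "valueString": "microsofthealth/hl7v2templates:default" },
    { "name": "rootTemplate", "valueString": "ADT_A01" }
  ]
}
'@

Invoke-RestMethod -Method Post -Uri "$fhirUrl/`$convert-data" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType "application/fhir+json" -Body $body
```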
healthcare-apis Configure Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-import-data.md
Title: Configure import settings in the FHIR service - Azure Health Data Services description: This article describes how to configure import settings in the FHIR service.-+ Last updated 06/06/2022-+ # Configure bulk-import settings (Preview)
healthcare-apis De Identified Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/de-identified-export.md
Previously updated : 08/15/2022- Last updated : 08/30/2022+ # Exporting de-identified data
healthcare-apis Get Started With Iot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/get-started-with-iot.md
Title: Get started with the MedTech service in Azure Health Data Services
-description: This document describes how to get started with the MedTech service in Azure Health Data Services.
+description: This document describes how to get started with the MedTech service in Azure Health Data Services.
Previously updated : 08/02/2022 Last updated : 08/30/2022
The following diagram outlines the basic architectural path that enables the Med
### Data processing -- Step 5 represents the data flow from a device to an event hub and is processed through the five parts of the MedTech service.
+- Step 5 represents the data flow from a device to an event hub and the way it's processed through the five parts of the MedTech service.
- Step 6 demonstrates the path to verify processed data sent from MedTech service to the FHIR service.
healthcare-apis Iot Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-connector-overview.md
## Overview
-MedTech service in Azure Health Data Services is a Platform as a service (PaaS) that enables you to gather data from diverse medical devices and change it into a Fast Healthcare Interoperability Resources (FHIR&#174;) service format. MedTech service's device data translation capabilities make it possible to convert a wide variety of data into a unified FHIR format that provides secure health data management in a cloud environment.
+MedTech service in Azure Health Data Services is a Platform as a service (PaaS) that enables you to gather data from diverse medical devices and convert it into a Fast Healthcare Interoperability Resources (FHIR&#174;) service format. MedTech service's device data translation capabilities make it possible to transform a wide variety of data into a unified FHIR format that provides secure health data management in a cloud environment.
MedTech service is important because healthcare data can be difficult to access or lost when it comes from diverse or incompatible devices, systems, or formats. If medical information isn't easy to access, it may have a negative impact on gaining clinical insights and a patient's health and wellness. The ability to translate many types of medical device data into a unified FHIR format enables MedTech service to successfully link devices, health data, labs, and remote in-person care to support the clinician, care team, patient, and family. As a result, this capability can facilitate the discovery of important clinical insights and trend capture. It can also help make connections to new device applications and enable advanced research projects.
iot-edge Development Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/development-environment.md
Only the IoT Edge runtime is supported for production deployments, but the follo
| IoT EdgeHub dev tool | iotedgehubdev | Windows, Linux, macOS | Simulating a device to debug modules. | | IoT Edge dev container | iotedgedev | Windows, Linux, macOS | Developing without installing dependencies. | | IoT Edge runtime in a container | iotedgec | Windows, Linux, macOS, ARM | Testing on a device that may not support the runtime. |
-| IoT Edge device container | toolboc/azure-iot-edge-device-container | Windows, Linux, macOS, ARM | Testing a scenario with many IoT Edge devices at scale. |
### IoT EdgeHub dev tool
iot-edge How To Manage Device Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-manage-device-certificates.md
All IoT Edge devices use certificates to create secure connections between the r
## Install production certificates
-When you first install IoT Edge and provision your device, the device is set up with temporary certificates so that you can test the service.
-These temporary certificates expire in 90 days, or can be reset by restarting your machine.
+When you first install IoT Edge and provision your device, the device is set up with temporary certificates (known as the quickstart CA) so that you can test the service.
+These temporary certificates expire in 90 days.
Once you move into a production scenario, or you want to create a gateway device, you need to provide your own certificates. This article demonstrates the steps to install certificates on your IoT Edge devices.
If you are using IoT Edge for Linux on Windows, you need to use the SSH key loca
sudo iotedge config apply ```
+## Automatic certificate renewal
+
+IoT Edge has a built-in ability to renew certificates before they expire.
+
+Certificate renewal requires an issuance method that IoT Edge can manage. Generally, this means an EST server is required, but IoT Edge can also automatically renew the quickstart CA without configuration. Certificate renewal is configured per certificate type. To configure it, go to the relevant certificate configuration section in `config.toml` and add:
+
+```toml
+# To use auto renew with other types of certs, swap `edge_ca` with other certificate types
+# And put into the relevant section
+[edge_ca]
+method = "est"
+#...
+[edge_ca.auto_renew]
+rotate_key = true
+threshold = "80%"
+retry = "4%"
+```
+
+Here:
+- `rotate_key` controls if the private key should be rotated.
+- `threshold` sets when IoT Edge should start renewing the certificate. It can be specified as:
+ - *Percentage* - integer between `0` and `100` followed by `%`. Renewal starts relative to the certificate lifetime. For example, when set to `80%`, a certificate that is valid for 100 days begins renewal at 20 days before its expiry.
+  - *Absolute time* - integer followed by `m` (minutes) or `d` (days). Renewal starts relative to the certificate expiration time. For example, when set to `4d` for 4 days or `10m` for 10 minutes, the certificate begins renewing at that time before expiry. To avoid unintentional misconfiguration where the `threshold` is bigger than the certificate lifetime, we recommend using *percentage* instead whenever possible.
+- `retry` controls how often renewal should be retried on failure. Like `threshold`, it can similarly be specified as a *percentage* or *absolute time* using the same format.
+ :::moniker-end <!-- end iotedge-2020-11 -->
-## Customize certificate lifetime
+## Customize quickstart CA lifetime
IoT Edge automatically generates certificates on the device in several cases, including:
-<!-- 1.2 -->
-If you don't provide your own production certificates when you install and provision IoT Edge, the IoT Edge security manager automatically generates an **edge CA certificate**. This self-signed certificate is only meant for development and testing scenarios, not production. This certificate expires after 90 days.
-<!-- end 1.2 -->
- <!-- 1.1. --> :::moniker range="iotedge-2018-06"
-* If you don't provide your own production certificates when you install and provision IoT Edge, the IoT Edge security manager automatically generates a **device CA certificate**. This self-signed certificate is only meant for development and testing scenarios, not production. This certificate expires after 90 days.
+* If you don't provide your own production certificates when you install and provision IoT Edge, the IoT Edge security manager automatically generates a **device CA certificate**. This self-signed certificate is known as the quickstart CA and only meant for development and testing scenarios, not production. This certificate expires after 90 days.
* The IoT Edge security manager also generates a **workload CA certificate** signed by the device CA certificate :::moniker-end <!-- end 1.1 -->
+<!-- 1.2 -->
+If you don't provide your own production certificates when you install and provision IoT Edge, the IoT Edge security manager automatically generates an **edge CA certificate**. This self-signed certificate is known as the quickstart CA and only meant for development and testing scenarios, not production. This certificate expires after 90 days.
+<!-- end 1.2 -->
+ For more information about the function of the different certificates on an IoT Edge device, see [Understand how Azure IoT Edge uses certificates](iot-edge-certs.md).
-For these two automatically generated certificates, you have the option of setting a flag in the config file to configure the number of days for the lifetime of the certificates.
+You have the option of setting a flag in the config file to configure the number of days for the lifetime of the certificates.
>[!NOTE]
>There is a third auto-generated certificate that the IoT Edge security manager creates, the **IoT Edge hub server certificate**. This certificate always has a 30-day lifetime, but it's automatically renewed before expiring. The auto-generated CA lifetime value set in the config file doesn't affect this certificate.
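As a sketch of what that config-file flag can look like in a 1.2+ `config.toml`; the setting name `auto_generated_edge_ca_expiry_days` is an assumption here, so verify it against the configuration template shipped with your IoT Edge version:

```toml
# Assumed setting name; verify against your installed config template
[edge_ca]
# Lifetime, in days, of the auto-generated quickstart Edge CA certificate
auto_generated_edge_ca_expiry_days = 180
```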
-<!-- 1.2 -->
-Upon expiry after the specified number of days, IoT Edge has to be restarted to regenerate the edge CA certificate. The edge CA certificate won't be renewed automatically.
-<!-- end 1.2 -->
- <!-- 1.1. --> :::moniker range="iotedge-2018-06" Upon expiry after the specified number of days, IoT Edge has to be restarted to regenerate the device CA certificate. The device CA certificate won't be renewed automatically.
Upon expiry after the specified number of days, IoT Edge has to be restarted to
:::moniker-end <!-- end iotedge-2020-11 -->
+<!-- 1.2 -->
+
+### Renew quickstart Edge CA
+
+By default, IoT Edge automatically regenerates the Edge CA certificate at 80% of the certificate lifetime. For example, for a certificate with a 90-day lifetime, IoT Edge automatically regenerates the Edge CA certificate 72 days from issuance.
+
+To configure the auto-renewal logic, add the following to the "Edge CA certificate" section in `config.toml`:
+
+```toml
+[edge_ca.auto_renew]
+rotate_key = true
+threshold = "70%"
+retry = "2%"
+```
+<!-- end 1.2 -->
+
## Next steps

Installing certificates on an IoT Edge device is a necessary step before deploying your solution in production. Learn more about how to [Prepare to deploy your IoT Edge solution in production](production-checklist.md).
iot-edge Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/version-history.md
This table provides recent version history for IoT Edge package releases, and hi
| Release notes and assets | Type | Date | Highlights |
| - | - | - | - |
-| [1.4](https://github.com/Azure/azure-iotedge/releases/tag/1.4.0) | Stable | August 2022 | Automatic image clean-up of unused Docker images <br> Ability to pass a [custom JSON payload to DPS on provisioning](../iot-dps/how-to-send-additional-data.md#iot-edge-support) <br> Ability to require all modules in a deployment be downloaded before restart <br> Use of the TCG TPM2 Software Stack which enables TPM hierarchy authorization values, specifying the TPM index at which to persist the DPS authentication key, and accommodating more [TPM configurations](http://github.com/Azure/iotedge/blob/897aed8c5573e8cad4b602e5a1298bdc64cd28b4/edgelet/contrib/config/linux/template.toml#L262-L288)
+| [1.4](https://github.com/Azure/azure-iotedge/releases/tag/1.4.0) | Long-term support (LTS) | August 2022 | Automatic image clean-up of unused Docker images <br> Ability to pass a [custom JSON payload to DPS on provisioning](../iot-dps/how-to-send-additional-data.md#iot-edge-support) <br> Ability to require all modules in a deployment be downloaded before restart <br> Use of the TCG TPM2 Software Stack which enables TPM hierarchy authorization values, specifying the TPM index at which to persist the DPS authentication key, and accommodating more [TPM configurations](http://github.com/Azure/iotedge/blob/897aed8c5573e8cad4b602e5a1298bdc64cd28b4/edgelet/contrib/config/linux/template.toml#L262-L288)
| [1.3](https://github.com/Azure/azure-iotedge/releases/tag/1.3.0) | Stable | June 2022 | Support for Red Hat Enterprise Linux 8 on AMD and Intel 64-bit architectures.<br>Edge Hub now enforces that inbound/outbound communication uses minimum TLS version 1.2 by default<br>Updated runtime modules (edgeAgent, edgeHub) based on .NET 6 |
| [1.2](https://github.com/Azure/azure-iotedge/releases/tag/1.2.0) | Stable | April 2021 | [IoT Edge devices behind gateways](how-to-connect-downstream-iot-edge-device.md?view=iotedge-2020-11&preserve-view=true)<br>[IoT Edge MQTT broker (preview)](how-to-publish-subscribe.md?view=iotedge-2020-11&preserve-view=true)<br>New IoT Edge packages introduced, with new installation and configuration steps. For more information, see [Update from 1.0 or 1.1 to latest release](how-to-update-iot-edge.md#special-case-update-from-10-or-11-to-latest-release).<br>Includes [Microsoft Defender for IoT micro-agent for Edge](../defender-for-iot/device-builders/overview.md).<br>Integration with Device Update. For more information, see [Update IoT Edge](how-to-update-iot-edge.md). |
| [1.1](https://github.com/Azure/azure-iotedge/releases/tag/1.1.0) | Long-term support (LTS) | February 2021 | [Long-term support plan and supported systems updates](support.md) |
iot-hub-device-update Device Update Configure Repo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-configure-repo.md
Such as:
- You need to deliver over-the-air updates to your devices from a private package repository with approved versions of libraries and components
- You need devices to get packages from a specific vendor's repository
-Following this document, learn how to configure a package repository using [OSConfig for IoT](https://docs.microsoft.com/azure/osconfig/overview-osconfig-for-iot) and deploy packages based updates from that repository to your device fleet using [Device Update for IoT Hub](understand-device-update.md). Package-based updates are targeted updates that alter only a specific component or application on the device. They lead to lower consumption of bandwidth and help reduce the time to download and install the update. Package-based updates also typically allow for less downtime of devices when you apply an update and avoid the overhead of creating images.
+Following this document, learn how to configure a package repository using [OSConfig for IoT](/azure/osconfig/overview-osconfig-for-iot) and deploy package-based updates from that repository to your device fleet using [Device Update for IoT Hub](understand-device-update.md). Package-based updates are targeted updates that alter only a specific component or application on the device. They lead to lower consumption of bandwidth and help reduce the time to download and install the update. Package-based updates also typically allow for less downtime of devices when you apply an update and avoid the overhead of creating images.
## Prerequisites

You need an Azure account with an [IoT Hub](../iot-hub/iot-concepts-and-iot-hub.md) and the Microsoft Azure portal or Azure CLI to interact with devices via your IoT Hub. Follow the next steps to get started:

- Create a Device Update account and instance in your IoT Hub. See [how to create it](create-device-update-account.md).
-- Install the [IoT Hub Identity Service](https://azure.github.io/iot-identity-service/installation.html) (or skip if [IoT Edge 1.2](https://docs.microsoft.com/azure/iot-edge/how-to-provision-single-device-linux-symmetric?view=iotedge-2020-11&preserve-view=true&tabs=azure-portal%2Cubuntu#install-iot-edge) or higher is already installed on the device).
+- Install the [IoT Hub Identity Service](https://azure.github.io/iot-identity-service/installation.html) (or skip if [IoT Edge 1.2](/azure/iot-edge/how-to-provision-single-device-linux-symmetric?view=iotedge-2020-11&preserve-view=true&tabs=azure-portal%2Cubuntu#install-iot-edge) or higher is already installed on the device).
- Install the Device Update agent on the device. See [how to](device-update-ubuntu-agent.md#manually-prepare-a-device).
-- Install the OSConfig agent on the device. See [how to](https://docs.microsoft.com/azure/osconfig/howto-install?tabs=package#step-11-connect-a-device-to-packagesmicrosoftcom).
-- Now that both the agent and IoT Hub Identity Service are present on the device, the next step is to configure the device with an identity so it can connect to Azure. See example [here](https://docs.microsoft.com/azure/osconfig/howto-install?tabs=package#job-2--connect-to-azure)
+- Install the OSConfig agent on the device. See [how to](/azure/osconfig/howto-install?tabs=package#step-11-connect-a-device-to-packagesmicrosoftcom).
+- Now that both the agent and IoT Hub Identity Service are present on the device, the next step is to configure the device with an identity so it can connect to Azure. See example [here](/azure/osconfig/howto-install?tabs=package#job-2--connect-to-azure)
## How to configure package repository for package updates

Follow the steps below to update Azure IoT Edge on Ubuntu Server 18.04 x64 by configuring a source repository. The tools and concepts in this tutorial still apply even if you plan to use a different OS platform configuration.
-1. Configure the package repository of your choice with OSConfig's configure package repo module. See [how to](https://docs.microsoft.com/azure/osconfig/howto-pmc?tabs=portal%2Csingle#example-1--specify-desired-package-sources). This repository should be the location where you wish to store packages to be downloaded to the device.
+1. Configure the package repository of your choice with OSConfig's configure package repo module. See [how to](/azure/osconfig/howto-pmc?tabs=portal%2Csingle#example-1--specify-desired-package-sources). This repository should be the location where you wish to store packages to be downloaded to the device.
2. Upload your packages to the above configured repository.
3. Create an [APT manifest](device-update-apt-manifest.md) to provide the Device Update agent with the information it needs to download and install the packages (and their dependencies) from the repository (see the sketch after these steps).
4. Follow steps from [here](device-update-ubuntu-agent.md#prerequisites) to do a package update with Device Update. Device Update is used to deploy package updates to a large number of devices and at scale.
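To make step 3 concrete, here's a minimal sketch of an APT manifest. The update name, version, and package name are hypothetical placeholders; the authoritative schema is in the APT manifest article linked above:

```json
{
    "name": "contoso-iot-edge-update",
    "version": "1.0.0",
    "packages": [
        {
            "name": "aziot-edge"
        }
    ]
}
```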
iot-hub Iot Hub Bulk Identity Mgmt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-bulk-identity-mgmt.md
The **ImportDevicesAsync** method takes two parameters:
```csharp
SharedAccessBlobPermissions.Read
```
-* A *string* that contains a URI of an [Azure Storage](https://azure.microsoft.com/documentation/services/storage/) blob container to use as *output* from the job. The job creates a block blob in this container to store any error information from the completed import **Job**. The SAS token must include these permissions:
+* A *string* that contains a URI of an [Azure Storage](/azure/storage/) blob container to use as *output* from the job. The job creates a block blob in this container to store any error information from the completed import **Job**. The SAS token must include these permissions:
```csharp
SharedAccessBlobPermissions.Write | SharedAccessBlobPermissions.Read
```
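As a hedged sketch of how the two container URIs come together with the Microsoft.Azure.Devices service SDK; the connection string and SAS URIs are placeholders, not real values:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Devices;

class BulkImport
{
    static async Task RunAsync()
    {
        // Placeholder: an IoT hub connection string with registry permissions
        using var registryManager =
            RegistryManager.CreateFromConnectionString("<iot-hub-connection-string>");

        // Input container: holds the devices.txt blob the service reads from
        string inputContainerSasUri = "<input-container-sas-uri>";
        // Output container: receives a block blob with any error information
        string outputContainerSasUri = "<output-container-sas-uri>";

        JobProperties importJob =
            await registryManager.ImportDevicesAsync(inputContainerSasUri, outputContainerSasUri);

        // Poll the job until it reaches a terminal state
        while (importJob.Status != JobStatus.Completed &&
               importJob.Status != JobStatus.Failed &&
               importJob.Status != JobStatus.Cancelled)
        {
            await Task.Delay(TimeSpan.FromSeconds(5));
            importJob = await registryManager.GetJobAsync(importJob.JobId);
        }

        Console.WriteLine($"Import job finished with status: {importJob.Status}");
    }
}
```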
To further explore the capabilities of IoT Hub, see:
To explore using the IoT Hub Device Provisioning Service to enable zero-touch, just-in-time provisioning, see:
-* [Azure IoT Hub Device Provisioning Service](../iot-dps/index.yml)
+* [Azure IoT Hub Device Provisioning Service](../iot-dps/index.yml)
iot-hub Iot Hub Devguide Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-endpoints.md
The following list describes the endpoints:
* **Service endpoints**. Each IoT hub exposes a set of endpoints for your solution back end to communicate with your devices. With one exception, these endpoints are only exposed using the [AMQP](https://www.amqp.org/) and AMQP over WebSockets protocols. The direct method invocation endpoint is exposed over the HTTPS protocol.
- * *Receive device-to-cloud messages*. This endpoint is compatible with [Azure Event Hubs](https://azure.microsoft.com/documentation/services/event-hubs/). A back-end service can use it to read the [device-to-cloud messages](iot-hub-devguide-messages-d2c.md) sent by your devices. You can create custom endpoints on your IoT hub in addition to this built-in endpoint.
+ * *Receive device-to-cloud messages*. This endpoint is compatible with [Azure Event Hubs](/azure/event-hubs/). A back-end service can use it to read the [device-to-cloud messages](iot-hub-devguide-messages-d2c.md) sent by your devices. You can create custom endpoints on your IoT hub in addition to this built-in endpoint. A sketch of reading from this endpoint follows this list.
* *Send cloud-to-device messages and receive delivery acknowledgments*. These endpoints enable your solution back end to send reliable [cloud-to-device messages](iot-hub-devguide-messages-c2d.md), and to receive the corresponding delivery or expiration acknowledgments.
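Because the device-to-cloud endpoint is Event Hubs-compatible, any Event Hubs client can read from it. A minimal sketch using the Azure.Messaging.EventHubs.Consumer client; the connection string is a placeholder for the hub's Event Hubs-compatible connection string (which includes the entity path):

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.EventHubs.Consumer;

class ReadDeviceToCloud
{
    static async Task RunAsync()
    {
        // Placeholder: copy from the IoT hub's "Built-in endpoints" page
        string connectionString = "<event-hubs-compatible-connection-string>";

        await using var consumer = new EventHubConsumerClient(
            EventHubConsumerClient.DefaultConsumerGroupName,
            connectionString);

        // Streams device-to-cloud messages from all partitions as they arrive
        await foreach (PartitionEvent partitionEvent in consumer.ReadEventsAsync())
        {
            Console.WriteLine($"Received: {partitionEvent.Data.EventBody}");
        }
    }
}
```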
iot-hub Iot Hub Devguide Identity Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-identity-registry.md
Device identities can also be exported and imported from an IoT Hub via the Serv
The device data that a given IoT solution stores depends on the specific requirements of that solution. But, as a minimum, a solution must store device identities and authentication keys. Azure IoT Hub includes an identity registry that can store values for each device such as IDs, authentication keys, and status codes. A solution can use other Azure services such as table storage, blob storage, or Cosmos DB to store any additional device data.
-*Device provisioning* is the process of adding the initial device data to the stores in your solution. To enable a new device to connect to your hub, you must add a device ID and keys to the IoT Hub identity registry. As part of the provisioning process, you might need to initialize device-specific data in other solution stores. You can also use the Azure IoT Hub Device Provisioning Service to enable zero-touch, just-in-time provisioning to one or more IoT hubs without requiring human intervention. To learn more, see the [provisioning service documentation](https://azure.microsoft.com/documentation/services/iot-dps).
+*Device provisioning* is the process of adding the initial device data to the stores in your solution. To enable a new device to connect to your hub, you must add a device ID and keys to the IoT Hub identity registry. As part of the provisioning process, you might need to initialize device-specific data in other solution stores. You can also use the Azure IoT Hub Device Provisioning Service to enable zero-touch, just-in-time provisioning to one or more IoT hubs without requiring human intervention. To learn more, see the [provisioning service documentation](/azure/iot-dps).
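As a minimal sketch of provisioning a single device identity with the Microsoft.Azure.Devices service SDK; the connection string and device ID are placeholders:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Devices;

class AddDeviceIdentity
{
    static async Task RunAsync()
    {
        // Placeholder: an IoT hub connection string with registry write permission
        using var registryManager =
            RegistryManager.CreateFromConnectionString("<iot-hub-connection-string>");

        // Adds the device to the identity registry; the service
        // generates symmetric authentication keys by default
        Device device = await registryManager.AddDeviceAsync(new Device("myNewDevice"));

        // The device presents this key when it connects to the hub
        Console.WriteLine(device.Authentication.SymmetricKey.PrimaryKey);
    }
}
```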
## Device heartbeat
To try out some of the concepts described in this article, see the following IoT
To explore using the IoT Hub Device Provisioning Service to enable zero-touch, just-in-time provisioning, see:
-* [Azure IoT Hub Device Provisioning Service](https://azure.microsoft.com/documentation/services/iot-dps)
+* [Azure IoT Hub Device Provisioning Service](/azure/iot-dps)
iot-hub Iot Hub Devguide Messages Read Builtin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-read-builtin.md
# Read device-to-cloud messages from the built-in endpoint
-By default, messages are routed to the built-in service-facing endpoint (**messages/events**) that is compatible with [Event Hubs](https://azure.microsoft.com/documentation/services/event-hubs/). This endpoint is currently only exposed using the [AMQP](https://www.amqp.org/) protocol on port 5671 and [AMQP over WebSockets](http://docs.oasis-open.org/amqp-bindmap/amqp-wsb/v1.0/cs01/amqp-wsb-v1.0-cs01.html) on port 443. An IoT hub exposes the following properties to enable you to control the built-in Event Hub-compatible messaging endpoint **messages/events**.
+By default, messages are routed to the built-in service-facing endpoint (**messages/events**) that is compatible with [Event Hubs](/azure/event-hubs/). This endpoint is currently only exposed using the [AMQP](https://www.amqp.org/) protocol on port 5671 and [AMQP over WebSockets](http://docs.oasis-open.org/amqp-bindmap/amqp-wsb/v1.0/cs01/amqp-wsb-v1.0-cs01.html) on port 443. An IoT hub exposes the following properties to enable you to control the built-in Event Hub-compatible messaging endpoint **messages/events**.
| Property | Description |
| - | -- |
iot-hub Iot Hub Mqtt 5 Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-mqtt-5-reference.md
description: Learn about IoT Hub's MQTT 5 API reference
-
+
Last updated 11/19/2020
iot-hub Iot Hub Mqtt 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-mqtt-5.md
description: Learn about IoT Hub's MQTT 5 support
-
+
Last updated 11/19/2020
iot-hub Iot Hub Preview Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-preview-mode.md
description: Learn how to turn on preview mode for IoT Hub, why you would want to, and some warnings
-
+
Last updated 11/24/2020
iot-hub Iot Hub Tls Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-tls-support.md
description: Learn about using secure TLS connections for devices and services communicating with IoT Hub
-
+
Last updated 06/29/2021
iot-hub Virtual Network Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/virtual-network-support.md
description: How to use virtual networks connectivity pattern with IoT Hub
-
+
Last updated 10/20/2021
load-balancer Load Balancer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-overview.md
Key scenarios that you can accomplish using Azure Standard Load Balancer include
- Increase availability by distributing resources **[within](./tutorial-load-balancer-standard-public-zonal-portal.md)** and **[across](./quickstart-load-balancer-standard-public-portal.md)** zones.
-- Configure **[outbound connectivity ](./load-balancer-outbound-connections.md)** for Azure virtual machines.
+- Configure **[outbound connectivity](./load-balancer-outbound-connections.md)** for Azure virtual machines.
- Use **[health probes](./load-balancer-custom-probe-overview.md)** to monitor load-balanced resources.
machine-learning Concept Responsible Ai Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-responsible-ai-dashboard.md
Title: Assess AI systems and make data-driven decisions with Azure Machine Learning Responsible AI dashboard
-description: The Responsible AI dashboard is a comprehensive UI and set of SDK/YAML components to help data scientists debug their machine learning models and make data-driven decisions.
+description: Learn how to use the comprehensive UI and SDK/YAML components in the Responsible AI dashboard to debug your machine learning models and make data-driven decisions.
Last updated 08/17/2022
-# Assess AI systems and make data-driven decisions with Azure Machine Learning Responsible AI dashboard (preview)
+# Assess AI systems by using the Responsible AI dashboard (preview)
-Implementing Responsible AI in practice requires rigorous engineering. Rigorous engineering, however, can be tedious, manual, and time-consuming without the right tooling and infrastructure. Machine learning professionals need tools to implement responsible AI in practice effectively and efficiently.
+Implementing Responsible AI in practice requires rigorous engineering. But rigorous engineering can be tedious, manual, and time-consuming without the right tooling and infrastructure.
-The Responsible AI dashboard provides a single pane of glass that brings together several mature Responsible AI tools in the areas of model [performance and fairness assessment](http://fairlearn.org/), data exploration, [machine learning interpretability](https://interpret.ml/), [error analysis](https://erroranalysis.ai/), [counterfactual analysis and perturbations](https://github.com/interpretml/DiCE), and [causal inference](https://github.com/microsoft/EconML) for a holistic assessment and debugging of models and making informed data-driven decisions. Having access to all of these tools in one interface empowers you to:
+The Responsible AI dashboard provides a single interface to help you implement Responsible AI in practice effectively and efficiently. It brings together several mature Responsible AI tools in the areas of:
-1. Evaluate and debug your machine learning models by identifying model errors and fairness issues, diagnosing why those errors are happening, and informing your mitigation steps.
-2. Boost your data-driven decision-making abilities by addressing questions such as *"what is the minimum change the end user could apply to their features to get a different outcome from the model?" and/or "what is the causal effect of reducing or increasing a feature (for example, red meat consumption) on a real-world outcome (for example, diabetes progression)?"*
+- [Model performance and fairness assessment](http://fairlearn.org/)
+- Data exploration
+- [Machine learning interpretability](https://interpret.ml/)
+- [Error analysis](https://erroranalysis.ai/)
+- [Counterfactual analysis and perturbations](https://github.com/interpretml/DiCE)
+- [Causal inference](https://github.com/microsoft/EconML)
-The dashboard could be customized to include the only subset of tools that are relevant to your use case.
+The dashboard offers a holistic assessment and debugging of models so you can make informed data-driven decisions. Having access to all of these tools in one interface empowers you to:
-Responsible AI dashboard is also accompanied by a [PDF scorecard](how-to-responsible-ai-scorecard.md), which enables you to export Responsible AI metadata and insights of your data and models for sharing offline with the product and compliance stakeholders.
+- Evaluate and debug your machine learning models by identifying model errors and fairness issues, diagnosing why those errors are happening, and informing your mitigation steps.
+- Boost your data-driven decision-making abilities by addressing questions such as:
+
+ "What is the minimum change that users can apply to their features to get a different outcome from the model?"
+
+ "What is the causal effect of reducing or increasing a feature (for example, red meat consumption) on a real-world outcome (for example, diabetes progression)?"
+
+You can customize the dashboard to include only the subset of tools that are relevant to your use case.
+
+The Responsible AI dashboard is accompanied by a [PDF scorecard](how-to-responsible-ai-scorecard.md). The scorecard enables you to export Responsible AI metadata and insights into your data and models. You can then share them offline with the product and compliance stakeholders.
## Responsible AI dashboard components
-The Responsible AI dashboard brings together, in a comprehensive view, various new and pre-existing tools, integrating them with the Azure Machine Learning [CLIv2, Python SDKv2](concept-v2.md) and [studio](overview-what-is-machine-learning-studio.md). These tools include:
+The Responsible AI dashboard brings together, in a comprehensive view, various new and pre-existing tools. The dashboard integrates these tools with [Azure Machine Learning CLI v2, Azure Machine Learning Python SDK v2](concept-v2.md), and [Azure Machine Learning studio](overview-what-is-machine-learning-studio.md). The tools include:
-1. [Data explorer](concept-data-analysis.md) to understand and explore your dataset distributions and statistics.
-2. [Model overview and fairness assessment](concept-fairness-ml.md) to evaluate the performance of your model and evaluate your model's group fairness issues (how diverse groups of people are impacted by your model's predictions).
-3. [Error Analysis](concept-error-analysis.md) to view and understand how errors are distributed in your dataset.
-4. [Model interpretability](how-to-machine-learning-interpretability.md) (aggregate/individual feature importance values) to understand your model's predictions and how those overall and individual predictions are made.
-5. [Counterfactual What-If](concept-counterfactual-analysis.md) to observe how feature perturbations would impact your model predictions while providing you with the closest data points with opposing or different model predictions.
-6. [Causal analysis](concept-causal-inference.md) to use historical data to view the causal effects of treatment features on real-world outcomes.
+- [Data explorer](concept-data-analysis.md), to understand and explore your dataset distributions and statistics.
+- [Model overview and fairness assessment](concept-fairness-ml.md), to evaluate the performance of your model and evaluate your model's group fairness issues (how your model's predictions affect diverse groups of people).
+- [Error analysis](concept-error-analysis.md), to view and understand how errors are distributed in your dataset.
+- [Model interpretability](how-to-machine-learning-interpretability.md) (importance values for aggregate and individual features), to understand your model's predictions and how those overall and individual predictions are made.
+- [Counterfactual what-if](concept-counterfactual-analysis.md), to observe how feature perturbations would affect your model predictions while providing the closest data points with opposing or different model predictions.
+- [Causal analysis](concept-causal-inference.md), to use historical data to view the causal effects of treatment features on real-world outcomes.
-Together, these components will enable you to debug machine learning models, while informing your data-driven and model-driven business decisions. The following diagram and two sections explain how these tools could be incorporated into your AI lifecycle to achieve improved models and solid data insights.
+Together, these tools will help you debug machine learning models, while informing your data-driven and model-driven business decisions. The following diagram shows how you can incorporate them into your AI lifecycle to improve your models and get solid data insights.
### Model debugging

Assessing and debugging machine learning models is critical for model reliability, interpretability, fairness, and compliance. It helps determine how and why AI systems behave the way they do. You can then use this knowledge to improve model performance. Conceptually, model debugging consists of three stages:

-- **Identify**, to understand and recognize model errors and/or fairness issues by addressing the following questions:
- - *What kinds of errors does my model have?*
- - *In what areas are errors most prevalent?*
-- **Diagnose**, to explore the reasons behind the identified errors by addressing:
- - *What are the cause