Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Partner Xid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-xid.md | Get the custom policy starter packs from GitHub, then update the XML files in th <Domain>X-ID</Domain> <DisplayName>X-ID</DisplayName> <TechnicalProfiles>- <TechnicalProfile Id="X-ID-Oauth2"> + <TechnicalProfile Id="X-ID-OIDC"> <DisplayName>X-ID</DisplayName> <Description>Login with your X-ID account</Description>- <Protocol Name="OAuth2" /> + <Protocol Name="OpenIdConnect" /> <Metadata> <Item Key="METADATA">https://oidc-uat.x-id.io/.well-known/openid-configuration</Item> <!-- Update the Client ID below to the X-ID Application ID --> Add the new identity provider to the user journey. 3. Set the value of **TargetClaimsExchangeId** to a friendly name. 4. Add a **ClaimsExchange** element. 5. Set the **ID** to the value of the target claims exchange ID. This change links the xID button to `X-IDExchange` action. -6. Update the **TechnicalProfileReferenceId** value to the technical profile ID you created (`X-ID-Oauth2`). +6. Update the **TechnicalProfileReferenceId** value to the technical profile ID you created (`X-ID-OIDC`). 7. Add an Orchestration step to call xID UserInfo endpoint to return claims about the authenticated user `X-ID-Userdata`. The following XML demonstrates the user journey orchestration with xID identity provider. The following XML demonstrates the user journey orchestration with xID identity <OrchestrationStep Order="2" Type="ClaimsExchange"> <ClaimsExchanges>- <ClaimsExchange Id="X-IDExchange" TechnicalProfileReferenceId="X-ID-Oauth2" /> + <ClaimsExchange Id="X-IDExchange" TechnicalProfileReferenceId="X-ID-OIDC" /> </ClaimsExchanges> </OrchestrationStep> |
active-directory | Onboard Enable Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-enable-tenant.md | -# Enable Permissions Management in your organization +# Enable Microsoft Entra Permissions Management in your organization -This article describes how to enable Permissions Management in your organization. Once you've enabled Permissions Management, you can connect it to your Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) platforms. +This article describes how to enable Microsoft Entra Permissions Management in your organization. Once you've enabled Permissions Management, you can connect it to your Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) platforms. > [!NOTE] > To complete this task, you must have *Microsoft Entra Permissions Management Administrator* permissions. You can't enable Permissions Management as a user from another tenant who has signed in via B2B or via Azure Lighthouse. |
active-directory | Onboard Gcp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-gcp.md | There are several moving parts across GCP and Azure, which are required to be co > 1. Return to the Permissions Management window, and in the **Permissions Management Onboarding - Azure AD OIDC App Creation**, select **Next**. ### 2. Set up a GCP OIDC project.-1. In the **Permissions Management Onboarding - GCP OIDC Account Details & IDP Access** page, enter the **OIDC Project ID** and **OIDC Project Number** of the GCP project in which the OIDC provider and pool will be created. You can change the role name to your requirements. +1. In the **Permissions Management Onboarding - GCP OIDC Account Details & IDP Access** page, enter the **OIDC Project Number** and **OIDC Project ID** of the GCP project in which the OIDC provider and pool will be created. You can change the role name to your requirements. > [!NOTE] > You can find the **Project number** and **Project ID** of your GCP project on the GCP **Dashboard** page of your project in the **Project info** panel. There are several moving parts across GCP and Azure, which are required to be co Optionally, specify **G-Suite IDP Secret Name** and **G-Suite IDP User Email** to enable G-Suite integration. - You can either download and run the script at this point or you can do it in the Google Cloud Shell. -1. Select **Next**. +1. You can either download and run the script at this point or you can run it in the Google Cloud Shell. ++1. Select **Next** after successfully running the setup script. Choose from 3 options to manage GCP projects. |
active-directory | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/overview.md | -# What's Permissions Management? +# What's Microsoft Entra Permissions Management? ## Overview -Permissions Management is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility into permissions assigned to all identities. For example, over-privileged workload and user identities, actions, and resources across multicloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). +Microsoft Entra Permissions Management is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility into permissions assigned to all identities. For example, over-privileged workload and user identities, actions, and resources across multicloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). Permissions Management detects, automatically right-sizes, and continuously monitors unused and excessive permissions. Once your organization has explored and implemented the discover, remediation an ## Next steps -- For information on how to onboard Permissions Management for your organization, see [Enable Permissions Management in your organization](onboard-enable-tenant.md).+- Deepen your learning with the [Introduction to Microsoft Entra Permissions Management](https://go.microsoft.com/fwlink/?linkid=2240016) learn module. +- Sign up for a [45-day free trial](https://aka.ms/TryPermissionsManagement) of Permissions Management. - For a list of frequently asked questions (FAQs) about Permissions Management, see [FAQs](faqs.md). |
active-directory | Concept Token Protection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-token-protection.md | This preview supports the following configurations: - The following Windows client devices aren't supported: - Windows Server - Surface Hub+ - Windows-based Microsoft Teams Rooms (MTR) systems ## Deployment |
active-directory | Scenario Web App Call Api Acquire Token | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-acquire-token.md | public ModelAndView getUserFromGraph(HttpServletRequest httpRequest, HttpServlet // Code omitted here ``` +# [Node.js](#tab/nodejs) ++In the Node.js sample, the code that acquires a token is in the *acquireToken* method of the **AuthProvider** class. +++This access token is then used to handle requests to the `/profile` endpoint: ++ # [Python](#tab/python) In the Python sample, the code that calls the API is in `app.py`. Move on to the next article in this scenario, Move on to the next article in this scenario, [Call a web API](scenario-web-app-call-api-call-api.md?tabs=java). +# [Node.js](#tab/nodejs) ++Move on to the next article in this scenario, +[Call a web API](scenario-web-app-call-api-call-api.md?tabs=nodejs). + # [Python](#tab/python) Move on to the next article in this scenario, |
active-directory | Scenario Web App Call Api App Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-app-configuration.md | Code examples in this article and the following one are extracted from the [ASP. Code examples in this article and the following one are extracted from the [Java web application that calls Microsoft Graph](https://github.com/Azure-Samples/ms-identity-java-webapp), a web-app sample that uses MSAL for Java. The sample currently lets MSAL for Java produce the authorization-code URL and handles the navigation to the authorization endpoint for the Microsoft identity platform. It's also possible to use Spring Security to sign the user in. You might want to refer to the sample for full implementation details. +# [Node.js](#tab/nodejs) ++Code examples in this article and the following one are extracted from the [Node.js & Express.js web application that calls Microsoft Graph](https://github.com/Azure-Samples/ms-identity-node), a web app sample that uses MSAL Node. ++The sample currently lets MSAL Node produce the authorization-code URL and handles the navigation to the authorization endpoint for the Microsoft identity platform. This is shown below: ++ # [Python](#tab/python) Code snippets in this article and the following are extracted from the [Python web application calling Microsoft graph](https://github.com/Azure-Samples/ms-identity-python-webapp) sample using the [identity package](https://pypi.org/project/identity/) (a wrapper around MSAL Python). Microsoft.Identity.Web simplifies your code by setting the correct OpenID Connec *Microsoft.Identity.Web.OWIN* simplifies your code by setting the correct OpenID Connect settings, subscribing to the code received event, and redeeming the code. No extra code is required to redeem the authorization code. See [Microsoft.Identity.Web source code](https://github.com/AzureAD/microsoft-identity-web/blob/9fdcf15c66819b31b1049955eed5d3e5391656f5/src/Microsoft.Identity.Web.OWIN/AppBuilderExtension.cs#L95) for details on how this works. +# [Node.js](#tab/nodejs) ++The *handleRedirect* method in **AuthProvider** class processes the authorization code received from Azure AD. This is shown below: ++ # [Java](#tab/java) See [Web app that signs in users: Code configuration](scenario-web-app-sign-user-app-configuration.md?tabs=java#initialization-code) to understand how the Java sample gets the authorization code. After the app receives the code, the [AuthFilter.java#L51-L56](https://github.com/Azure-Samples/ms-identity-java-webapp/blob/d55ee4ac0ce2c43378f2c99fd6e6856d41bdf144/src/main/java/com/microsoft/azure/msalwebsample/AuthFilter.java#L51-L56): IAuthenticationResult getAuthResultBySilentFlow(HttpServletRequest httpRequest, The detail of the `SessionManagementHelper` class is provided in the [MSAL sample for Java](https://github.com/Azure-Samples/ms-identity-java-webapp/blob/d55ee4ac0ce2c43378f2c99fd6e6856d41bdf144/src/main/java/com/microsoft/azure/msalwebsample/SessionManagementHelper.java). +# [Node.js](#tab/nodejs) ++In the Node.js sample, the application session is used to store the token cache. Using MSAL Node cache methods, the token cache in session is read before a token request is made, and then updated once the token request is successfully completed. This is shown below: ++ # [Python](#tab/python) In the Python sample, the identity package takes care of the token cache, using the global `session` object for storage. 
Move on to the next article in this scenario, Move on to the next article in this scenario, [Remove accounts from the cache on global sign out](scenario-web-app-call-api-sign-in.md?tabs=java). +# [Node.js](#tab/nodejs) ++Move on to the next article in this scenario, +[Remove accounts from the cache on global sign out](scenario-web-app-call-api-sign-in.md?tabs=nodejs). + # [Python](#tab/python) Move on to the next article in this scenario, |
active-directory | Scenario Web App Call Api Call Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-call-api.md | private String getUserInfoFromGraph(String accessToken) throws Exception { } ``` +# [Node.js](#tab/nodejs) ++After successfully retrieving a token, the code uses the **axios** package to query the API endpoint and retrieve a JSON result. ++ # [Python](#tab/python) After successfully retrieving a token, the code uses the requests package to query the API endpoint and retrieve a JSON result. |
active-directory | Scenario Web App Call Api Sign In | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-sign-in.md | The ASP.NET sample doesn't remove accounts from the cache on global sign-out. The Java sample doesn't remove accounts from the cache on global sign-out. +# [Node.js](#tab/nodejs) ++The Node sample doesn't remove accounts from the cache on global sign-out. + # [Python](#tab/python) The Python sample doesn't remove accounts from the cache on global sign-out. Move on to the next article in this scenario, Move on to the next article in this scenario, [Acquire a token for the web app](./scenario-web-app-call-api-acquire-token.md?tabs=java). +# [Node.js](#tab/nodejs) ++Move on to the next article in this scenario, +[Acquire a token for the web app](./scenario-web-app-call-api-acquire-token.md?tabs=nodejs). + # [Python](#tab/python) Move on to the next article in this scenario, |
active-directory | Scenario Web App Sign User App Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-app-configuration.md | In the Azure portal, the reply URIs that you register on the **Authentication** # [Node.js](#tab/nodejs) -Here, the configuration parameters reside in *.env* as environment variables: +Here, the configuration parameters reside in *.env.dev* as environment variables: These parameters are used to create a configuration object in *authConfig.js* file, which will eventually be used to initialize MSAL Node: |
active-directory | Scenario Web App Sign User Sign In | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-sign-in.md | public class AuthPageController { When the user selects the **Sign in** link, which triggers the `/auth/signin` route, the sign-in controller takes over to authenticate the user with Microsoft identity platform. # [Python](#tab/python) In Java, sign-out is handled by calling the Microsoft identity platform `logout` When the user selects the **Sign out** button, the app triggers the `/signout` route, which destroys the session and redirects the browser to Microsoft identity platform sign-out endpoint. # [Python](#tab/python) |
active-directory | Tutorial V2 Nodejs Webapp Msal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-nodejs-webapp-msal.md | The web app sample in this tutorial uses the [express-session](https://www.npmjs ## Add app registration details -1. Create an *.env* file in the root of your project folder. Then add the following code: +1. Create an *.env.dev* file in the root of your project folder. Then add the following code: Fill in these details with the values you obtain from Azure app registration portal: Fill in these details with the values you obtain from Azure app registration por ## Add code for user sign-in and token acquisition -1. Create a new file named *auth.js* under the *routes* folder and add the following code there: +1. Create a new folder named *auth*, and add a new file named *AuthProvider.js* under it. This will contain the **AuthProvider** class, which encapsulates the necessary authentication logic using MSAL Node. Add the following code there: +++1. Next, create a new file named *auth.js* under the *routes* folder and add the following code there: :::code language="js" source="~/ms-identity-node/App/routes/auth.js"::: -2. Next, update the *index.js* route by replacing the existing code with the following code snippet: +2. Update the *index.js* route by replacing the existing code with the following code snippet: :::code language="js" source="~/ms-identity-node/App/routes/index.js"::: |
active-directory | Howto Vm Sign In Azure Ad Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md | To connect to the remote computer: > [!IMPORTANT] > Remote connection to VMs that are joined to Azure AD is allowed only from Windows 10 or later PCs that are either Azure AD registered (minimum required build is 20H1) or Azure AD joined or hybrid Azure AD joined to the *same* directory as the VM. Additionally, to RDP by using Azure AD credentials, users must belong to one of the two Azure roles, Virtual Machine Administrator Login or Virtual Machine User Login. >-> If you're using an Azure AD-registered Windows 10 or later PC, you must enter credentials in the `AzureAD\UPN` format (for example, `AzureAD\john@contoso.com`). At this time, you can use Azure Bastion to log in with Azure AD authentication [via the Azure CLI and the native RDP client mstsc](../../bastion/connect-native-client-windows.md). +> If you're using an Azure AD-registered Windows 10 or later PC, you must enter credentials in the `AzureAD\UPN` format (for example, `AzureAD\john@contoso.com`). At this time, you can use Azure Bastion to log in with Azure AD authentication [via the Azure CLI and the native RDP client mstsc](../../bastion/native-client.md). To log in to your Windows Server 2019 virtual machine by using Azure AD: |
active-directory | Monitor Sign In Health For Resilience | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/monitor-sign-in-health-for-resilience.md | -To increase infrastructure resilience, set up monitoring of application sign-in health for your critical applications so that you receive an alert if an impacting incident occurs. To assist you in this effort, you can configure alerts based on the sign-in health workbook. +To increase infrastructure resilience, set up monitoring of application sign-in health for your critical applications. You can receive an alert when an impacting incident occurs. This article walks through setting up the App sign-in health workbook to monitor for disruptions to your users' sign-ins. -This workbook enables administrators to monitor authentication requests for applications in your tenant. It provides these key capabilities: +You can configure alerts based on the App sign-in health workbook. This workbook enables administrators to monitor authentication requests for applications in their tenants. It provides these key capabilities: -* Configure the workbook to monitor all or individual apps with near real-time data. --* Configure alerts to notify you when authentication patterns change so that you can investigate and take action. --* Compare trends over a period, for example week over week, which is the workbook’s default setting. +- Configure the workbook to monitor all or individual apps with near real-time data. +- Configure alerts for authentication pattern changes so that you can investigate and respond. +- Compare trends over a period of time. Week over week is the workbook's default setting. > [!NOTE]-> To see all available workbooks, and the prerequisites for using them, please see [How to use Azure Monitor workbooks for reports](../reports-monitoring/howto-use-azure-monitor-workbooks.md). +> See all available workbooks and the prerequisites for using them in [How to use Azure Monitor workbooks for reports](../reports-monitoring/howto-use-azure-monitor-workbooks.md). During an impacting event, two things may happen: -* The number of sign-ins for an application may drop precipitously because users can't sign in. --* The number of sign-in failures can increase. --This article walks through setting up the sign-in health workbook to monitor for disruptions to your users’ sign-ins. +- The number of sign-ins for an application may abruptly drop when users can't sign in. +- The number of sign-in failures may increase. ## Prerequisites -* An Azure AD tenant. --* A user with global administrator or security administrator role for the Azure AD tenant. --* A Log Analytics workspace in your Azure subscription to send logs to Azure Monitor logs. -- * Learn how to [create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md) --* Azure AD logs integrated with Azure Monitor logs -- * Learn how to [Integrate Azure AD Sign- in Logs with Azure Monitor Stream.](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md) +- An Azure AD tenant. +- A user with global administrator or security administrator role for the Azure AD tenant. +- A Log Analytics workspace in your Azure subscription to send logs to Azure Monitor logs. Learn how to [create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md). +- Azure AD logs integrated with Azure Monitor logs. 
Learn how to [Integrate Azure AD Sign- in Logs with Azure Monitor Stream.](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md) -## Configure the App sign in health workbook -To access workbooks, open the **Azure portal**, select **Azure Active Directory**, and then select **Workbooks**. +To access workbooks in the **Azure portal**, select **Azure Active Directory**, select **Workbooks**. The following screenshot shows the Workbooks Gallery in the Azure portal. -You'll see workbooks under Usage, Conditional Access, and Troubleshoot. The App sign in health workbook appears in the usage section. -Once you use a workbook, it may appear in the Recently modified workbooks section. +Workbooks appear under **Usage**, **Conditional Access**, and **Troubleshoot**. The App sign in health workbook appears in the **Health** section. After you use a workbook, it may appear in the **Recently modified workbooks** section. - +You can use the App sign-in health workbook to visualize what is happening with your sign-ins. As shown in the following screenshot, the workbook presents two graphs. -The App sign in health workbook enables you to visualize what is happening with your sign-ins. +In the preceding screenshot, there are two graphs: -By default the workbook presents two graphs. These graphs compare what is happening to your app(s) now, versus the same period a week ago. The blue lines are current, and the orange lines are the previous week. -- --**The first graph is Hourly usage (number of successful users)**. Comparing your current number of successful users to a typical usage period helps you to spot a drop in usage that may require investigation. A drop in successful usage rate can help detect performance and utilization issues that the failure rate can't. For example if users can't reach your application to attempt to sign in, there would be no failures, only a drop in usage. A sample query for this data can be found in the following section. --**The second graph is hourly failure rate**. A spike in failure rate may indicate an issue with your authentication mechanisms. Failure rate can only be measured if users can attempt to authenticate. If users Can't gain access to make the attempt, failures Won't show. --You can configure an alert that notifies a specific group when the usage or failure rate exceeds a specified threshold. A sample query for this data can be found in the following section. +- **Hourly usage (number of successful users)**. Comparing your current number of successful users to a typical usage period helps you to spot a drop in usage that may require investigation. A drop in successful usage rate can help detect performance and utilization issues that the failure rate can't detect. For example, when users can't reach your application to attempt to sign in, there's a drop in usage but no failures. See the sample query for this data in the next section of this article. +- **Hourly failure rate**. A spike in failure rate may indicate an issue with your authentication mechanisms. Failure rate measures only appear when users can attempt to authenticate. When users can't gain access to make the attempt, there are no failures. ## Configure the query and alerts -You create alert rules in Azure Monitor and can automatically run saved queries or custom log searches at regular intervals. --Use the following instructions to create email alerts based on the queries reflected in the graphs. 
Sample scripts below will send an email notification when --* the successful usage drops by 90% from the same hour two days ago, as in the hourly usage graph in the previous section. --* the failure rate increases by 90% from the same hour two days ago, as in the hourly failure rate graph in the previous section. -- To configure the underlying query and set alerts, complete the following steps. You'll use the Sample Query as the basis for your configuration. An explanation of the query structure appears at the end of this section. +You create alert rules in Azure Monitor and can automatically run saved queries or custom log searches at regular intervals. You can configure an alert that notifies a specific group when the usage or failure rate exceeds a specified threshold. -For more information on how to create, view, and manage log alerts using Azure Monitor see [Manage log alerts](../../azure-monitor/alerts/alerts-log.md). +Use the following instructions to create email alerts based on the queries reflected in the graphs. The sample scripts send an email notification when: -1. In the workbook, select **Edit**, then select the **query icon** just above the right-hand side of the graph. +- The successful usage drops by 90% from the same hour two days ago, as shown in the preceding hourly usage graph example. +- The failure rate increases by 90% from the same hour two days ago, as shown in the preceding hourly failure rate graph example. - [](./media/monitor-sign-in-health-for-resilience/edit-workbook.png) +To configure the underlying query and set alerts, complete the following steps using the sample query as the basis for your configuration. The query structure description appears at the end of this section. Learn how to create, view, and manage log alerts using Azure Monitor in [Manage log alerts](../../azure-monitor/alerts/alerts-log.md). - The query log opens. +1. In the workbook, select **Edit** as shown in the following screenshot. Select the **query icon** in the upper right corner of the graph. - [](./media/monitor-sign-in-health-for-resilience/query-log.png) - + :::image type="content" source="./media/monitor-sign-in-health-for-resilience/edit-workbook.png" alt-text="Screenshot showing edit workbook."::: -2. Copy one of the sample scripts for a new Kusto query. - * [Kusto query for increase in failure rate](#kusto-query-for-increase-in-failure-rate) - * [Kusto query for drop in usage](#kusto-query-for-drop-in-usage) +2. View the query log as shown in the following screenshot. -3. Paste the query in the window and select **Run**. Ensure you see the Completed message shown in the image below, and results below that message. + :::image type="content" source="./media/monitor-sign-in-health-for-resilience/query-log.png" alt-text="Screenshot showing the query log."::: - [](./media/monitor-sign-in-health-for-resilience/run-query.png) +3. Copy one of the following sample scripts for a new Kusto query. -4. Highlight the query, and select + **New alert rule**. - - [](./media/monitor-sign-in-health-for-resilience/new-alert-rule.png) + - [Kusto query for increase in failure rate](#kusto-query-for-increase-in-failure-rate) - [Kusto query for drop in usage](#kusto-query-for-drop-in-usage) +4. Paste the query in the window. Select **Run**. Look for the **Completed** message and the query results as shown in the following screenshot. -5. Configure alert conditions. -In the Condition section, select the link **Whenever the average custom log search is greater than logic defined count**. 
In the configure signal logic pane, scroll to Alert logic + :::image type="content" source="./media/monitor-sign-in-health-for-resilience/run-query.png" alt-text="Screenshot showing the run query results."::: - [](./media/monitor-sign-in-health-for-resilience/configure-alerts.png) +5. Highlight the query. Select **+ New alert rule**. - * **Threshold value**: 0. This value will alert on any results. -- * **Evaluation period (in minutes)**: 2880. This value looks at an hour of time -- * **Frequency (in minutes)**: 60. This value sets the evaluation period to once per hour for the previous hour. -- * Select **Done**. --6. In the **Actions** section, configure these settings: + :::image type="content" source="./media/monitor-sign-in-health-for-resilience/new-alert-rule.png" alt-text="Screenshot showing the new alert rule screen."::: - [](./media/monitor-sign-in-health-for-resilience/create-alert-rule.png) +6. Configure alert conditions. As shown in the following example screenshot, in the **Condition** section, under **Measurement**, select **Table rows** for **Measure**. Select **Count** for **Aggregation type**. Select **2 days** for **Aggregation granularity**. - * Under **Actions**, choose **Select action group**, and add the group you want to be notified of alerts. + :::image type="content" source="./media/monitor-sign-in-health-for-resilience/configure-alerts.png" alt-text="Screenshot showing configure alerts screen."::: + + - **Table rows**. You can use the number of rows returned to work with events such as Windows event logs, Syslog, and application exceptions. + - **Aggregation type**. Data points applied with Count. + - **Aggregation granularity**. This value defines the period that works with **Frequency of evaluation**. - * Under **Customize actions** select **Email alerts**. +7. In **Alert Logic**, configure the parameters as shown in the example screenshot. - * Add a **subject line**. + :::image type="content" source="./media/monitor-sign-in-health-for-resilience/alert-logic.png" alt-text="Screenshot showing alert logic screen."::: + + - **Threshold value**: 0. This value alerts on any results. + - **Frequency of evaluation**: 1 hour. This value sets the evaluation period to once per hour for the previous hour. -7. Under **Alert rule details**, configure these settings: +8. In the **Actions** section, configure settings as shown in the example screenshot. - * Add a descriptive name and a description. + :::image type="content" source="./media/monitor-sign-in-health-for-resilience/create-alert-rule.png" alt-text="Screenshot showing the Create an alert rule screen."::: + + - Select **Select action group** and add the group for which you want alert notifications. + - Under **Customize actions**, select **Email alerts**. + - Add a **subject line**. - * Select the **resource group** to which to add the alert. +9. In the **Details** section, configure settings as shown in the example screenshot. - * Select the default **severity** of the alert. + :::image type="content" source="./media/monitor-sign-in-health-for-resilience/details-section.png" alt-text="Screenshot showing the Details section."::: + + - Add a **Subscription** name and a description. + - Select the **Resource group** to which you want to add the alert. + - Select the default **Severity**. + - Select **Enable upon creation** if you want it to immediately go live. Otherwise, select **Mute actions**. - * Select **Enable alert rule upon creation** if you want it live immediately, else select **Suppress alerts**. +10. 
In the **Review + create** section, configure settings as shown in the example screenshot. -8. Select **Create alert rule**. + :::image type="content" source="./media/monitor-sign-in-health-for-resilience/review-create.png" alt-text="Screenshot showing the Review + create section."::: -9. Select **Save**, enter a name for the query, **Save as a Query with a category of Alert**. Then select **Save** again. +11. Select **Save**. Enter a name for the query. For **Save as**, select **Query**. For **Category**, select **Alert**. Again, select **Save**. - [](./media/monitor-sign-in-health-for-resilience/save-query.png) + :::image type="content" source="./media/monitor-sign-in-health-for-resilience/save-query.png" alt-text="Screenshot showing the save query button."::: ### Refine your queries and alerts -Modify your queries and alerts for maximum effectiveness. +To modify your queries and alerts for maximum effectiveness: -* Be sure to test your alerts. --* Modify alert sensitivity and frequency so that you get important notifications. Admins can become desensitized to alerts if they get too many and miss something important. --* Ensure the email from which alerts come in your administrator’s email clients is added to allowed senders list. Otherwise you may miss notifications due to a spam filter on your email client. --* Alerts query in Azure Monitor can only include results from past 48 hours. [This is a current limitation by design](https://github.com/MicrosoftDocs/azure-docs/issues/22637). +- Always test alerts. +- Modify alert sensitivity and frequency to receive important notifications. Admins can become desensitized to alerts and miss something important if they get too many. +- In administrator's email clients, add the email from which alerts come to the allowed senders list. This approach prevents missed notifications due to a spam filter on their email clients. +- [By design](https://github.com/MicrosoftDocs/azure-docs/issues/22637), alert queries in Azure Monitor can only include results from the past 48 hours. ## Sample scripts ### Kusto query for increase in failure rate - The ratio at the bottom can be adjusted as necessary and represents the percent change in traffic in the last hour as compared to the same time yesterday. 0.5 means that there is a 50% difference in the traffic. +In the following query, we detect increasing failure rates. As necessary, you can adjust the ratio at the bottom. It represents the percent change in traffic in the last hour as compared to yesterday's traffic at same time. A 0.5 result indicates a 50% difference in the traffic. 
```kusto- let today = SigninLogs | where TimeGenerated > ago(1h) // Query failure rate in the last hour | project TimeGenerated, UserPrincipalName, AppDisplayName, status = case(Status.errorCode == "0", "success", "failure") let today = SigninLogs | sort by TimeGenerated desc | serialize rowNumber = row_number(); let yesterday = SigninLogs-| where TimeGenerated between((ago(1h) - totimespan(1d))..(now() - totimespan(1d))) // Query failure rate at the same time yesterday +| where TimeGenerated between((ago(1h) – totimespan(1d))..(now() – totimespan(1d))) // Query failure rate at the same time yesterday | project TimeGenerated, UserPrincipalName, AppDisplayName, status = case(Status.errorCode == "0", "success", "failure") // Optionally filter by a specific application //| where AppDisplayName == **APP NAME** today | where day != time(6.00:00:00) // exclude Sat | where day != time(0.00:00:00) // exclude Sun | where day != time(1.00:00:00) // exclude Mon-| where abs(failureRate - failureRateYesterday) > 0.5 -+| where abs(failureRate – failureRateYesterday) > 0.5 ```- ### Kusto query for drop in usage -In the following query, we are comparing traffic in the last hour to the same time yesterday. -We are excluding Saturday, Sunday, and Monday because it’s expected on those days that there would be large variability in the traffic at the same time the previous day. +In the following query, we compare traffic in the last hour to yesterday's traffic at the same time. We exclude Saturday, Sunday, and Monday because we expect large variability in the previous day's traffic at the same time. -The ratio at the bottom can be adjusted as necessary and represents the percent change in traffic in the last hour as compared to the same time yesterday. 0.5 means that there is a 50% difference in the traffic. +As necessary, you can adjust the ratio at the bottom. It represents the percent change in traffic in the last hour as compared to yesterday's traffic at same time. A 0.5 result indicates a 50% difference in the traffic. Adjust these values to fit your business operation model. -*You should adjust these values to fit your business operation model*. 
--```Kusto - let today = SigninLogs // Query traffic in the last hour +```kusto +Let today = SigninLogs // Query traffic in the last hour | where TimeGenerated > ago(1h) | project TimeGenerated, AppDisplayName, UserPrincipalName // Optionally filter by AppDisplayName to scope query to a single application The ratio at the bottom can be adjusted as necessary and represents the percent | sort by TimeGenerated desc | serialize rn = row_number(); let yesterday = SigninLogs // Query traffic at the same hour yesterday-| where TimeGenerated between((ago(1h) - totimespan(1d))..(now() - totimespan(1d))) // Count distinct users in the same hour yesterday +| where TimeGenerated between((ago(1h) – totimespan(1d))..(now() – totimespan(1d))) // Count distinct users in the same hour yesterday | project TimeGenerated, AppDisplayName, UserPrincipalName // Optionally filter by AppDisplayName to scope query to a single application //| where AppDisplayName contains "Office 365 Exchange Online" yesterday ) on rn // Calculate the difference in number of users in the last hour compared to the same time yesterday-| project TimeGenerated, users, usersYesterday, difference = abs(users - usersYesterday), max = max_of(users, usersYesterday) +| project TimeGenerated, users, usersYesterday, difference = abs(users – usersYesterday), max = max_of(users, usersYesterday) | extend ratio = (difference * 1.0) / max // Ratio is the percent difference in traffic in the last hour as compared to the same time yesterday // Day variable is the number of days since the previous Sunday. Optionally ignore results on Sat, Sun, and Mon because large variability in traffic is expected. | extend day = dayofweek(now()) on rn | where day != time(0.00:00:00) // exclude Sun | where day != time(1.00:00:00) // exclude Mon | where ratio > 0.7 // Threshold percent difference in sign-in traffic as compared to same hour yesterday- ``` ## Create processes to manage alerts -Once you have set up the query and alerts, create business processes to manage the alerts. --* Who will monitor the workbook and when? --* When an alert is generated, who will investigate? --* What are the communication needs? Who will create the communications and who will receive them? +After you set up queries and alerts, create business processes to manage the alerts. -* If an outage occurs, what business processes need to be triggered? +- Who monitors the workbook and when? +- When alerts occur, who investigates them? +- What are the communication needs? Who creates the communications and who receives them? +- When an outage occurs, what business processes apply? ## Next steps |
active-directory | Entitlement Management External Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-external-users.md | To ensure people outside of your organization can request access packages and ge > [!NOTE] > If you create a connected organization for an Azure AD tenant from a different Microsoft cloud, you also need to configure cross-tenant access settings appropriately. For more information on how to configure these settings, see [Configure cross-tenant access settings](../external-identities/cross-cloud-settings.md). -### Review your Conditional Access policies (Preview) +### Review your Conditional Access policies - Make sure to exclude the Entitlement Management app from any Conditional Access policies that impact guest users. Otherwise, a conditional access policy could block them from accessing MyAccess or being able to sign in to your directory. For example, guests likely don't have a registered device, aren't in a known location, and don't want to re-register for multi-factor authentication (MFA), so adding these requirements in a Conditional Access policy will block guests from using entitlement management. For more information, see [What are conditions in Azure Active Directory Conditional Access?](../conditional-access/concept-conditional-access-conditions.md). -- A common policy for Entitlement Management customers is to block all apps from guests except Entitlement Management for guests. This policy allows guests to enter MyAccess and request an access package. This package should contain a group (it is called Guests from MyAccess in the example below), which should be excluded from the block all apps policy. Once the package is approved, the guest will be in the directory. Given that the end user has the access package assignment and is part of the group, the end user will be able to access all other apps. Other common policies include excluding Entitlement Management app from MFA and compliant device. +- A common policy for Entitlement Management customers is to block all apps from guests except Entitlement Management for guests. This policy allows guests to enter My Access and request an access package. This package should contain a group (it is called Guests from My Access in the example below), which should be excluded from the block all apps policy. Once the package is approved, the guest will be in the directory. Given that the end user has the access package assignment and is part of the group, the end user will be able to access all other apps. Other common policies include excluding Entitlement Management app from MFA and compliant device. :::image type="content" source="media/entitlement-management-external-users/exclude-app-guests.png" alt-text="Screenshot of exclude app options."::: |
active-directory | Dagster Cloud Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/dagster-cloud-provisioning-tutorial.md | + + Title: 'Tutorial: Configure Dagster Cloud for automatic user provisioning with Azure Active Directory' +description: Learn how to automatically provision and de-provision user accounts from Azure AD to Dagster Cloud. +++writer: twimmers ++ms.assetid: bb2db717-b16a-45f9-a76d-502bfc077e95 ++++ Last updated : 06/16/2023++++# Tutorial: Configure Dagster Cloud for automatic user provisioning ++This tutorial describes the steps you need to perform in both Dagster Cloud and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Dagster Cloud](https://dagster.io/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). +++## Supported capabilities +> [!div class="checklist"] +> * Create users in Dagster Cloud. +> * Remove users in Dagster Cloud when they do not require access anymore. +> * Keep user attributes synchronized between Azure AD and Dagster Cloud. +> * Provision groups and group memberships in Dagster Cloud. +> * [Single sign-on](dagster-cloud-tutorial.md) to Dagster Cloud (recommended). ++## Prerequisites ++The scenario outlined in this tutorial assumes that you already have the following prerequisites: ++* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) +* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). +* A user account in Dagster Cloud with Admin permissions. +++## Step 1. Plan your provisioning deployment +1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). +1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +1. Determine what data to [map between Azure AD and Dagster Cloud](../app-provisioning/customize-application-attributes.md). ++## Step 2. Configure Dagster Cloud to support provisioning with Azure AD +Contact Dagster Cloud support to configure Dagster Cloud to support provisioning with Azure AD. ++## Step 3. Add Dagster Cloud from the Azure AD application gallery ++Add Dagster Cloud from the Azure AD application gallery to start managing provisioning to Dagster Cloud. If you have previously setup Dagster Cloud for SSO you can use the same application. However it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). ++## Step 4. Define who will be in scope for provisioning ++The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. 
If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. +++## Step 5. Configure automatic user provisioning to Dagster Cloud ++This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in TestApp based on user and/or group assignments in Azure AD. ++### To configure automatic user provisioning for Dagster Cloud in Azure AD: ++1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. ++  ++1. In the applications list, select **Dagster Cloud**. ++  ++1. Select the **Provisioning** tab. ++  ++1. Set the **Provisioning Mode** to **Automatic**. ++  ++1. Under the **Admin Credentials** section, input your Dagster Cloud Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Dagster Cloud. ++  ++1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. ++  ++1. Select **Save**. ++1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Dagster Cloud**. ++1. Review the user attributes that are synchronized from Azure AD to Dagster Cloud in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Dagster Cloud for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Dagster Cloud API supports filtering users based on that attribute. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Dagster Cloud| + ||||| + |userName|String|✓|✓ + |active|Boolean|| + |displayName|String|| + |emails[type eq "work"].value|String|| + |name.givenName|String|| + |name.familyName|String|| + |externalId|String|| ++1. If you'd like to synchronize Azure AD groups to Dagster Cloud then under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Dagster Cloud**. ++1. Review the group attributes that are synchronized from Azure AD to Dagster Cloud in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Dagster Cloud for update operations. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Dagster Cloud| + ||||| + |displayName|String|✓|✓ + |externalId|String|| + |members|Reference|| + +1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++1. 
To enable the Azure AD provisioning service for Dagster Cloud, change the **Provisioning Status** to **On** in the **Settings** section. ++  ++1. Define the users and/or groups that you would like to provision to Dagster Cloud by choosing the desired values in **Scope** in the **Settings** section. ++  ++1. When you're ready to provision, click **Save**. ++  ++This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. ++## Step 6. Monitor your deployment +Once you've configured provisioning, use the following resources to monitor your deployment: ++* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully +* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it's to completion +* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). ++## More resources ++* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) +* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ++## Next steps ++* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) |
active-directory | Vault Platform Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/vault-platform-provisioning-tutorial.md | This section guides you through the steps to configure the Azure AD provisioning |emails[type eq "work"].value|String||✓ |name.givenName|String||✓ |name.familyName|String||✓+ |addresses[type eq "work"].locality|String||✓ |addresses[type eq "work"].country|String||✓ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String||✓ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String||✓ |
active-directory | Wats Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/wats-provisioning-tutorial.md | The scenario outlined in this tutorial assumes that you already have the followi 1. Determine what data to [map between Azure AD and WATS](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure WATS to support provisioning with Azure AD-Contact WATS support to configure WATS to support provisioning with Azure AD. +Please refer to the [WATS Provisioning](https://support.virinco.com/hc/en-us/articles/7978299009948-WATS-Provisioning-SCIM-) article to set up any necessary requirements for provisioning through Azure AD. -## Step 3. Add WATS from the Azure AD application gallery --Add WATS from the Azure AD application gallery to start managing provisioning to WATS. If you have previously setup WATS for SSO you can use the same application. However it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). --## Step 4. Define who will be in scope for provisioning +## Step 3. Define who will be in scope for provisioning The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). The Azure AD provisioning service allows you to scope who will be provisioned ba * If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. -## Step 5. Configure automatic user provisioning to WATS +## Step 4. Configure automatic user provisioning to WATS This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in TestApp based on user assignments in Azure AD. This section guides you through the steps to configure the Azure AD provisioning This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. -## Step 6. Monitor your deployment +## Step 5. Monitor your deployment Once you've configured provisioning, use the following resources to monitor your deployment: * Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully |
active-directory | Admin Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/admin-api.md | example: | Property | Type | Description | | -- | -- | -- |-|`uri`| string (uri) | uri of the logo (optional if image is specified) | +|`uri`| string (uri) | uri of the logo | |`description` | string | the description of the logo |-|`image` | string | the base-64 encoded image (optional if uri is specified) | #### displayConsent type |
active-directory | Issuance Request Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/issuance-request-api.md | The payload contains the following properties: | `registration` | [RequestRegistration](#requestregistration-type)| Provides information about the issuer that can be displayed in the authenticator app. | | `type` | string | The verifiable credential type. Should match the type as defined in the verifiable credential manifest. For example: `VerifiedCredentialExpert`. For more information, see [Create the verified credential expert card in Azure](verifiable-credentials-configure-issuer.md). | | `manifest` | string| The URL of the verifiable credential manifest document. For more information, see [Gather credentials and environment details to set up your sample application](verifiable-credentials-configure-issuer.md).|-| `claims` | string| Optional. Used for the `ID token hint` flow to include a collection of assertions made about the subject in the verifiable credential. For PIN code flow, it's important that you provide the user's first name and last name. For more information, see [Verifiable credential names](verifiable-credentials-configure-issuer.md#verifiable-credential-names). | +| `claims` | string| Optional. Can only be used for the [ID token hint](rules-and-display-definitions-model.md#idtokenhintattestation-type) attestation flow to include a collection of assertions made about the subject in the verifiable credential. | | `pin` | [PIN](#pin-type)| Optional. PIN code can only be used with the [ID token hint](rules-and-display-definitions-model.md#idtokenhintattestation-type) attestation flow. A PIN number to provide extra security during issuance. You generate a PIN code, and present it to the user in your app. The user must provide the PIN code that you generated. | There are currently four claims attestation types that you can send in the payload. Microsoft Entra Verified ID uses four ways to insert claims into a verifiable credential and attest to that information with the issuer's DID. The following are the four types: |
active-directory | Rules And Display Definitions Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/rules-and-display-definitions-model.md | When you want the user to enter information themselves. This type is also called | Property | Type | Description | | -- | -- | -- |-|`uri`| string (url) | url of the logo (optional if image is specified) | +|`uri`| string (url) | url of the logo. | |`description` | string | the description of the logo |-|`image` | string | the base-64 encoded image (optional if url is specified) | ### displayConsent type |
active-directory | Verifiable Credentials Configure Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant.md | After you create your key vault, Verifiable Credentials generates a set of keys 1. In **Add access policies**, under **USER**, select the account you use to follow this tutorial. -1. For **Key permissions**, verify that the following permissions are selected: **Create**, **Delete**, and **Sign**. By default, **Create** and **Delete** are already enabled. **Sign** should be the only key permission you need to update. +1. For **Key permissions**, verify that the following permissions are selected: **Get**, **Create**, **Delete**, and **Sign**. By default, **Create** and **Delete** are already enabled. **Sign** should be the only key permission you need to update. :::image type="content" source="media/verifiable-credentials-configure-tenant/set-key-vault-admin-access-policy.png" alt-text="Screenshot that shows how to configure the admin access policy." border="false"::: |
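The tutorial above sets the key permissions in the portal. As a minimal CLI sketch of the same change, assuming placeholder values for the vault name and the signed-in user's UPN, the equivalent access policy could be applied with `az keyvault set-policy`:

```azurecli-interactive
# Grant the Get, Create, Delete, and Sign key permissions on the vault (names are placeholders)
az keyvault set-policy \
    --name <key-vault-name> \
    --upn <user-principal-name> \
    --key-permissions get create delete sign
```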
active-directory | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/whats-new.md | +## May 2023 ++- Wallet Library was announced at Build 2023 in session [Reduce fraud and improve engagement using Digital Wallets](https://build.microsoft.com/en-US/sessions/4ca41843-1b3f-4ee6-955e-9e2326733be8). The Wallet Library enables customers to add verifiable credentials technology to their own mobile apps. The libraries are available for [Android](https://github.com/microsoft/entra-verifiedid-wallet-library-android/tree/dev) and [iOS](https://github.com/microsoft/entra-verifiedid-wallet-library-ios/tree/dev). + ## March 2023 - Admin API now supports [application access tokens](admin-api.md#authentication) and in addition to user bearer tokens. Microsoft Entra Verified ID is now generally available (GA) as the new member of ## June 2022 -- We're adding support for the [did:web](https://w3c-ccg.github.io/did-method-web/) method. Any new tenant that starts using the Verifiable Credentials Service after June 14, 2022 will have Web as a new, default, trust system when [onboarding](verifiable-credentials-configure-tenant.md#set-up-verified-id). VC Administrators can still choose to use ION when setting a tenant. If you want to use did:web instead of ION or viceversa, you'll need to [reconfigure your tenant](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service).+- We're adding support for the [did:web](https://w3c-ccg.github.io/did-method-web/) method. Any new tenant that starts using the Verifiable Credentials Service after June 14, 2022 will have Web as a new, default, trust system when [onboarding](verifiable-credentials-configure-tenant.md#set-up-verified-id). VC Administrators can still choose to use ION when setting a tenant. If you want to use did:web instead of ION or viceversa, you need to [reconfigure your tenant](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service). - We're rolling out several features to improve the overall experience of creating verifiable credentials in the Entra Verified ID platform: - Introducing Managed Credentials, which are verifiable credentials that no longer use Azure Storage to store the [display & rules JSON definitions](rules-and-display-definitions-model.md). Their display and rule definitions are different from earlier versions. - Create Managed Credentials using the [new quickstart experience](how-to-use-quickstart.md). Applications that use the Microsoft Entra Verified ID service must use the Reque | Europe | `https://beta.eu.did.msidentity.com/v1.0/{tenantID}/verifiablecredentials/request` | | Non-EU | `https://beta.did.msidentity.com/v1.0/{tenantID}/verifiablecredentials/request` | -To confirm which endpoint you should use, we recommend checking your Azure AD tenant's region as described above. If the Azure AD tenant is in the EU, you should use the Europe endpoint. +To confirm which endpoint you should use, we recommend checking your Azure AD tenant's region as described previously. If the Azure AD tenant is in the EU, you should use the Europe endpoint. ### Credential Revocation with Enhanced Privacy Sample contract file: ### Microsoft Authenticator DID Generation Update -We're making protocol updates in Microsoft Authenticator to support Single Long Form DID, thus deprecating the use of pairwise. With this update, your DID in Microsoft Authenticator will be used of every issuer and relaying party exchange. 
Holders of verifiable credentials using Microsoft Authenticator must get their verifiable credentials reissued as any previous credentials aren't going to continue working. +We're making protocol updates in Microsoft Authenticator to support Single Long Form DID, thus deprecating the use of pairwise. With this update, your DID in Microsoft Authenticator is used for every issuer and relaying party exchange. Holders of verifiable credentials using Microsoft Authenticator must get their verifiable credentials reissued as any previous credentials aren't going to continue working. ## December 2021 |
aks | Node Auto Repair | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-auto-repair.md | If AKS identifies an unhealthy node that remains unhealthy for *five* minutes, A AKS engineers investigate alternative remediations if auto-repair is unsuccessful. -If you want the remediator to reimage the node, you can add the `nodeCondition "customerMarkedAsUnhealthy": true`. - ## Node auto-drain [Scheduled events][scheduled-events] can occur on the underlying VMs in any of your node pools. For [spot node pools][spot-node-pools], scheduled events may cause a *preempt* node event for the node. Certain node events, such as *preempt*, cause AKS node auto-drain to attempt a cordon and drain of the affected node. This process enables rescheduling for any affected workloads on that node. You might notice the node receives a taint with `"remediator.aks.microsoft.com/unschedulable"`, because of `"kubernetes.azure.com/scalesetpriority: spot"`. |
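As an illustrative check only (not part of the article), assuming `kubectl` access to the cluster, you can inspect a node's conditions and taints to see whether auto-drain has applied the remediator taint mentioned above:

```bash
# Show a node's conditions and any taints, including remediator.aks.microsoft.com/unschedulable if present
kubectl describe node <node-name>

# Or list just the taint keys for every node
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'
```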
aks | Quickstart Event Grid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-event-grid.md | Title: Subscribe to Azure Kubernetes Service events with Azure Event Grid -description: Use Azure Event Grid to subscribe to Azure Kubernetes Service events + Title: Subscribe to Azure Kubernetes Service (AKS) events with Azure Event Grid +description: Learn how to use Azure Event Grid to subscribe to Azure Kubernetes Service (AKS) events. Previously updated : 07/12/2021 Last updated : 06/16/2023 # Quickstart: Subscribe to Azure Kubernetes Service (AKS) events with Azure Event Grid Azure Event Grid is a fully managed event routing service that provides uniform event consumption using a publish-subscribe model. -In this quickstart, you'll create an AKS cluster and subscribe to AKS events. +In this quickstart, you create an Azure Kubernetes Service (AKS) cluster and subscribe to AKS events with Azure Event Grid. ## Prerequisites In this quickstart, you'll create an AKS cluster and subscribe to AKS events. ### [Azure CLI](#tab/azure-cli) -Create an AKS cluster using the [az aks create][az-aks-create] command. The following example creates a resource group *MyResourceGroup* and a cluster named *MyAKS* with one node in the *MyResourceGroup* resource group: +1. Create an Azure resource group using the [`az group create`][az-group-create] command. -```azurecli-interactive -az group create --name MyResourceGroup --location eastus -az aks create -g MyResourceGroup -n MyAKS --location eastus --node-count 1 --generate-ssh-keys -``` + ```azurecli-interactive + az group create --name myResourceGroup --location eastus + ``` ++2. Create an AKS cluster using the [`az aks create`][az-aks-create] command. ++ ```azurecli-interactive + az aks create -g myResourceGroup -n myManagedCluster --location eastus --node-count 1 --generate-ssh-keys + ``` ### [Azure PowerShell](#tab/azure-powershell) -Create an AKS cluster using the [New-AzAksCluster][new-azakscluster] command. The following example creates a resource group *MyResourceGroup* and a cluster named *MyAKS* with one node in the *MyResourceGroup* resource group: +1. Create an Azure resource group using the [`New-AzResourceGroup`][new-azresourcegroup] cmdlet. ++ ```azurepowershell-interactive + New-AzResourceGroup -Name myResourceGroup -Location eastus + ``` ++2. Create an AKS cluster using the [`New-AzAksCluster`][new-azakscluster] cmdlet. -```azurepowershell-interactive -New-AzResourceGroup -Name MyResourceGroup -Location eastus -New-AzAksCluster -ResourceGroupName MyResourceGroup -Name MyAKS -Location eastus -NodeCount 1 -GenerateSshKey -``` + ```azurepowershell-interactive + New-AzAksCluster -ResourceGroupName MyResourceGroup -Name MyAKS -Location eastus -NodeCount 1 -GenerateSshKey + ``` New-AzAksCluster -ResourceGroupName MyResourceGroup -Name MyAKS -Location eastus ### [Azure CLI](#tab/azure-cli) -Create a namespace and event hub using [az eventhubs namespace create][az-eventhubs-namespace-create] and [az eventhubs eventhub create][az-eventhubs-eventhub-create]. The following example creates a namespace *MyNamespace* and an event hub *MyEventGridHub* in *MyNamespace*, both in the *MyResourceGroup* resource group. --```azurecli-interactive -az eventhubs namespace create --location eastus --name MyNamespace -g MyResourceGroup -az eventhubs eventhub create --name MyEventGridHub --namespace-name MyNamespace -g MyResourceGroup -``` --> [!NOTE] -> The *name* of your namespace must be unique. 
--Subscribe to the AKS events using [az eventgrid event-subscription create][az-eventgrid-event-subscription-create]: --```azurecli-interactive -SOURCE_RESOURCE_ID=$(az aks show -g MyResourceGroup -n MyAKS --query id --output tsv) -ENDPOINT=$(az eventhubs eventhub show -g MyResourceGroup -n MyEventGridHub --namespace-name MyNamespace --query id --output tsv) -az eventgrid event-subscription create --name MyEventGridSubscription \ source-resource-id $SOURCE_RESOURCE_ID \endpoint-type eventhub \endpoint $ENDPOINT-``` --Verify your subscription to AKS events using `az eventgrid event-subscription list`: --```azurecli-interactive -az eventgrid event-subscription list --source-resource-id $SOURCE_RESOURCE_ID -``` --The following example output shows you're subscribed to events from the *MyAKS* cluster and those events are delivered to the *MyEventGridHub* event hub: --```output -[ - { - "deadLetterDestination": null, - "deadLetterWithResourceIdentity": null, - "deliveryWithResourceIdentity": null, - "destination": { - "deliveryAttributeMappings": null, - "endpointType": "EventHub", - "resourceId": "/subscriptions/SUBSCRIPTION_ID/resourceGroups/MyResourceGroup/providers/Microsoft.EventHub/namespaces/MyNamespace/eventhubs/MyEventGridHub" - }, - "eventDeliverySchema": "EventGridSchema", - "expirationTimeUtc": null, - "filter": { - "advancedFilters": null, - "enableAdvancedFilteringOnArrays": null, - "includedEventTypes": [ - "Microsoft.ContainerService.NewKubernetesVersionAvailable" - ], - "isSubjectCaseSensitive": null, - "subjectBeginsWith": "", - "subjectEndsWith": "" - }, - "id": "/subscriptions/SUBSCRIPTION_ID/resourceGroups/MyResourceGroup/providers/Microsoft.ContainerService/managedClusters/MyAKS/providers/Microsoft.EventGrid/eventSubscriptions/MyEventGridSubscription", - "labels": null, - "name": "MyEventGridSubscription", - "provisioningState": "Succeeded", - "resourceGroup": "MyResourceGroup", - "retryPolicy": { - "eventTimeToLiveInMinutes": 1440, - "maxDeliveryAttempts": 30 - }, - "systemData": null, - "topic": "/subscriptions/SUBSCRIPTION_ID/resourceGroups/MyResourceGroup/providers/microsoft.containerservice/managedclusters/MyAKS", - "type": "Microsoft.EventGrid/eventSubscriptions" - } -] -``` +1. Create a namespace using the [`az eventhubs namespace create`][az-eventhubs-namespace-create] command. Your namespace name must be unique. ++ ```azurecli-interactive + az eventhubs namespace create --location eastus --name myNamespace -g myResourceGroup + ``` ++2. Create an event hub using the [`az eventhubs eventhub create`][az-eventhubs-eventhub-create] command. ++ ```azurecli-interactive + az eventhubs eventhub create --name myEventGridHub --namespace-name myNamespace -g myResourceGroup + ``` ++3. Subscribe to the AKS events using the [`az eventgrid event-subscription create`][az-eventgrid-event-subscription-create] command. ++ ```azurecli-interactive + SOURCE_RESOURCE_ID=$(az aks show -g MyResourceGroup -n MyAKS --query id --output tsv) ++ ENDPOINT=$(az eventhubs eventhub show -g MyResourceGroup -n MyEventGridHub --namespace-name MyNamespace --query id --output tsv) ++ az eventgrid event-subscription create --name MyEventGridSubscription \ + --source-resource-id $SOURCE_RESOURCE_ID \ + --endpoint-type eventhub \ + --endpoint $ENDPOINT + ``` ++4. Verify your subscription to AKS events using the [`az eventgrid event-subscription list`][az-eventgrid-event-subscription-list] command. 
++ ```azurecli-interactive + az eventgrid event-subscription list --source-resource-id $SOURCE_RESOURCE_ID + ``` ++ The following example output shows you're subscribed to events from the `myManagedCluster` cluster and those events are delivered to the `myEventGridHub` event hub: ++ ```output + [ + { + "deadLetterDestination": null, + "deadLetterWithResourceIdentity": null, + "deliveryWithResourceIdentity": null, + "destination": { + "deliveryAttributeMappings": null, + "endpointType": "EventHub", + "resourceId": "/subscriptions/SUBSCRIPTION_ID/resourceGroups/myResourceGroup/providers/Microsoft.EventHub/namespaces/myNamespace/eventhubs/myEventGridHub" + }, + "eventDeliverySchema": "EventGridSchema", + "expirationTimeUtc": null, + "filter": { + "advancedFilters": null, + "enableAdvancedFilteringOnArrays": null, + "includedEventTypes": [ + "Microsoft.ContainerService.NewKubernetesVersionAvailable" + ], + "isSubjectCaseSensitive": null, + "subjectBeginsWith": "", + "subjectEndsWith": "" + }, + "id": "/subscriptions/SUBSCRIPTION_ID/resourceGroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/myManagedCluster/providers/Microsoft.EventGrid/eventSubscriptions/myEventGridSubscription", + "labels": null, + "name": "myEventGridSubscription", + "provisioningState": "Succeeded", + "resourceGroup": "myResourceGroup", + "retryPolicy": { + "eventTimeToLiveInMinutes": 1440, + "maxDeliveryAttempts": 30 + }, + "systemData": null, + "topic": "/subscriptions/SUBSCRIPTION_ID/resourceGroups/myResourceGroup/providers/microsoft.containerservice/managedclusters/myManagedCluster", + "type": "Microsoft.EventGrid/eventSubscriptions" + } + ] + ``` ### [Azure PowerShell](#tab/azure-powershell) -Create a namespace and event hub using [New-AzEventHubNamespace][new-azeventhubnamespace] and [New-AzEventHub][new-azeventhub]. The following example creates a namespace *MyNamespace* and an event hub *MyEventGridHub* in *MyNamespace*, both in the *MyResourceGroup* resource group. --```azurepowershell-interactive -New-AzEventHubNamespace -Location eastus -Name MyNamespace -ResourceGroupName MyResourceGroup -New-AzEventHub -Name MyEventGridHub -Namespace MyNamespace -ResourceGroupName MyResourceGroup -``` --> [!NOTE] -> The *name* of your namespace must be unique. 
--Subscribe to the AKS events using [New-AzEventGridSubscription][new-azeventgridsubscription]: --```azurepowershell-interactive -$SOURCE_RESOURCE_ID = (Get-AzAksCluster -ResourceGroupName MyResourceGroup -Name MyAKS).Id -$ENDPOINT = (Get-AzEventHub -ResourceGroupName MyResourceGroup -EventHubName MyEventGridHub -Namespace MyNamespace).Id -$params = @{ - EventSubscriptionName = 'MyEventGridSubscription' - ResourceId = $SOURCE_RESOURCE_ID - EndpointType = 'eventhub' - Endpoint = $ENDPOINT -} -New-AzEventGridSubscription @params -``` --Verify your subscription to AKS events using `Get-AzEventGridSubscription`: --```azurepowershell-interactive -Get-AzEventGridSubscription -ResourceId $SOURCE_RESOURCE_ID | Select-Object -ExpandProperty PSEventSubscriptionsList -``` --The following example output shows you're subscribed to events from the *MyAKS* cluster and those events are delivered to the *MyEventGridHub* event hub: --```Output -EventSubscriptionName : MyEventGridSubscription -Id : /subscriptions/SUBSCRIPTION_ID/resourceGroups/MyResourceGroup/providers/Microsoft.ContainerService/managedClusters/MyAKS/providers/Microsoft.EventGrid/eventSubscriptions/MyEventGridSubscription -Type : Microsoft.EventGrid/eventSubscriptions -Topic : /subscriptions/SUBSCRIPTION_ID/resourceGroups/myresourcegroup/providers/microsoft.containerservice/managedclusters/myaks -Filter : Microsoft.Azure.Management.EventGrid.Models.EventSubscriptionFilter -Destination : Microsoft.Azure.Management.EventGrid.Models.EventHubEventSubscriptionDestination -ProvisioningState : Succeeded -Labels : -EventTtl : 1440 -MaxDeliveryAttempt : 30 -EventDeliverySchema : EventGridSchema -ExpirationDate : -DeadLetterEndpoint : -Endpoint : /subscriptions/SUBSCRIPTION_ID/resourceGroups/MyResourceGroup/providers/Microsoft.EventHub/namespaces/MyNamespace/eventhubs/MyEventGridHub -``` +1. Create a namespace using the [`New-AzEventHubNamespace`][new-azeventhubnamespace] cmdlet. Your namespace name must be unique. ++ ```azurepowershell-interactive + New-AzEventHubNamespace -Location eastus -Name MyNamespace -ResourceGroupName MyResourceGroup + ``` ++2. Create an event hub using the [`New-AzEventHub`][new-azeventhub] cmdlet. ++ ```azurepowershell-interactive + New-AzEventHub -Name MyEventGridHub -Namespace MyNamespace -ResourceGroupName MyResourceGroup + ``` ++3. Subscribe to the AKS events using the [`New-AzEventGridSubscription`][new-azeventgridsubscription] cmdlet. ++ ```azurepowershell-interactive + $SOURCE_RESOURCE_ID = (Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myManagedCluster).Id ++ $ENDPOINT = (Get-AzEventHub -ResourceGroupName myResourceGroup -EventHubName myEventGridHub -Namespace myNamespace).Id ++ $params = @{ + EventSubscriptionName = 'myEventGridSubscription' + ResourceId = $SOURCE_RESOURCE_ID + EndpointType = 'eventhub' + Endpoint = $ENDPOINT + } ++ New-AzEventGridSubscription @params + ``` ++4. Verify your subscription to AKS events using the [`Get-AzEventGridSubscription`][get-azeventgridsubscription] cmdlet. 
++ ```azurepowershell-interactive + Get-AzEventGridSubscription -ResourceId $SOURCE_RESOURCE_ID | Select-Object -ExpandProperty PSEventSubscriptionsList + ``` ++ The following example output shows you're subscribed to events from the `myManagedCluster` cluster and those events are delivered to the `myEventGridHub` event hub: ++ ```Output + EventSubscriptionName : myEventGridSubscription + Id : /subscriptions/SUBSCRIPTION_ID/resourceGroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/myManagedCluster/providers/Microsoft.EventGrid/eventSubscriptions/myEventGridSubscription + Type : Microsoft.EventGrid/eventSubscriptions + Topic : /subscriptions/SUBSCRIPTION_ID/resourceGroups/myResourceGroup/providers/microsoft.containerservice/managedclusters/myManagedCluster + Filter : Microsoft.Azure.Management.EventGrid.Models.EventSubscriptionFilter + Destination : Microsoft.Azure.Management.EventGrid.Models.EventHubEventSubscriptionDestination + ProvisioningState : Succeeded + Labels : + EventTtl : 1440 + MaxDeliveryAttempt : 30 + EventDeliverySchema : EventGridSchema + ExpirationDate : + DeadLetterEndpoint : + Endpoint : /subscriptions/SUBSCRIPTION_ID/resourceGroups/myResourceGroup/providers/Microsoft.EventHub/namespaces/myNamespace/eventhubs/myEventGridHub + ``` -When AKS events occur, you'll see those events appear in your event hub. For example, when the list of available Kubernetes versions for your clusters changes, you'll see a `Microsoft.ContainerService.NewKubernetesVersionAvailable` event. For more information on the events AKS emits, see [Azure Kubernetes Service (AKS) as an Event Grid source][aks-events]. +When AKS events occur, the events appear in your event hub. For example, when the list of available Kubernetes versions for your clusters changes, you see a `Microsoft.ContainerService.NewKubernetesVersionAvailable` event. For more information on the events AKS emits, see [Azure Kubernetes Service (AKS) as an Event Grid source][aks-events]. ## Delete the cluster and subscriptions ### [Azure CLI](#tab/azure-cli) -Use the [az group delete][az-group-delete] command to remove the resource group, the AKS cluster, namespace, and event hub, and all related resources. +* Remove the resource group, AKS cluster, namespace, event hub, and all related resources using the [`az group delete`][az-group-delete] command. -```azurecli-interactive -az group delete --name MyResourceGroup --yes --no-wait -``` + ```azurecli-interactive + az group delete --name myResourceGroup --yes --no-wait + ``` ### [Azure PowerShell](#tab/azure-powershell) -Use the [Remove-AzResourceGroup][remove-azresourcegroup] cmdlet to remove the resource group, the AKS cluster, namespace, and event hub, and all related resources. +* Remove the resource group, AKS cluster, namespace, event hub, and all related resources using the [`Remove-AzResourceGroup`][remove-azresourcegroup] cmdlet. -```azurepowershell-interactive -Remove-AzResourceGroup -Name MyResourceGroup -``` + ```azurepowershell-interactive + Remove-AzResourceGroup -Name myResourceGroup + ``` -> [!NOTE] -> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete]. -> -> If you used a managed identity, the identity is managed by the platform and does not require removal. 
+ > [!NOTE] + > When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster isn't removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete]. + > + > If you used a managed identity, the identity is managed by the platform and doesn't require removal. ## Next steps In this quickstart, you deployed a Kubernetes cluster and then subscribed to AKS events in Azure Event Hubs. -To learn more about AKS, and walk through a complete code to deployment example, continue to the Kubernetes cluster tutorial. +To learn more about AKS, and walk through a complete code to deployment example, continue to the following Kubernetes cluster tutorial. > [!div class="nextstepaction"] > [AKS tutorial][aks-tutorial] To learn more about AKS, and walk through a complete code to deployment example, [az-group-delete]: /cli/azure/group#az_group_delete [sp-delete]: kubernetes-service-principal.md#other-considerations [remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup+[az-group-create]: /cli/azure/group#az_group_create +[az-eventgrid-event-subscription-list]: /cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-list +[get-azeventgridsubscription]: /powershell/module/az.eventgrid/get-azeventgridsubscription +[new-azresourcegroup]: /powershell/module/az.resources/new-azresourcegroup |
aks | Supported Kubernetes Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md | Note important changes to make, before you upgrade to any of the available minor | 1.24 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.9.3<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Ingress AppGateway 1.2.1<br>Eraser v1.1.1<br>Azure Workload Identity V1.1.1<br>ASC Defender 1.0.56<br>AAD Pod Identity 1.8.13.6<br>Gitops 1.7.0<br>KMS 0.5.0| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 18.04 Cgroups V1 <br>ContainerD 1.7<br>| No Breaking Changes | None | 1.25 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.9.3<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Ingress AppGateway 1.2.1<br>Eraser v1.1.1<br>Azure Workload Identity V1.1.1<br>ASC Defender 1.0.56<br>AAD Pod Identity 1.8.13.6<br>Gitops 1.7.0<br>KMS 0.5.0| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 18.04 Cgroups V1 <br>ContainerD 1.7<br>| Ubuntu 22.04 by default with cgroupv2 and Overlay VPA 0.13.0 |CgroupsV2 - If you deploy Java applications with the JDK, prefer to use JDK 11.0.16 and later or JDK 15 and later, which fully support cgroup v2 | 1.26 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.9.3<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Ingress AppGateway 1.2.1<br>Eraser v1.1.1<br>Azure Workload Identity V1.1.1<br>ASC Defender 1.0.56<br>AAD Pod Identity 1.8.13.6<br>Gitops 1.7.0<br>KMS 0.5.0| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7<br>|No Breaking Changes |None-| 1.27 Preview | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.10.0<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Ingress AppGateway 1.2.1<br>Eraser v1.1.1<br>Azure Workload Identity V1.1.1<br>ASC Defender 1.0.56<br>AAD Pod Identity 1.8.13.6<br>Gitops 1.7.0<br>KMS 0.5.0|Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V1 <br>ContainerD 1.7<br>|Keda 2.10.0 |None +| 1.27 Preview | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.10.0<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Ingress AppGateway 1.2.1<br>Eraser v1.1.1<br>Azure Workload Identity V1.1.1<br>ASC Defender 1.0.56<br>AAD Pod Identity 1.8.13.6<br>Gitops 1.7.0<br>KMS 0.5.0|Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V1 <br>ContainerD 1.7 for Linux and 1.6 for Windows<br>|Keda 2.10.0 |Because of Ubuntu 22.04 FIPS certification status, we'll switch AKS FIPS nodes from 18.04 to 20.04 from 1.27 preview onwards. ## Alias minor version > [!NOTE] |
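To see which of the Kubernetes minor and patch versions referenced in this table are currently available in a given region, a quick Azure CLI check is sketched below; the region value is a placeholder:

```azurecli-interactive
# List AKS Kubernetes versions available in a region
az aks get-versions --location <region> --output table
```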
aks | Workload Identity Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-overview.md | The following client libraries are the **minimum** version required | Language | Library | Image | Example | Has Windows | |--|--|-|-|-|-| .NET | [microsoft-authentication-library-for-dotnet](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) | ghcr.io/azure/azure-workload-identity/msal-net | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-net/akvdotnet) | Yes | -| Go | [microsoft-authentication-library-for-go](https://github.com/AzureAD/microsoft-authentication-library-for-go) | ghcr.io/azure/azure-workload-identity/msal-go | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-go) | Yes | -| Java | [microsoft-authentication-library-for-java](https://github.com/AzureAD/microsoft-authentication-library-for-java) | ghcr.io/azure/azure-workload-identity/msal-java | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-java) | No | -| JavaScript | [microsoft-authentication-library-for-js](https://github.com/AzureAD/microsoft-authentication-library-for-js) | ghcr.io/azure/azure-workload-identity/msal-node | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-node) | No | -| Python | [microsoft-authentication-library-for-python](https://github.com/AzureAD/microsoft-authentication-library-for-python) | ghcr.io/azure/azure-workload-identity/msal-python | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-python) | No | +| .NET | [microsoft-authentication-library-for-dotnet](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) | ghcr.io/azure/azure-workload-identity/msal-net:latest | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-net/akvdotnet) | Yes | +| Go | [microsoft-authentication-library-for-go](https://github.com/AzureAD/microsoft-authentication-library-for-go) | ghcr.io/azure/azure-workload-identity/msal-go:latest | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-go) | Yes | +| Java | [microsoft-authentication-library-for-java](https://github.com/AzureAD/microsoft-authentication-library-for-java) | ghcr.io/azure/azure-workload-identity/msal-java:latest | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-java) | No | +| JavaScript | [microsoft-authentication-library-for-js](https://github.com/AzureAD/microsoft-authentication-library-for-js) | ghcr.io/azure/azure-workload-identity/msal-node:latest | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-node) | No | +| Python | [microsoft-authentication-library-for-python](https://github.com/AzureAD/microsoft-authentication-library-for-python) | ghcr.io/azure/azure-workload-identity/msal-python:latest | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-python) | No | ## Limitations If you've used [Azure AD pod-managed identity][use-azure-ad-pod-identity], think ### Service account annotations +All annotations are optional. If the annotation is not specified, the default value will be used. + |Annotation |Description |Default | |--||--| |`azure.workload.identity/client-id` |Represents the Azure AD application<br> client ID to be used with the pod. || If you've used [Azure AD pod-managed identity][use-azure-ad-pod-identity], think ### Pod annotations +All annotations are optional. 
If the annotation is not specified, the default value will be used. + |Annotation |Description |Default | |--||--| |`azure.workload.identity/service-account-token-expiration` |Represents the `expirationSeconds` field for the projected service account token. It's an optional field that you configure to prevent any downtime caused by errors during service account token refresh. Kubernetes service account token expiry isn't correlated with Azure AD tokens. Azure AD tokens expire in 24 hours after they're issued. <sup>1</sup> |3600<br> Supported range is 3600-86400. | |
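As a small sketch of how the `azure.workload.identity/client-id` service account annotation from the table above might be applied, assuming an existing service account and a placeholder client ID:

```bash
# Annotate an existing Kubernetes service account with the user-assigned identity's client ID
kubectl annotate serviceaccount <service-account-name> \
    --namespace <namespace> \
    azure.workload.identity/client-id=<user-assigned-client-id>
```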
api-management | Api Management Configuration Repository Git | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-configuration-repository-git.md | The `apis` folder contains a folder for each API in the service instance, which * `apis\<api name>\operations\` - Folder containing `<operation name>.description.html` files that map to the operations in the API. Each file contains the description of a single operation in the API, which maps to the `description` property of the [operation entity](/rest/api/apimanagement/current-ga/operation) in the REST API. ### apiVersionSets folder-The `apiVerionSets` folder contains a folder for each API version set created for an API, and contains the following items. +The `apiVersionSets` folder contains a folder for each API version set created for an API, and contains the following items. * `apiVersionSets\<api version set Id>\configuration.json` - Configuration for the version set. This is the same information that would be returned if you were to call the [Get a specific version set](/rest/api/apimanagement/current-ga/api-version-set/get) operation. |
api-management | Api Management Howto Mutual Certificates For Clients | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-mutual-certificates-for-clients.md | You can also create policy expressions with the [`context` variable](api-managem > [!IMPORTANT] > * Starting May 2021, the `context.Request.Certificate` property only requests the certificate when the API Management instance's [`hostnameConfiguration`](/rest/api/apimanagement/current-ga/api-management-service/create-or-update#hostnameconfiguration) sets the `negotiateClientCertificate` property to True. By default, `negotiateClientCertificate` is set to False.-> * If TLS renegotiation is disabled in your client, you may see TLS errors when requesting the certificate using the `context.Request.Certificate` property. If this occurs, enable TLS renegotation settings in the client. +> * If TLS renegotiation is disabled in your client, you may see TLS errors when requesting the certificate using the `context.Request.Certificate` property. If this occurs, enable TLS renegotiation settings in the client. ### Checking the issuer and subject |
api-management | Api Management Template Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-template-resources.md | The following localization options are supported: |ValidationErrorCredentialsInvalid|Email or password is invalid. Please correct the errors and try again.| |WebAuthenticationRequestIsNotValid|Request is not valid| |WebAuthenticationUserIsNotConfirm|Please confirm your registration before attempting to sign in.| -|WebAuthenticationInvalidEmailFormated|Email is invalid: {0}| +|WebAuthenticationInvalidEmailFormatted|Email is invalid: {0}| |WebAuthenticationUserNotFound|User not found| |WebAuthenticationTenantNotRegistered|Your account belongs to an Azure Active Directory tenant which is not authorized to access this portal.| |WebAuthenticationAuthenticationFailed|Authentication has failed.| |
api-management | How To Deploy Self Hosted Gateway Azure Arc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-azure-arc.md | Deploying the API Management gateway on an Azure Arc-enabled Kubernetes cluster ```azurecli az k8s-extension create --cluster-type connectedClusters --cluster-name <cluster-name> \- --resource-group <rg-name> --name <extension-name> --extension-type Microsoft.ApiManagement.Gateway \ - --scope namespace --target-namespace <namespace> \ - --configuration-settings gateway.endpoint='<Configuration URL>' \ - --configuration-protected-settings gateway.authKey='<token>' \ - --configuration-settings service.type='LoadBalancer' --release-train preview + --resource-group <rg-name> --name <extension-name> --extension-type Microsoft.ApiManagement.Gateway \ + --scope namespace --target-namespace <namespace> \ + --configuration-settings gateway.configuration.uri='<Configuration URL>' \ + --config-protected-settings gateway.auth.token='<token>' \ + --configuration-settings service.type='LoadBalancer' --release-train preview ``` > [!TIP]- > `-protected-` flag for `authKey` is optional, but recommended. + > `-protected-` flag for `gateway.auth.token` is optional, but recommended. 1. Verify deployment status using the following CLI command: ```azurecli Deploying the API Management gateway on an Azure Arc-enabled Kubernetes cluster ## Deploy the API Management gateway extension using Azure portal 1. In the Azure portal, navigate to your Azure Arc-connected cluster.-1. In the left menu, select **Extensions (preview)** > **+ Add** > **API Management gateway (preview)**. +1. In the left menu, select **Extensions** > **+ Add** > **API Management gateway (preview)**. 1. Select **Create**. 1. In the **Install API Management gateway** window, configure the gateway extension: * Select the subscription and resource group for your API Management instance. Deploying the API Management gateway on an Azure Arc-enabled Kubernetes cluster ## Available extension configurations +The self-hosted gateway extension for Azure Arc provides many configuration settings to customize the extension for your environment. This section lists required deployment settings and optional settings for integration with Log Analytics. For a complete list of settings, see the self-hosted gateway extension [reference](self-hosted-gateway-arc-reference.md). ++### Required settings + The following extension configurations are **required**. | Setting | Description | | - | -- | -| `gateway.endpoint` | The gateway endpoint's Configuration URL. | -| `gateway.authKey` | Token for access to the gateway. | +| `gateway.configuration.uri` | Configuration endpoint in API Management service for the self-hosted gateway. | +| `gateway.auth.token` | Gateway token (authentication key) to authenticate to API Management service. Typically starts with `GatewayKey`. | | `service.type` | Kubernetes service configuration for the gateway: `LoadBalancer`, `NodePort`, or `ClusterIP`. | ### Log Analytics settings To enable monitoring of the self-hosted gateway, configure the following Log Ana * Discover all [Azure Arc-enabled Kubernetes extensions](../azure-arc/kubernetes/extensions.md). * Learn more about [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md). 
* Learn more about guidance to [run the self-hosted gateway on Kubernetes in production](how-to-self-hosted-gateway-on-kubernetes-in-production.md).+* For configuration options, see the self-hosted gateway extension [reference](self-hosted-gateway-arc-reference.md). |
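For the deployment status check referenced in the steps above, one way to inspect the extension is sketched here; it assumes the `k8s-extension` Azure CLI extension is installed and reuses the placeholder names from the create command:

```azurecli-interactive
# Inspect the provisioning state of the gateway extension on the Arc-enabled cluster
az k8s-extension show \
    --cluster-type connectedClusters \
    --cluster-name <cluster-name> \
    --resource-group <rg-name> \
    --name <extension-name>
```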
api-management | Import Logic App As Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-logic-app-as-api.md | In this article, you learn how to: > - Test the API in the Azure portal > [!NOTE]-> API Management supports automated import of a Logic App (Consumption) resource. which runs in the multi-tenant Logic Apps environment. Learn more about [single-tenant versus muti-tenant Logic Apps](../logic-apps/single-tenant-overview-compare.md). +> API Management supports automated import of a Logic App (Consumption) resource. which runs in the multi-tenant Logic Apps environment. Learn more about [single-tenant versus multi-tenant Logic Apps](../logic-apps/single-tenant-overview-compare.md). ## Prerequisites |
api-management | Json To Xml Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/json-to-xml-policy.md | The `json-to-xml` policy converts a request or response body from JSON to XML. consider-accept-header="true | false" parse-date="true | false" namespace-separator="separator character"- namespace-prefix="namepsace prefix" + namespace-prefix="namespace prefix" attribute-block-name="name" /> ``` |
api-management | Sap Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/sap-api.md | In this article, you'll: 1. From the side navigation menu, under the **APIs** section, select **APIs**. 1. Under **Create a new definition**, select **OpenAPI specification**. - :::image type="content" source="./media/import-api-from-oas/oas-api.png" alt-text="OpenAPI specifiction"::: + :::image type="content" source="./media/import-api-from-oas/oas-api.png" alt-text="OpenAPI specification"::: 1. Click **Select a file**, and select the `openapi-spec.json` file that you saved locally in a previous step. |
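The article imports the OpenAPI specification through the portal. As an alternative sketch only, the same `openapi-spec.json` file could be imported with the Azure CLI; the resource names, API ID, and path below are assumed placeholders:

```azurecli-interactive
# Import a JSON-format OpenAPI definition into an existing API Management instance
az apim api import \
    --resource-group <resource-group> \
    --service-name <apim-instance-name> \
    --api-id sap-odata-api \
    --path sap \
    --specification-format OpenApiJson \
    --specification-path ./openapi-spec.json
```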
api-management | Self Hosted Gateway Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-overview.md | -This article explains how the self-hosted gateway feature of Azure API Management enables hybrid and multi-cloud API management, presents its high-level architecture, and highlights its capabilities. +This article explains how the self-hosted gateway feature of Azure API Management enables hybrid and multicloud API management, presents its high-level architecture, and highlights its capabilities. For an overview of the features across the various gateway offerings, see [API gateway in API Management](api-management-gateways-overview.md#feature-comparison-managed-versus-self-hosted-gateways). [!INCLUDE [api-management-availability-premium-dev](../../includes/api-management-availability-premium-dev.md)] -## Hybrid and multi-cloud API management +## Hybrid and multicloud API management -The self-hosted gateway feature expands API Management support for hybrid and multi-cloud environments and enables organizations to efficiently and securely manage APIs hosted on-premises and across clouds from a single API Management service in Azure. +The self-hosted gateway feature expands API Management support for hybrid and multicloud environments and enables organizations to efficiently and securely manage APIs hosted on-premises and across clouds from a single API Management service in Azure. With the self-hosted gateway, customers have the flexibility to deploy a containerized version of the API Management gateway component to the same environments where they host their APIs. All self-hosted gateways are managed from the API Management service they're federated with, thus providing customers with the visibility and unified management experience across all internal and external APIs. We provide a variety of container images for self-hosted gateways to meet your n You can find a full list of available tags [here](https://mcr.microsoft.com/product/azure-api-management/gateway/tags). -<sup>1</sup>Preview versions are not officially supported and are for experimental purposes only.<br/> +<sup>1</sup>Preview versions aren't officially supported and are for experimental purposes only. See the [self-hosted gateway support policies](self-hosted-gateway-support-policies.md#self-hosted-gateway-container-image-support-coverage). <br/> ### Use of tags in our official deployment options To operate properly, each self-hosted gateway needs outbound connectivity on por | Description | Required for v1 | Required for v2 | Notes | |:|:|:|:| | Hostname of the configuration endpoint | `<apim-service-name>.management.azure-api.net` | `<apim-service-name>.configuration.azure-api.net` | Connectivity to v2 endpoint requires DNS resolution of the default hostname. |-| Public IP address of the API Management instance | ✔️ | ✔️ | IP addresses of primary location is sufficient. | +| Public IP address of the API Management instance | ✔️ | ✔️ | IP address of primary location is sufficient. | | Public IP addresses of Azure Storage [service tag](../virtual-network/service-tags-overview.md) | ✔️ | Optional<sup>2</sup> | IP addresses must correspond to primary location of API Management instance. 
| | Hostname of Azure Blob Storage account | ✔️ | Optional<sup>2</sup> | Account associated with instance (`<blob-storage-account-name>.blob.core.windows.net`) | | Hostname of Azure Table Storage account | ✔️ | Optional<sup>2</sup> | Account associated with instance (`<table-storage-account-name>.table.core.windows.net`) | To operate properly, each self-hosted gateway needs outbound connectivity on por > * The associated storage account names are listed in the service's **Network connectivity status** page in the Azure portal. > * Public IP addresses underlying the associated storage accounts are dynamic and can change without notice. +### Authentication options ++To authenticate the connection between the self-hosted gateway and the cloud-based API Management instance's configuration endpoint, you have the following options in the gateway container's [configuration settings](self-hosted-gateway-settings-reference.md). ++|Option |Considerations | +||| +| [Azure Active Directory authentication](self-hosted-gateway-enable-azure-ad.md) | Configure one or more Azure AD apps for access to gateway<br/><br/>Manage access separately per app<br/><br/>Configure longer expiry times for secrets in accordance with your organization's policies<br/><br/>Use standard Azure AD procedures to assign or revoke user or group permissions to app and to rotate secrets<br/><br/> | +| Gateway access token (also called authentication key) | Token expires every 30 days at maximum and must be renewed in the containers<br/><br/>Backed by a gateway key that can be rotated independently (for example, to revoke access) <br/><br/>Regenerating gateway key invalidates all access tokens created with it | + ### Connectivity failures When connectivity to Azure is lost, the self-hosted gateway is unable to receive configuration updates, report its status, or upload telemetry. As of v2.1.1 and above, you can manage the ciphers that are being used through t - Learn more about the various gateways in our [API gateway overview](api-management-gateways-overview.md) - Learn more about the support policy for the [self-hosted gateway](self-hosted-gateway-support-policies.md)-- Learn more about [API Management in a Hybrid and Multi-Cloud World](https://aka.ms/hybrid-and-multi-cloud-api-management)+- Learn more about [API Management in a hybrid and multicloud world](https://aka.ms/hybrid-and-multi-cloud-api-management) - Learn more about guidance for [running the self-hosted gateway on Kubernetes in production](how-to-self-hosted-gateway-on-kubernetes-in-production.md) - [Deploy self-hosted gateway to Docker](how-to-deploy-self-hosted-gateway-docker.md) - [Deploy self-hosted gateway to Kubernetes](how-to-deploy-self-hosted-gateway-kubernetes.md) |
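To make the gateway access token option above concrete, here's a minimal, hedged sketch of running the v2 self-hosted gateway container against the configuration endpoint with a `GatewayKey` token passed as container settings; the service name, token, and image tag are placeholders:

```bash
# Run the self-hosted gateway container, authenticating to the configuration endpoint with a gateway token
docker run -d -p 80:8080 -p 443:8081 --name <gateway-name> \
    --env config.service.endpoint="https://<apim-service-name>.configuration.azure-api.net" \
    --env config.service.auth="GatewayKey <gateway-token>" \
    mcr.microsoft.com/azure-api-management/gateway:<tag>
```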
app-service | Overview Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-security.md | Except for the **Isolated** pricing tier, all tiers run your apps on the shared - Serve internal application using an internal load balancer (ILB), which allows access only from inside your Azure Virtual Network. The ILB has an IP address from your private subnet, which provides total isolation of your apps from the internet. - [Use an ILB behind a web application firewall (WAF)](environment/integrate-with-application-gateway.md). The WAF offers enterprise-level protection to your public-facing applications, such as DDoS protection, URI filtering, and SQL injection prevention. +## DDoS protection ++For web workloads, we highly recommend utilizing [Azure DDoS protection](../ddos-protection/ddos-protection-overview.md) and a [web application firewall](../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to deploy [Azure Front Door](../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [protection against network-level DDoS attacks](../frontdoor/front-door-ddos.md). + For more information, see [Introduction to Azure App Service Environments](environment/intro.md). |
app-service | Security Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/security-recommendations.md | This article contains security recommendations for Azure App Service. Implementi ## General | Recommendation | Comments |-|-|-|-| +|-|-| | Stay up to date | Use the latest versions of supported platforms, programming languages, protocols, and frameworks. | ## Identity and access management This article contains security recommendations for Azure App Service. Implementi | Use the isolated pricing tier | Except for the isolated pricing tier, all tiers run your apps on the shared network infrastructure in Azure App Service. The isolated tier gives you complete network isolation by running your apps inside a dedicated [App Service environment](environment/intro.md). An App Service environment runs in your own instance of [Azure Virtual Network](../virtual-network/index.yml).| | Use secure connections when accessing on-premises resources | You can use [Hybrid connections](app-service-hybrid-connections.md), [Virtual Network integration](./overview-vnet-integration.md), or [App Service environment's](environment/intro.md) to connect to on-premises resources. | | Limit exposure to inbound network traffic | Network security groups allow you to restrict network access and control the number of exposed endpoints. For more information, see [How To Control Inbound Traffic to an App Service Environment](environment/app-service-app-service-environment-control-inbound-traffic.md). |+| Protect against DDoS attacks | For web workloads, we highly recommend utilizing [Azure DDoS protection](../ddos-protection/ddos-protection-overview.md) and a [web application firewall](../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to deploy [Azure Front Door](../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [protection against network-level DDoS attacks](../frontdoor/front-door-ddos.md). | ## Monitoring |
application-gateway | Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/features.md | -> For web workloads, we highly recommend utilizing [**Azure DDoS protection**](../ddos-protection/ddos-protection-overview.md) and a [**web application firewall**](../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to employ [**Azure Front Door**](../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [**protection against network-level DDoS attacks**](../frontdoor/front-door-ddos.md). +> For web workloads, we highly recommend utilizing [**Azure DDoS protection**](../ddos-protection/ddos-protection-overview.md) and a [**web application firewall**](../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to deploy [**Azure Front Door**](../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [**protection against network-level DDoS attacks**](../frontdoor/front-door-ddos.md). Application Gateway includes the following features: Web Application Firewall (WAF) is a service that provides centralized protection Web applications are increasingly targets of malicious attacks that exploit common known vulnerabilities. Common among these exploits are SQL injection attacks, cross site scripting attacks to name a few. Preventing such attacks in application code can be challenging and may require rigorous maintenance, patching and monitoring at many layers of the application topology. A centralized web application firewall helps make security management much simpler and gives better assurance to application administrators against threats or intrusions. A WAF solution can also react to a security threat faster by patching a known vulnerability at a central location versus securing each of individual web applications. Existing application gateways can be converted to a Web Application Firewall enabled application gateway easily. -For more information, see [What is Azure Web Application Firewall?](../web-application-firewall/overview.md). +Refer to [Application DDoS protection](../web-application-firewall/shared/application-ddos-protection.md) for guidance on how to use Azure WAF with Application Gateway to protect against DDoS attacks. For more information, see [What is Azure Web Application Firewall?](../web-application-firewall/overview.md). ## Ingress Controller for AKS Application Gateway Ingress Controller (AGIC) allows you to use Application Gateway as the ingress for an [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) cluster. |
application-gateway | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/overview.md | To learn about Application Gateway features, see [Azure Application Gateway feat To learn about Application Gateway infrastructure, see [Azure Application Gateway infrastructure configuration](configuration-infrastructure.md). +## Security ++* Protect your applications against L7 DDoS attacks using WAF. For more information, see [Application DDoS protection](../web-application-firewall/shared/application-ddos-protection.md). ++* Protect your apps from malicious actors with Bot manager rules based on Microsoft's own Threat Intelligence. ++* Secure applications against L3 and L4 DDoS attacks with [Azure DDoS Protection](../ddos-protection/ddos-protection-overview.md) plan. ++* Privately connect to your backend behind Application Gateway with [Private Link](private-link.md) and embrace a zero-trust access model. ++* Eliminate risk of data exfiltration and control privacy of communication from within the virtual network with a fully [Private-only Application Gateway deployment](application-gateway-private-deployment.md). ++* Provide a centralized security experience for your application via Azure Policy, Azure Advisor, and Microsoft Sentinel integration that ensures consistent security features across apps. ++ ## Pricing and SLA For Application Gateway pricing information, see [Application Gateway pricing](https://azure.microsoft.com/pricing/details/application-gateway/). |
applied-ai-services | Form Recognizer Container Install Run | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-install-run.md | - ports: + ports: - "5000:5050" azure-cognitive-service-layout: container_name: azure-cognitive-service-layout docker-compose up The following code sample is a self-contained `docker compose` example to run the Form Recognizer General Document container. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your ID and Read container instances. ```yml- version: "3.9" - - azure-cognitive-service-receipt: - container_name: azure-cognitive-service-id-document - image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/id-document-3.0 - environment: - - EULA=accept - - billing={FORM_RECOGNIZER_ENDPOINT_URI} - - apiKey={FORM_RECOGNIZER_KEY} - - AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000 - ports: - - "5000:5050" - azure-cognitive-service-read: - container_name: azure-cognitive-service-read - image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/read-3.0 - environment: - - EULA=accept - - billing={FORM_RECOGNIZER_ENDPOINT_URI} - - apiKey={FORM_RECOGNIZER_KEY} --+version: "3.9" ++ azure-cognitive-service-receipt: + container_name: azure-cognitive-service-id-document + image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/id-document-3.0 + environment: + - EULA=accept + - billing={FORM_RECOGNIZER_ENDPOINT_URI} + - apiKey={FORM_RECOGNIZER_KEY} + - AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000 + ports: + - "5000:5050" + azure-cognitive-service-read: + container_name: azure-cognitive-service-read + image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/read-3.0 + environment: + - EULA=accept + - billing={FORM_RECOGNIZER_ENDPOINT_URI} + - apiKey={FORM_RECOGNIZER_KEY} ``` Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command: |
automation | Overview Monitoring Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/overview-monitoring-agent.md | To enable tracking of Windows Services data, you must upgrade CT extension and u #### [For Arc-enabled Windows VMs](#tab/win-arc-vm) ```powershell-interactive-– az connectedmachine extension create --name ChangeTracking-Linux --publisher Microsoft.Azure.ChangeTrackingAndInventory --type ChangeTracking-Windows --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location> --enable-auto-upgrade true +– az connectedmachine extension create --name ChangeTracking-Windows --publisher Microsoft.Azure.ChangeTrackingAndInventory --type ChangeTracking-Windows --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location> --enable-auto-upgrade true ``` #### [For Arc-enabled Linux VMs](#tab/lin-arc-vm) ```powershell-interactive-- az connectedmachine extension create --name ChangeTracking-Windows --publisher Microsoft.Azure.ChangeTrackingAndInventory --type ChangeTracking-Linux --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location> --enable-auto-upgrade true+- az connectedmachine extension create --name ChangeTracking-Linux --publisher Microsoft.Azure.ChangeTrackingAndInventory --type ChangeTracking-Linux --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location> --enable-auto-upgrade true ``` |
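After running either command above, a quick way to confirm the extension landed on the Arc-enabled machine is sketched below, assuming the `connectedmachine` Azure CLI extension is installed:

```azurecli-interactive
# List extensions on the Arc-enabled server and check the ChangeTracking entry's provisioning state
az connectedmachine extension list \
    --machine-name <arc-server-name> \
    --resource-group <resource-group-name> \
    --output table
```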
azure-arc | Extensions Release | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions-release.md | For more information, see [Understand Azure Policy for Kubernetes clusters](../. ## Azure Key Vault Secrets Provider -- **Supported distributions**: AKS on Azure Stack HCI, AKS hybrid clusters provisioned from Azure, Cluster API Azure, Google Kubernetes Engine, Canonical Kubernetes Distribution, OpenShift Kubernetes Distribution, Amazon Elastic Kubernetes Service, VMWare Tanzu Kubernetes Grid+- **Supported distributions**: AKS on Azure Stack HCI, AKS hybrid clusters provisioned from Azure, Cluster API Azure, Google Kubernetes Engine, Canonical Kubernetes Distribution, OpenShift Kubernetes Distribution, Amazon Elastic Kubernetes Service, VMware Tanzu Kubernetes Grid The Azure Key Vault Provider for Secrets Store CSI Driver allows for the integration of Azure Key Vault as a secrets store with a Kubernetes cluster via a CSI volume. For Azure Arc-enabled Kubernetes clusters, you can install the Azure Key Vault Secrets Provider extension to fetch secrets. For more information, see [Use the Azure Key Vault Secrets Provider extension to ## Microsoft Defender for Containers -- **Supported distributions**: AKS hybrid clusters provisioned from Azure, Cluster API Azure, Azure Red Hat OpenShift, Red Hat OpenShift (version 4.6 or newer), Google Kubernetes Engine Standard, Amazon Elastic Kubernetes Service, VMWare Tanzu Kubernetes Grid, Rancher Kubernetes Engine, Canonical Kubernetes Distribution+- **Supported distributions**: AKS hybrid clusters provisioned from Azure, Cluster API Azure, Azure Red Hat OpenShift, Red Hat OpenShift (version 4.6 or newer), Google Kubernetes Engine Standard, Amazon Elastic Kubernetes Service, VMware Tanzu Kubernetes Grid, Rancher Kubernetes Engine, Canonical Kubernetes Distribution Microsoft Defender for Containers is the cloud-native solution that is used to secure your containers so you can improve, monitor, and maintain the security of your clusters, containers, and their applications. It gathers information related to security like audit log data from the Kubernetes cluster, and provides recommendations and threat alerts based on gathered data. For more information, see [Enable Microsoft Defender for Containers](../../defen ## Azure Arc-enabled Open Service Mesh -- **Supported distributions**: AKS, AKS on Azure Stack HCI, AKS hybrid clusters provisioned from Azure, Cluster API Azure, Google Kubernetes Engine, Canonical Kubernetes Distribution, Rancher Kubernetes Engine, OpenShift Kubernetes Distribution, Amazon Elastic Kubernetes Service, VMWare Tanzu Kubernetes Grid+- **Supported distributions**: AKS, AKS on Azure Stack HCI, AKS hybrid clusters provisioned from Azure, Cluster API Azure, Google Kubernetes Engine, Canonical Kubernetes Distribution, Rancher Kubernetes Engine, OpenShift Kubernetes Distribution, Amazon Elastic Kubernetes Service, VMware Tanzu Kubernetes Grid [Open Service Mesh (OSM)](https://docs.openservicemesh.io/) is a lightweight, extensible, Cloud Native service mesh that allows users to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments. With the integration between Azure API Management and Azure Arc on Kubernetes, y For more information, see [Deploy an Azure API Management gateway on Azure Arc (preview)](../../api-management/how-to-deploy-self-hosted-gateway-azure-arc.md). 
> [!IMPORTANT]-> API Management self-hosted gateway on Azure Arc is currently in public preview. During preview, the API Management gateway extension is available in the following regions: West Europe, East US. +> API Management self-hosted gateway on Azure Arc is currently in public preview. > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ## Azure Arc-enabled Machine Learning |
azure-arc | System Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/system-requirements.md | Title: Azure Arc resource bridge (preview) system requirements description: Learn about system requirements for Azure Arc resource bridge (preview). Previously updated : 03/23/2023 Last updated : 06/15/2023 # Azure Arc resource bridge (preview) system requirements The control plane IP has the following requirements: - Open communication with the management machine. - The control plane needs to be able to resolve the management machine and vice versa.-+- Static IP address assigned; the IP should be outside the DHCP range but still available on the network segment. This IP address can't be assigned to any other machine on the network. If you're using Azure Kubernetes Service on Azure Stack HCI (AKS hybrid) and installing resource bridge, then the control plane IP for the resource bridge can't be used by the AKS hybrid cluster. For specific instructions on deploying Arc resource bridge with AKS on Azure Stack HCI, see [AKS on HCI (AKS hybrid) - Arc resource bridge deployment](/azure/aks/hybrid/deploy-arc-resource-bridge-windows-server). - If using a proxy, the proxy server has to also be reachable from IPs within the IP prefix, including the reserved appliance VM IP. |
azure-cache-for-redis | Cache How To Premium Clustering | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-clustering.md | The Redis clustering protocol requires each client to connect to each shard dire ### How do I connect to my cache when clustering is enabled? -You can connect to your cache using the same [endpoints](cache-configure.md#properties), [ports](cache-configure.md#properties), and [keys](cache-configure.md#access-keys) that you use when connecting to a cache that doesn't have clustering enabled. Redis manages the clustering on the backend so you don't have to manage it from your client. +You can connect to your cache using the same [endpoints](cache-configure.md#properties), [ports](cache-configure.md#properties), and [keys](cache-configure.md#access-keys) that you use when connecting to a cache that doesn't have clustering enabled. Redis manages the clustering on the backend so you don't have to manage it from your client as long as the client library supports Redis clustering. ### Can I directly connect to the individual shards of my cache? |
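As the answer above notes, this only works when the client library is cluster-aware. A minimal Node.js sketch of such a connection, assuming the open-source `ioredis` client and a placeholder cache host name and access key (none of which come from the change above):

```javascript
const Redis = require('ioredis');

// Placeholder host name and access key; shard routing is handled by the
// cluster-aware client once it discovers the cache topology.
const cache = new Redis.Cluster(
  [{ host: 'contoso.redis.cache.windows.net', port: 6380 }],
  {
    // Keep resolving the DNS name so TLS validation still works when the
    // cluster reports node addresses.
    dnsLookup: (address, callback) => callback(null, address),
    redisOptions: {
      password: process.env.REDIS_ACCESS_KEY, // assumed to hold the access key
      tls: { servername: 'contoso.redis.cache.windows.net' }
    }
  }
);

async function main() {
  await cache.set('greeting', 'hello');     // routed to the owning shard
  console.log(await cache.get('greeting')); // 'hello'
  cache.disconnect();
}

main().catch(console.error);
```

Because the client learns the shard topology from the single endpoint, no shard-level configuration appears in application code.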
azure-cache-for-redis | Cache Retired Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-retired-features.md | Cloud Service version 4 caches can't be upgraded to version 6 until they're migr For more information, see [Caches with a dependency on Cloud Services (classic)](./cache-faq.yml). -Starting on April 30, 2023, Cloud Service caches receive only critical security updates and critical bug fixes. Cloud Service caches won't support any new features released after April 30, 2023. We highly recommend migrating your caches to Azure Virtual Machine Scale Set. +Cloud Service cache will continue to function beyond June 30, 2023, however, starting on April 30, 2023, Cloud Service caches receive only critical security updates and bug fixes with limited support. Cloud Service caches won't support any new features released after April 30, 2023. We highly recommend migrating your caches to Azure Virtual Machine Scale Set as soon as possible. #### Do I need to update my application to be able to use Redis version 6? No, the upgrade can't be rolled back. ## Next steps <!-- Add a context sentence for the following links --> - [What's new](cache-whats-new.md)-- [Azure Cache for Redis FAQ](cache-faq.yml)+- [Azure Cache for Redis FAQ](cache-faq.yml) |
azure-functions | Functions Bindings Event Grid Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-output.md | The type of the output parameter used with an Event Grid output binding depends # [In-process](#tab/in-process) -The following example shows a C# function that binds to a `CloudEvent` using version 3.x of the extension, which is in preview: +The following example shows a C# function that publishes a `CloudEvent` using version 3.x of the extension: ```cs using System.Threading.Tasks; namespace Azure.Extensions.WebJobs.Sample } ``` -The following example shows a C# function that binds to an `EventGridEvent` using version 3.x of the extension, which is in preview: +The following example shows a C# function that publishes an `EventGridEvent` using version 3.x of the extension: ```cs using System.Threading.Tasks; namespace Azure.Extensions.WebJobs.Sample } ``` -The following example shows a C# function that writes an [Microsoft.Azure.EventGrid.Models.EventGridEvent][EventGridEvent] message to an Event Grid custom topic, using the method return value as the output: +The following example shows a C# function that publishes an [EventGridEvent][EventGridEvent] message to an Event Grid custom topic, using the method return value as the output: ```csharp [FunctionName("EventGridOutput")] public static EventGridEvent Run([TimerTrigger("0 */5 * * * *")] TimerInfo myTim } ``` -The following example shows how to use the `IAsyncCollector` interface to send a batch of messages. +It is also possible to use an `out` parameter to accomplish the same thing: +```csharp +[FunctionName("EventGridOutput")] +public static void Run( + [TimerTrigger("0 */5 * * * *")] TimerInfo myTimer, + [EventGrid(TopicEndpointUri = "MyEventGridTopicUriSetting", TopicKeySetting = "MyEventGridTopicKeySetting")] out EventGridEvent eventGridEvent, + ILogger log) +{ + eventGridEvent = new EventGridEvent("message-id", "subject-name", "event-data", "event-type", DateTime.UtcNow, "1.0"); +} +``` ++The following example shows how to use the `IAsyncCollector` interface to send a batch of `EventGridEvent` messages. ```csharp [FunctionName("EventGridAsyncOutput")] public static async Task Run( } ``` +Starting in version 3.3.0, it is possible to use Azure Active Directory when authenticating the output binding: ++```csharp +[FunctionName("EventGridAsyncOutput")] +public static async Task Run( + [TimerTrigger("0 */5 * * * *")] TimerInfo myTimer, + [EventGrid(Connection = "MyEventGridConnection")] IAsyncCollector<CloudEvent> outputEvents, + ILogger log) +{ + for (var i = 0; i < 3; i++) + { + var myEvent = new CloudEvent("message-id-" + i, "subject-name", "event-data"); + await outputEvents.AddAsync(myEvent); + } +} +``` ++When using the Connection property, the `topicEndpointUri` must be specified as a child of the connection setting, and the `TopicEndpointUri` and `TopicKeySetting` properties should not be used. For local development, use the local.settings.json file to store the connection information: +```json +{ + "Values": { + "myConnection__topicEndpointUri": "{topicEndpointUri}" + } +} +``` +When deployed, use the application settings to store this information. 
++ # [Isolated process](#tab/isolated-process) The following example shows how the custom type is used in both the trigger and an Event Grid output binding: public class Function { } ``` -You can also use a POJO class to send EventGrid messages. +You can also use a POJO class to send Event Grid messages. ```java public class Function { Functions version 1.x doesn't support isolated worker process. C# script functions support the following types: + [Azure.Messaging.CloudEvent][CloudEvent]-+ [Azure.Messaging.EventGrid][EventGridEvent2] ++ [Azure.Messaging.EventGrid][EventGridEvent] + [Newtonsoft.Json.Linq.JObject][JObject] + [System.String][String] There are two options for outputting an Event Grid message from a function: * [Dispatch an Event Grid event](./functions-bindings-event-grid-trigger.md) -[EventGridEvent]: /dotnet/api/microsoft.azure.eventgrid.models.eventgridevent +[EventGridEvent]: /dotnet/api/azure.messaging.eventgrid.eventgridevent [CloudEvent]: /dotnet/api/azure.messaging.cloudevent |
azure-maps | Creator Qgis Plugin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-qgis-plugin.md | Title: View and edit data with the Azure Maps QGIS plugin + Title: Work with datasets using the QGIS plugin description: How to view and edit indoor map data using the Azure Maps QGIS plugin |
azure-monitor | Azure Monitor Agent Troubleshoot Linux Vm Rsyslog | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md | Title: Syslog troubleshooting on AMA Linux Agent -description: Guidance for troubleshooting rsyslog issues on Linux virtual machines, scale sets with Azure Monitor agent and Data Collection Rules. + Title: Syslog troubleshooting on Azure Monitor Agent for Linux +description: Guidance for troubleshooting rsyslog issues on Linux virtual machines, scale sets with Azure Monitor Agent, and data collection rules. Last updated 5/31/2023 -# Syslog troubleshooting guide for Azure Monitor Linux Agent +# Syslog troubleshooting guide for Azure Monitor Agent for Linux -Overview of Azure Monitor Linux Agent syslog collection and supported RFC standards: +Overview of Azure Monitor Agent for Linux Syslog collection and supported RFC standards: -- AMA installs an output configuration for the system syslog daemon during the installation process. The configuration file specifies the way events flow between the syslog daemon and AMA.+- Azure Monitor Agent installs an output configuration for the system Syslog daemon during the installation process. The configuration file specifies the way events flow between the Syslog daemon and Azure Monitor Agent. - For `rsyslog` (most Linux distributions), the configuration file is `/etc/rsyslog.d/10-azuremonitoragent.conf`. For `syslog-ng`, the configuration file is `/etc/syslog-ng/conf.d/azuremonitoragent.conf`.-- AMA listens to a UNIX domain socket to receive events from `rsyslog` / `syslog-ng`. The socket path for this communication is `/run/azuremonitoragent/default_syslog.socket`-- The syslog daemon will use queues when AMA ingestion is delayed, or when AMA isn't reachable.-- AMA ingests syslog events via the aforementioned socket and filters them based on facility / severity combination from DCR configuration in `/etc/opt/microsoft/azuremonitoragent/config-cache/configchunks/`. Any `facility` / `severity` not present in the DCR will be dropped.-- AMA attempts to parse events in accordance with **RFC3164** and **RFC5424**. Additionally, it knows how to parse the message formats listed [here](./azure-monitor-agent-overview.md#data-sources-and-destinations).-- AMA identifies the destination endpoint for Syslog events from the DCR configuration and attempts to upload the events. +- Azure Monitor Agent listens to a UNIX domain socket to receive events from `rsyslog` / `syslog-ng`. The socket path for this communication is `/run/azuremonitoragent/default_syslog.socket`. +- The Syslog daemon uses queues when Azure Monitor Agent ingestion is delayed or when Azure Monitor Agent isn't reachable. +- Azure Monitor Agent ingests Syslog events via the previously mentioned socket and filters them based on facility or severity combination from data collection rule (DCR) configuration in `/etc/opt/microsoft/azuremonitoragent/config-cache/configchunks/`. Any `facility` or `severity` not present in the DCR is dropped. +- Azure Monitor Agent attempts to parse events in accordance with **RFC3164** and **RFC5424**. It also knows how to parse the message formats listed on [this website](./azure-monitor-agent-overview.md#data-sources-and-destinations). +- Azure Monitor Agent identifies the destination endpoint for Syslog events from the DCR configuration and attempts to upload the events. 
> [!NOTE]- > AMA uses local persistency by default, all events received from `rsyslog` / `syslog-ng` are queued in `/var/opt/microsoft/azuremonitoragent/events` if they fail to be uploaded. - + > Azure Monitor Agent uses local persistency by default. All events received from `rsyslog` or `syslog-ng` are queued in `/var/opt/microsoft/azuremonitoragent/events` if they fail to be uploaded. + ## Issues -### Rsyslog data not uploaded due to full disk space issue on Azure Monitor Linux Agent +You might encounter the following issues. ++### Rsyslog data isn't uploaded because of a full disk space issue on Azure Monitor Agent for Linux ++The next sections describe the issue. #### Symptom-**Syslog data is not uploading**: When inspecting the error logs at `/var/opt/microsoft/azuremonitoragent/log/mdsd.err`, you'll see entries about *Error while inserting item to Local persistent store…No space left on device* similar to the following snippet: +**Syslog data is not uploading**: When you inspect the error logs at `/var/opt/microsoft/azuremonitoragent/log/mdsd.err`, you see entries about *Error while inserting item to Local persistent store…No space left on device* similar to the following snippet: ``` 2021-11-23T18:15:10.9712760Z: Error while inserting item to Local persistent store syslog.error: IO error: No space left on device: While appending to file: /var/opt/microsoft/azuremonitoragent/events/syslog.error/000555.log: No space left on device ``` #### Cause-Linux AMA buffers events to `/var/opt/microsoft/azuremonitoragent/events` prior to ingestion. On a default Linux AMA install, this directory will take ~650MB of disk space at idle. The size on disk will increase when under sustained logging load. It will get cleaned up about every 60 seconds and will reduce back to ~650 MB when the load returns to idle. +Azure Monitor Agent for Linux buffers events to `/var/opt/microsoft/azuremonitoragent/events` prior to ingestion. On a default Azure Monitor Agent for Linux installation, this directory takes ~650 MB of disk space at idle. The size on disk increases when it's under sustained logging load. It gets cleaned up about every 60 seconds and reduces back to ~650 MB when the load returns to idle. -#### Confirming the issue of full disk -The `df` command shows almost no space available on `/dev/sda1`, as shown below: +#### Confirm the issue of a full disk +The `df` command shows almost no space available on `/dev/sda1`, as shown here: ```bash df -h tmpfs 63G 0 63G 0% /sys/fs/cgroup tmpfs 13G 0 13G 0% /run/user/1000 ``` -The `du` command can be used to inspect the disk to determine which files are causing the disk to be full. For example: +You can use the `du` command to inspect the disk to determine which files are causing the disk to be full. For example: ```bash cd /var/log The `du` command can be used to inspect the disk to determine which files are ca 18G syslog.1 ``` -In some cases, `du` may not report any significantly large files/directories. It may be possible that a [file marked as (deleted) is taking up the space](https://unix.stackexchange.com/questions/182077/best-way-to-free-disk-space-from-deleted-files-that-are-held-open). This issue can happen when some other process has attempted to delete a file, but there remains a process with the file still open. The `lsof` command can be used to check for such files. In the example below, we see that `/var/log/syslog` is marked as deleted, but is taking up 3.6 GB of disk space. 
It hasn't been deleted because a process with PID 1484 still has the file open. +In some cases, `du` might not report any large files or directories. It might be possible that a [file marked as (deleted) is taking up the space](https://unix.stackexchange.com/questions/182077/best-way-to-free-disk-space-from-deleted-files-that-are-held-open). This issue can happen when some other process has attempted to delete a file, but a process with the file is still open. You can use the `lsof` command to check for such files. In the following example, we see that `/var/log/syslog` is marked as deleted but it takes up 3.6 GB of disk space. It hasn't been deleted because a process with PID 1484 still has the file open. ```bash sudo lsof +L1 rsyslogd 1484 syslog 14w REG 8,1 3601566564 0 35280 /var/log/syslog ( ``` ### Rsyslog default configuration logs all facilities to /var/log/-On some popular distros (for example Ubuntu 18.04 LTS), rsyslog ships with a default configuration file (`/etc/rsyslog.d/50-default.conf`) which logs events from nearly all facilities to disk at `/var/log/syslog`. Note that for RedHat/CentOS family syslog events will be stored under `/var/log/` but in a different file: `/var/log/messages`. +On some popular distros (for example, Ubuntu 18.04 LTS), rsyslog ships with a default configuration file (`/etc/rsyslog.d/50-default.conf`), which logs events from nearly all facilities to disk at `/var/log/syslog`. RedHat/CentOS family Syslog events are stored under `/var/log/` but in a different file: `/var/log/messages`. -AMA doesn't rely on syslog events being logged to `/var/log/`. Instead, it configures rsyslog service to forward events over a socket directly to the azuremonitoragent service process (mdsd). +Azure Monitor Agent doesn't rely on Syslog events being logged to `/var/log/`. Instead, it configures the rsyslog service to forward events over a socket directly to the `azuremonitoragent` service process (mdsd). #### Fix: Remove high-volume facilities from /etc/rsyslog.d/50-default.conf-If you're sending a high log volume through rsyslog and your system is setup to log events for these facilities, consider modifying the default rsyslog config to avoid logging and storing them under `/var/log/`. The events for this facility would still be forwarded to AMA because rsyslog is using a different configuration for forwarding placed in `/etc/rsyslog.d/10-azuremonitoragent.conf`. +If you're sending a high log volume through rsyslog and your system is set up to log events for these facilities, consider modifying the default rsyslog config to avoid logging and storing them under `/var/log/`. The events for this facility would still be forwarded to Azure Monitor Agent because rsyslog uses a different configuration for forwarding placed in `/etc/rsyslog.d/10-azuremonitoragent.conf`. ++1. For example, to remove `local4` events from being logged at `/var/log/syslog` or `/var/log/messages`, change this line in `/etc/rsyslog.d/50-default.conf` from this snippet: -1. For example, to remove local4 events from being logged at `/var/log/syslog` or `/var/log/messages`, change this line in `/etc/rsyslog.d/50-default.conf` from this: ```config *.*;auth,authpriv.none -/var/log/syslog ``` - To this (add local4.none;): + To this snippet (add `local4.none;`): ```config *.*;local4.none;auth,authpriv.none -/var/log/syslog ```-2. 
`sudo systemctl restart rsyslog` -### Azure Monitor Linux Agent Event Buffer is Filling Disk -If you observe the `/var/opt/microsoft/azuremonitor/events` directory growing unbounded (10 GB or higher) and not reducing in size, [file a ticket](#file-a-ticket) with **Summary** as 'AMA Event Buffer is filling disk' and **Problem type** as 'I need help configuring data collection from a VM'. +1. `sudo systemctl restart rsyslog` ++### Azure Monitor Agent for Linux event buffer is filling a disk ++If you observe the `/var/opt/microsoft/azuremonitor/events` directory growing unbounded (10 GB or higher) and not reducing in size, [file a ticket](#file-a-ticket). For **Summary**, enter **Azure Monitor Agent Event Buffer is filling disk**. For **Problem type**, enter **I need help configuring data collection from a VM**. [!INCLUDE [azure-monitor-agent-file-a-ticket](../../../includes/azure-monitor-agent/azure-monitor-agent-file-a-ticket.md)] |
azure-monitor | Data Collection Syslog | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-syslog.md | Title: Collect syslog with Azure Monitor Agent -description: Configure collection of syslog logs using a data collection rule on virtual machines with the Azure Monitor Agent. + Title: Collect Syslog events with Azure Monitor Agent +description: Configure collection of Syslog events by using a data collection rule on virtual machines with Azure Monitor Agent. Last updated 05/10/2023 -# Collect syslog with Azure Monitor Agent overview +# Collect Syslog events with Azure Monitor Agent -Syslog is an event logging protocol that's common to Linux. You can use the Syslog daemon built into Linux devices and appliances to collect local events of the types you specify, and have it send those events to Log Analytics Workspace. Applications send messages that might be stored on the local machine or delivered to a Syslog collector. When the Azure Monitor agent for Linux is installed, it configures the local Syslog daemon to forward messages to the agent when syslog collection is enabled in [data collection rule (DCR)](../essentials/data-collection-rule-overview.md). The Azure Monitor Agent then sends the messages to Azure Monitor/Log Analytics workspace where a corresponding syslog record is created in [Syslog table](https://learn.microsoft.com/azure/azure-monitor/reference/tables/syslog). +Syslog is an event logging protocol that's common to Linux. You can use the Syslog daemon that's built in to Linux devices and appliances to collect local events of the types you specify. Then you can have it send those events to a Log Analytics workspace. Applications send messages that might be stored on the local machine or delivered to a Syslog collector. ++When the Azure Monitor agent for Linux is installed, it configures the local Syslog daemon to forward messages to the agent when Syslog collection is enabled in [data collection rules (DCRs)](../essentials/data-collection-rule-overview.md). Azure Monitor Agent then sends the messages to an Azure Monitor or Log Analytics workspace where a corresponding Syslog record is created in a [Syslog table](/azure/azure-monitor/reference/tables/syslog).  The following facilities are supported with the Syslog collector: * uucp * local0-local7 -For some device types that don't allow local installation of the Azure Monitor agent, the agent can be installed instead on a dedicated Linux-based log forwarder. The originating device must be configured to send Syslog events to the Syslog daemon on this forwarder instead of the local daemon. Please see [Sentinel tutorial](../../sentinel/forward-syslog-monitor-agent.md) for more information. +For some device types that don't allow local installation of Azure Monitor Agent, the agent can be installed instead on a dedicated Linux-based log forwarder. The originating device must be configured to send Syslog events to the Syslog daemon on this forwarder instead of the local daemon. For more information, see the [Sentinel tutorial](../../sentinel/forward-syslog-monitor-agent.md). ## Configure Syslog -The Azure Monitor agent for Linux will only collect events with the facilities and severities that are specified in its configuration. You can configure Syslog through the Azure portal or by managing configuration files on your Linux agents. +The Azure Monitor Agent for Linux only collects events with the facilities and severities that are specified in its configuration. 
You can configure Syslog through the Azure portal or by managing configuration files on your Linux agents. ### Configure Syslog in the Azure portal-Configure Syslog from the Data Collection Rules menu of the Azure Monitor. This configuration is delivered to the configuration file on each Linux agent. -* Select Add data source. -* For Data source type, select Linux syslog +Configure Syslog from the **Data Collection Rules** menu of Azure Monitor. This configuration is delivered to the configuration file on each Linux agent. ++1. Select **Add data source**. +1. For **Data source type**, select **Linux syslog**. -You can collect syslog events with different log level for each facility. By default, all syslog facility types will be collected. If you do not want to collect for example events of `auth` type, select `none` in the `Minimum log level` list box for `auth` facility and save the changes. If you need to change default log level for syslog events and collect only events with log level starting ΓÇ£NOTICEΓÇ¥ or higher priority, select ΓÇ£LOG_NOTICEΓÇ¥ in ΓÇ£Minimum log levelΓÇ¥ list box. +You can collect Syslog events with a different log level for each facility. By default, all Syslog facility types are collected. If you don't want to collect, for example, events of `auth` type, select **NONE** in the **Minimum log level** list box for `auth` facility and save the changes. If you need to change the default log level for Syslog events and collect only events with a log level starting at **NOTICE** or a higher priority, select **LOG_NOTICE** in the **Minimum log level** list box. By default, all configuration changes are automatically pushed to all agents that are configured in the DCR. ### Create a data collection rule -Create a *data collection rule* in the same region as your Log Analytics workspace. -A data collection rule is an Azure resource that allows you to define the way data should be handled as it's ingested into the workspace. +Create a *data collection rule* in the same region as your Log Analytics workspace. A DCR is an Azure resource that allows you to define the way data should be handled as it's ingested into the workspace. 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Search for and open **Monitor**. 1. Under **Settings**, select **Data Collection Rules**. 1. Select **Create**. - :::image type="content" source="../../sentinel/media/forward-syslog-monitor-agent/create-data-collection-rule.png" alt-text="Screenshot of the data collections rules pane with the create option selected."::: -+ :::image type="content" source="../../sentinel/media/forward-syslog-monitor-agent/create-data-collection-rule.png" alt-text="Screenshot that shows the Data Collection Rules pane with the Create option selected."::: #### Add resources+ 1. Select **Add resources**.-1. Use the filters to find the virtual machine that you'll use to collect logs. - :::image type="content" source="../../sentinel/media/forward-syslog-monitor-agent/create-rule-scope.png" alt-text="Screenshot of the page to select the scope for the data collection rule. "::: +1. Use the filters to find the virtual machine you want to use to collect logs. ++ :::image type="content" source="../../sentinel/media/forward-syslog-monitor-agent/create-rule-scope.png" alt-text="Screenshot that shows the page to select the scope for the data collection rule. "::: 1. Select the virtual machine. 1. Select **Apply**. 1. Select **Next: Collect and deliver**. -#### Add data source +#### Add a data source 1. 
Select **Add data source**. 1. For **Data source type**, select **Linux syslog**.- :::image type="content" source="../../sentinel/media/forward-syslog-monitor-agent/create-rule-data-source.png" alt-text="Screenshot of page to select data source type and minimum log level."::: ++ :::image type="content" source="../../sentinel/media/forward-syslog-monitor-agent/create-rule-data-source.png" alt-text="Screenshot that shows the page to select the data source type and minimum log level."::: 1. For **Minimum log level**, leave the default values **LOG_DEBUG**. 1. Select **Next: Destination**. -#### Add destination +#### Add a destination 1. Select **Add destination**. - :::image type="content" source="../../sentinel/media/forward-syslog-monitor-agent/create-rule-add-destination.png" alt-text="Screenshot of the destination tab with the add destination option selected."::: + :::image type="content" source="../../sentinel/media/forward-syslog-monitor-agent/create-rule-add-destination.png" alt-text="Screenshot that shows the Destination tab with the Add destination option selected."::: 1. Enter the following values: |Field |Value | A data collection rule is an Azure resource that allows you to define the way d 1. Select **Add data source**. 1. Select **Next: Review + create**. -### Create rule +### Create a rule 1. Select **Create**.-1. Wait 20 minutes before moving on to the next section. +1. Wait 20 minutes before you move on to the next section. -If your VM doesn't have the Azure Monitor agent installed, the data collection rule deployment triggers the installation of the agent on the VM. +If your VM doesn't have Azure Monitor Agent installed, the DCR deployment triggers the installation of the agent on the VM. -## Configure Syslog on Linux Agent -When the Azure Monitoring Agent is installed on Linux machine it installs a default Syslog configuration file that defines the facility and severity of the messages that are collected if syslog is enabled in DCR. The configuration file is different depending on the Syslog daemon that the client has installed. +## Configure Syslog on the Linux agent +When Azure Monitor Agent is installed on a Linux machine, it installs a default Syslog configuration file that defines the facility and severity of the messages that are collected if Syslog is enabled in a DCR. The configuration file is different depending on the Syslog daemon that the client has installed. ### Rsyslog-On many Linux distributions, the rsyslogd daemon is responsible for consuming, storing, and routing log messages sent using the Linux syslog API. Azure Monitor agent uses the unix domain socket output module (omuxsock) in rsyslog to forward log messages to the Azure Monitor Agent. The AMA installation includes default config files that get placed under the following directory: -`/etc/opt/microsoft/azuremonitoragent/syslog/rsyslogconf/05-azuremonitoragent-loadomuxsock.conf` -`/etc/opt/microsoft/azuremonitoragent/syslog/rsyslogconf/05-azuremonitoragent-loadomuxsock.conf` +On many Linux distributions, the rsyslogd daemon is responsible for consuming, storing, and routing log messages sent by using the Linux Syslog API. Azure Monitor Agent uses the UNIX domain socket output module (`omuxsock`) in rsyslog to forward log messages to Azure Monitor Agent. 
++The Azure Monitor Agent installation includes default config files that get placed under the following directory: `/etc/opt/microsoft/azuremonitoragent/syslog/rsyslogconf/` -When syslog is added to data collection rule, these configuration files will be installed under `etc/rsyslog.d` system directory and rsyslog will be automatically restarted for the changes to take effect. These files are used by rsyslog to load the output module and forward the events to Azure Monitoring agent daemon using defined rules. The builtin omuxsock module cannot be loaded more than once. Therefore, the configurations for loading of the module and forwarding of the events with corresponding forwarding format template are split in two different files. Its default contents are shown in the following example. This example collects Syslog messages sent from the local agent for all facilities with all log levels. +When Syslog is added to a DCR, these configuration files are installed under the `etc/rsyslog.d` system directory and rsyslog is automatically restarted for the changes to take effect. These files are used by rsyslog to load the output module and forward the events to the Azure Monitor Agent daemon by using defined rules. ++The built-in `omuxsock` module can't be loaded more than once. For this reason, the configurations for loading of the module and forwarding of the events with corresponding forwarding format template are split in two different files. Its default contents are shown in the following example. This example collects Syslog messages sent from the local agent for all facilities with all log levels. ``` $ cat /etc/rsyslog.d/10-azuremonitoragent.conf # Azure Monitor Agent configuration: forward logs to azuremonitoragent $ cat /etc/rsyslog.d/05-azuremonitoragent-loadomuxsock.conf # Azure Monitor Agent configuration: load rsyslog forwarding module. $ModLoad omuxsock ```-Note that on some legacy systems such as CentOS 7.3 we have seen rsyslog log formatting issues when using traditional forwarding format to send syslog events to Azure Monitor Agent and for these systems, Azure Monitor Agent is automatically placing legacy forwarder template instead: ++On some legacy systems, such as CentOS 7.3, we've seen rsyslog log formatting issues when a traditional forwarding format is used to send Syslog events to Azure Monitor Agent. For these systems, Azure Monitor Agent automatically places a legacy forwarder template instead: + `template(name="AMA_RSYSLOG_TraditionalForwardFormat" type="string" string="%TIMESTAMP% %HOSTNAME% %syslogtag%%msg:::sp-if-no-1st-sp%%msg%\n")` +### Syslog-ng -### Syslog-ng +The configuration file for syslog-ng is installed at `/etc/opt/microsoft/azuremonitoragent/syslog/syslog-ngconf/azuremonitoragent.conf`. When Syslog collection is added to a DCR, this configuration file is placed under the `/etc/syslog-ng/conf.d/azuremonitoragent.conf` system directory and syslog-ng is automatically restarted for the changes to take effect. -The configuration file for syslog-ng is installed at `/etc/opt/microsoft/azuremonitoragent/syslog/syslog-ngconf/azuremonitoragent.conf`. When Syslog collection is added to data collection rule, this configuration file will be placed under `/etc/syslog-ng/conf.d/azuremonitoragent.conf` system directory and syslog-ng will be automatically restarted for the changes to take effect. Its default contents are shown in this example. This example collects Syslog messages sent from the local agent for all facilities and all severities. 
+The default contents are shown in the following example. This example collects Syslog messages sent from the local agent for all facilities and all severities. ``` $ cat /etc/syslog-ng/conf.d/azuremonitoragent.conf # Azure MDSD configuration: syslog forwarding config for mdsd agent options {}; log { source(s_src); # will be automatically parsed from /etc/syslog-ng/syslog-n destination(d_azure_mdsd); }; ``` -Note* Azure Monitor supports collection of messages sent by rsyslog or syslog-ng, where rsyslog is the default daemon. The default Syslog daemon on version 5 of Red Hat Enterprise Linux, CentOS, and Oracle Linux version (sysklog) isn't supported for Syslog event collection. To collect Syslog data from this version of these distributions, the rsyslog daemon should be installed and configured to replace sysklog. --Note* -If you edit the Syslog configuration, you must restart the Syslog daemon for the changes to take effect. -+>[!Note] +> Azure Monitor supports collection of messages sent by rsyslog or syslog-ng, where rsyslog is the default daemon. The default Syslog daemon on version 5 of Red Hat Enterprise Linux, CentOS, and Oracle Linux version (sysklog) isn't supported for Syslog event collection. To collect Syslog data from this version of these distributions, the rsyslog daemon should be installed and configured to replace sysklog. +If you edit the Syslog configuration, you must restart the Syslog daemon for the changes to take effect. ## Prerequisites-You will need: +You need: -- Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#azure-rbac).-- [Data collection endpoint](../essentials/data-collection-endpoint-overview.md#create-a-data-collection-endpoint).-- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.+- A Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#azure-rbac). +- A [data collection endpoint](../essentials/data-collection-endpoint-overview.md#create-a-data-collection-endpoint). +- [Permissions to create DCR objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace. ## Syslog record properties The following table provides different examples of log queries that retrieve Sys ## Next steps -Learn more about: +Learn more about: -- [Azure Monitor Agent](azure-monitor-agent-overview.md).-- [Data collection rules](../essentials/data-collection-rule-overview.md).-- [Best practices for cost management in Azure Monitor](../best-practices-cost.md).+- [Azure Monitor Agent](azure-monitor-agent-overview.md) +- [Data collection rules](../essentials/data-collection-rule-overview.md) +- [Best practices for cost management in Azure Monitor](../best-practices-cost.md) |
azure-monitor | Om Agents | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/om-agents.md | Title: Connect Operations Manager to Azure Monitor | Microsoft Docs description: To maintain your investment in System Center Operations Manager and use extended capabilities with Log Analytics, you can integrate Operations Manager with your workspace. Previously updated : 01/30/2023 Last updated : 06/15/2023 To maintain your existing investment in [System Center Operations Manager](/syst Integrating with System Center Operations Manager adds value to your service operations strategy by using the speed and efficiency of Azure Monitor in collecting, storing, and analyzing log data from Operations Manager. Azure Monitor log queries help correlate and work toward identifying the faults of problems and surfacing recurrences in support of your existing problem management process. The flexibility of the query engine to examine performance, event, and alert data with rich dashboards and reporting capabilities to expose this data in meaningful ways demonstrates the strength Azure Monitor brings in complementing Operations Manager. The agents reporting to the Operations Manager management group collect data from your servers based on the [Log Analytics data sources](../agents/agent-data-sources.md) and solutions you've enabled in your workspace. Depending on the solutions enabled:++>[!Note] +>Newer integrations and reconfiguration of the existing integration between Operations Manager management server and Log Analytics will no longer work as this connection will be retired soon. + - The data is sent directly from an Operations Manager management server to the service, or - The data is sent directly from the agent to a Log Analytics workspace because of the volume of data collected on the agent-managed system. To ensure the security of data in transit to Azure Monitor, configure the agent Perform the following series of steps to configure your Operations Manager management group to connect to one of your Log Analytics workspaces. > [!NOTE]-> If Log Analytics data stops coming in from a specific agent or management server, reset the Winsock Catalog by using `netsh winsock reset`. Then reboot the server. Resetting the Winsock Catalog allows network connections that were broken to be reestablished. +> - If Log Analytics data stops coming in from a specific agent or management server, reset the Winsock Catalog by using `netsh winsock reset`. Then reboot the server. Resetting the Winsock Catalog allows network connections that were broken to be reestablished. +> - Newer integrations and reconfiguration of the existing integration between Operations Manager management server and Log Analytics will no longer work as this connection will be retired soon. However, you can still connect your monitored System Center Operations Manager agents to Log Analytics using the following methods based on your scenario. +> 1. Use a Log Analytics Gateway and point the agent to that server. Learn more about [Connect computers without internet access by using the Log Analytics gateway in Azure Monitor](/azure/azure-monitor/agents/gateway). +> 2. Use the AMA (Azure Monitor Agent) agent side-by-side to connect the agent to Log Analytics. Learn more about [Migrate to Azure Monitor Agent from Log Analytics agent](/azure/azure-monitor/agents/azure-monitor-agent-migration). +> 3. Configure a direct connection to Log Analytics in the Microsoft Monitoring Agent. 
(Dual-Home with System Center Operations Manager). During initial registration of your Operations Manager management group with a Log Analytics workspace, the option to specify the proxy configuration for the management group isn't available in the Operations console. The management group has to be successfully registered with the service before this option is available. To work around this situation, update the system proxy configuration by using `netsh` on the system you're running the Operations console from to configure integration, and all management servers in the management group. In the future, if you plan on reconnecting your management group to a Log Analyt ## Next steps -To add functionality and gather data, see [Add Azure Monitor solutions from the Solutions Gallery](/previous-versions/azure/azure-monitor/insights/solutions). +To add functionality and gather data, see [Add Azure Monitor solutions from the Solutions Gallery](/previous-versions/azure/azure-monitor/insights/solutions). |
azure-monitor | Annotations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/annotations.md | description: Learn how to create annotations to track deployment or other signif Last updated 01/24/2023-+ # Release annotations for Application Insights |
azure-monitor | Api Custom Events Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md | The SDKs catch many exceptions automatically, so you don't always have to call ` * **ASP.NET**: [Write code to catch exceptions](./asp-net-exceptions.md). * **Java EE**: [Exceptions are caught automatically](./opentelemetry-enable.md?tabs=java).-* **JavaScript**: Exceptions are caught automatically. If you want to disable automatic collection, add a line to the SDK Loader Script that you insert in your webpages: +* **JavaScript**: Exceptions are caught automatically. If you want to disable automatic collection, add a line to the JavaScript (Web) SDK Loader Script that you insert in your webpages: ```javascript ({ |
azure-monitor | Api Filtering Sampling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-filtering-sampling.md | For apps written using [ASP.NET Core](asp-net-core.md#add-telemetryinitializers) Insert a JavaScript telemetry initializer, if needed. For more information on the telemetry initializers for the Application Insights JavaScript SDK, see [Telemetry initializers](https://github.com/microsoft/ApplicationInsights-JS#telemetry-initializers). -#### [SDK Loader Script](#tab/sdkloaderscript) +#### [JavaScript (Web) SDK Loader Script](#tab/javascriptwebsdkloaderscript) -Insert a telemetry initializer by adding the onInit callback function in the [SDK Loader Script configuration](./javascript-sdk.md?tabs=sdkloaderscript#sdk-loader-script-configuration): +Insert a telemetry initializer by adding the onInit callback function in the [JavaScript (Web) SDK Loader Script configuration](./javascript-sdk.md?tabs=javascriptwebsdkloaderscript#javascript-web-sdk-loader-script-configuration): ```html <script type="text/javascript">-!function(v,y,T){<!-- Removed the SDK Loader Script code for brevity -->}(window,document,{ +!function(v,y,T){<!-- Removed the JavaScript (Web) SDK Loader Script code for brevity -->}(window,document,{ src: "https://js.monitor.azure.com/scripts/b/ai.2.min.js", crossOrigin: "anonymous", onInit: function (sdk) { |
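With the npm setup, the same kind of hook can be registered on the SDK instance through `addTelemetryInitializer`. A minimal sketch, assuming an already-initialized `appInsights` instance, a hypothetical role name, and an illustrative filter condition:

```javascript
// Runs for every telemetry item before it's sent; returning false drops the item.
appInsights.addTelemetryInitializer((envelope) => {
  envelope.tags = envelope.tags || {};
  envelope.tags['ai.cloud.role'] = 'frontend'; // hypothetical role name

  // Illustrative filter: drop items whose base data carries a hypothetical name.
  if (envelope.baseData && envelope.baseData.name === 'HealthCheck') {
    return false;
  }
});
```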
azure-monitor | Asp Net Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md | HttpContext.Features.Get<RequestTelemetry>().Properties["myProp"] = someData ## Enable client-side telemetry for web applications -The preceding steps are enough to help you start collecting server-side telemetry. If your application has client-side components, follow the next steps to start collecting [usage telemetry](./usage-overview.md) SDK Loader Script injection by configuration. +The preceding steps are enough to help you start collecting server-side telemetry. If your application has client-side components, follow the next steps to start collecting [usage telemetry](./usage-overview.md) via JavaScript (Web) SDK Loader Script injection by configuration. 1. In `_ViewImports.cshtml`, add injection: As an alternative to using `FullScript`, `ScriptBody` is available starting in A </script> ``` -The `.cshtml` file names referenced earlier are from a default MVC application template. Ultimately, if you want to properly enable client-side monitoring for your application, the JavaScript SDK Loader Script must appear in the `<head>` section of each page of your application that you want to monitor. Add the JavaScript SDK Loader Script to `_Layout.cshtml` in an application template to enable client-side monitoring. +The `.cshtml` file names referenced earlier are from a default MVC application template. Ultimately, if you want to properly enable client-side monitoring for your application, the JavaScript (Web) SDK Loader Script must appear in the `<head>` section of each page of your application that you want to monitor. Add the JavaScript (Web) SDK Loader Script to `_Layout.cshtml` in an application template to enable client-side monitoring. -If your project doesn't include `_Layout.cshtml`, you can still add [client-side monitoring](./website-monitoring.md) by adding the JavaScript SDK Loader Script to an equivalent file that controls the `<head>` of all pages within your app. Alternatively, you can add the SDK Loader Script to multiple pages, but we don't recommend it. +If your project doesn't include `_Layout.cshtml`, you can still add [client-side monitoring](./website-monitoring.md) by adding the JavaScript (Web) SDK Loader Script to an equivalent file that controls the `<head>` of all pages within your app. Alternatively, you can add the JavaScript (Web) SDK Loader Script to multiple pages, but we don't recommend it. > [!NOTE] > JavaScript injection provides a default configuration experience. If you require [configuration](./javascript.md#configuration) beyond setting the connection string, you're required to remove auto-injection as described and manually add the [JavaScript SDK](./javascript.md#add-the-javascript-sdk). |
azure-monitor | Asp Net Dependencies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-dependencies.md | |
azure-monitor | Asp Net Exceptions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-exceptions.md | |
azure-monitor | Asp Net Trace Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-trace-logs.md | |
azure-monitor | Asp Net | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net.md | You have now successfully configured server-side application monitoring. If you ## Add client-side monitoring -The previous sections provided guidance on methods to automatically and manually configure server-side monitoring. To add client-side monitoring, use the [client-side JavaScript SDK](javascript.md). You can monitor any web page's client-side transactions by adding a [JavaScript SDK Loader Script](./javascript-sdk.md?tabs=sdkloaderscript#get-started) before the closing `</head>` tag of the page's HTML. +The previous sections provided guidance on methods to automatically and manually configure server-side monitoring. To add client-side monitoring, use the [client-side JavaScript SDK](javascript.md). You can monitor any web page's client-side transactions by adding a [JavaScript (Web) SDK Loader Script](./javascript-sdk.md?tabs=javascriptwebsdkloaderscript#get-started) before the closing `</head>` tag of the page's HTML. -Although it's possible to manually add the SDK Loader Script to the header of each HTML page, we recommend that you instead add the SDK Loader Script to a primary page. That action injects the SDK Loader Script into all pages of a site. +Although it's possible to manually add the JavaScript (Web) SDK Loader Script to the header of each HTML page, we recommend that you instead add the JavaScript (Web) SDK Loader Script to a primary page. That action injects the JavaScript (Web) SDK Loader Script into all pages of a site. -For the template-based ASP.NET MVC app from this article, the file that you need to edit is *_Layout.cshtml*. You can find it under **Views** > **Shared**. To add client-side monitoring, open *_Layout.cshtml* and follow the [SDK Loader Script-based setup instructions](./javascript-sdk.md?tabs=sdkloaderscript#get-started) from the article about client-side JavaScript SDK configuration. +For the template-based ASP.NET MVC app from this article, the file that you need to edit is *_Layout.cshtml*. You can find it under **Views** > **Shared**. To add client-side monitoring, open *_Layout.cshtml* and follow the [JavaScript (Web) SDK Loader Script-based setup instructions](./javascript-sdk.md?tabs=javascriptwebsdkloaderscript#get-started) from the article about client-side JavaScript SDK configuration. ## Troubleshooting |
azure-monitor | Codeless Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md | Links are provided to more information for each supported scenario. > [!NOTE] > Auto-instrumentation was known as "codeless attach" before October 2021. -## SDK Loader Script injection by configuration +## JavaScript (Web) SDK Loader Script injection by configuration -If you're using the following supported SDKs, you can configure the SDK Loader Script to inject from the server-side SDK onto each page. +If you're using the following supported SDKs, you can configure the JavaScript (Web) SDK Loader Script to inject from the server-side SDK onto each page. > [!NOTE] > See the linked article for instructions on how to install the server-side SDK. If you're using the following supported SDKs, you can configure the SDK Loader | ASP.NET Core | [Enable client-side telemetry for web applications](./asp-net-core.md?tabs=netcorenew%2Cnetcore6#enable-client-side-telemetry-for-web-applications) | | Node.js | [Automatic web Instrumentation](./nodejs.md#automatic-web-instrumentationpreview) | +For other methods to instrument your application with the Application Insights JavaScript SDK, see [Get started with the JavaScript SDK](./javascript-sdk.md). + ## Next steps * [Application Insights overview](app-insights-overview.md) |
azure-monitor | Configuration With Applicationinsights Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/configuration-with-applicationinsights-config.md | |
azure-monitor | Continuous Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/continuous-monitoring.md | Title: Continuous monitoring of your Azure DevOps release pipeline | Microsoft D description: This article provides instructions to quickly set up continuous monitoring with Azure Pipelines and Application Insights. Last updated 05/01/2020-+ # Add continuous monitoring to your release pipeline |
azure-monitor | Custom Operations Tracking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/custom-operations-tracking.md | |
azure-monitor | Distributed Tracing Telemetry Correlation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/distributed-tracing-telemetry-correlation.md | This feature is in `Microsoft.ApplicationInsights.JavaScript`. It's disabled by distributedTracingMode: DistributedTracingModes.W3C ``` -- **[SDK Loader Script-based setup](./javascript-sdk.md?tabs=sdkloaderscript#get-started)**+- **[JavaScript (Web) SDK Loader Script-based setup](./javascript-sdk.md?tabs=javascriptwebsdkloaderscript#get-started)** Add the following configuration: ``` |
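For the npm-based setup, the same switch is passed in the configuration object. A minimal sketch, assuming a placeholder connection string; `DistributedTracingModes` is exported by `@microsoft/applicationinsights-web`:

```javascript
import { ApplicationInsights, DistributedTracingModes } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({
  config: {
    connectionString: 'InstrumentationKey=00000000-0000-0000-0000-000000000000', // placeholder
    distributedTracingMode: DistributedTracingModes.W3C,
    enableCorsCorrelation: true // also correlate cross-origin calls
  }
});
appInsights.loadAppInsights();
```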
azure-monitor | Get Metric | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/get-metric.md | |
azure-monitor | Ilogger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ilogger.md | |
azure-monitor | Javascript Feature Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-feature-extensions.md | -# Feature extensions for the Application Insights JavaScript SDK (Click Analytics) +# Enable Click Analytics Auto-Collection plug-in Application Insights JavaScript SDK feature extensions are extra features that can be added to the Application Insights JavaScript SDK to enhance its functionality. In this article, we cover the Click Analytics plug-in, which automatically tracks click events on webpages and uses `data-*` attributes or customized tags on HTML elements to populate event telemetry. +> [!IMPORTANT] +> If you haven't already, you need to first [enable Azure Monitor Application Insights Real User Monitoring](./javascript-sdk.md) before you enable the Click Analytics plug-in. -## Get started +## What data does the plug-in collect? ++The following key properties are captured by default when the plug-in is enabled. ++### Custom event properties ++| Name | Description | Sample | +| | |--| +| Name | The name of the custom event. For more information on how a name gets populated, see [Name column](#name).| About | +| itemType | Type of event. | customEvent | +|sdkVersion | Version of Application Insights SDK along with click plug-in.|JavaScript:2_ClickPlugin2| ++### Custom dimensions ++| Name | Description | Sample | +| | |--| +| actionType | Action type that caused the click event. It can be a left or right click. | CL | +| baseTypeSource | Base Type source of the custom event. | ClickEvent | +| clickCoordinates | Coordinates where the click event is triggered. | 659X47 | +| content | Placeholder to store extra `data-*` attributes and values. | [{sample1:value1, sample2:value2}] | +| pageName | Title of the page where the click event is triggered. | Sample Title | +| parentId | ID or name of the parent element. For more information on how a parentId is populated, see [parentId key](#parentid-key). | navbarContainer | ++### Custom measurements ++| Name | Description | Sample | +| | |--| +| timeToAction | Time taken in milliseconds for the user to click the element since the initial page load. | 87407 | -Users can set up the Click Analytics Auto-Collection plug-in via SDK Loader Script or npm and then optionally add a framework extension. ++## Add the Click Analytics plug-in ++Users can set up the Click Analytics Auto-Collection plug-in via JavaScript (Web) SDK Loader Script or npm and then optionally add a framework extension. [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] -### [SDK Loader Script setup](#tab/sdkloaderscript) +### 1. Add the code ++#### [JavaScript (Web) SDK Loader Script](#tab/javascriptwebsdkloaderscript) Ignore this setup if you use the npm setup. Ignore this setup if you use the npm setup. [clickPluginInstance.identifier] : clickPluginConfig }, };- // Application Insights SDK Loader Script code - !function(v,y,T){<!-- Removed the SDK Loader Script code for brevity -->}(window,document,{ + // Application Insights JavaScript (Web) SDK Loader Script code + !function(v,y,T){<!-- Removed the JavaScript (Web) SDK Loader Script code for brevity -->}(window,document,{ src: "https://js.monitor.azure.com/scripts/b/ai.2.min.js", crossOrigin: "anonymous", cfg: configObj // configObj is defined above. Ignore this setup if you use the npm setup. 
``` > [!NOTE]-> To add or update SDK Loader Script configuration, see [SDK Loader Script configuration](./javascript-sdk.md?tabs=sdkloaderscript#sdk-loader-script-configuration). +> To add or update JavaScript (Web) SDK Loader Script configuration, see [JavaScript (Web) SDK Loader Script configuration](./javascript-sdk.md?tabs=javascriptwebsdkloaderscript#javascript-web-sdk-loader-script-configuration). -### [npm setup](#tab/npmsetup) +#### [npm package](#tab/npmpackage) Install the npm package: appInsights.loadAppInsights(); -## Add a framework extension +### 2. (Optional) Add a framework extension Add a framework extension, if needed. -### [React](#tab/react) +#### [React](#tab/react) ```javascript import React from 'react'; appInsights.loadAppInsights(); ``` > [!NOTE] -> To add React configuration, see [React configuration](./javascript-framework-extensions.md?tabs=react#configuration). For more information on the React plug-in, see [React plug-in](./javascript-framework-extensions.md?tabs=react#react-application-insights-javascript-sdk-plug-in). +> To add React configuration, see [React configuration](./javascript-framework-extensions.md?tabs=react#add-configuration). For more information on the React plug-in, see [React plug-in](./javascript-framework-extensions.md?tabs=react). -### [React Native](#tab/reactnative) +#### [React Native](#tab/reactnative) ```typescript import { ApplicationInsights } from '@microsoft/applicationinsights-web'; appInsights.loadAppInsights(); ``` > [!NOTE] -> To add React Native configuration, see [Enable Correlation for React Native](./javascript-framework-extensions.md?tabs=reactnative#enable-correlation). For more information on the React Native plug-in, see [React Native plug-in](./javascript-framework-extensions.md?tabs=reactnative#react-native-plugin-for-application-insights-javascript-sdk). +> For more information on the React Native plug-in, see [React Native plug-in](./javascript-framework-extensions.md?tabs=reactnative). -### [Angular](#tab/angular) +#### [Angular](#tab/angular) ```javascript import { ApplicationInsights } from '@microsoft/applicationinsights-web'; export class AppComponent { ``` > [!NOTE] -> To add Angular configuration, see [Enable Correlation for Angular](./javascript-framework-extensions.md?tabs=angular#enable-correlation). For more information on the Angular plug-in, see [Angular plug-in](./javascript-framework-extensions.md?tabs=angular#angular-plugin-for-application-insights-javascript-sdk). +> For more information on the Angular plug-in, see [Angular plug-in](./javascript-framework-extensions.md?tabs=angular). -## Set the authenticated user context +### 3. (Optional) Set the authenticated user context -If you need to set this optional setting, see [Set the authenticated user context](https://github.com/microsoft/ApplicationInsights-JS/blob/master/API-reference.md#setauthenticatedusercontext). This setting isn't required to use Click Analytics. +If you need to set this optional setting, see [Set the authenticated user context](https://github.com/microsoft/ApplicationInsights-JS/blob/master/API-reference.md#setauthenticatedusercontext). This setting isn't required to use the Click Analytics plug-in. ## Use the plug-in You can replace the asterisk (`*`) in `data-*` with any name following the [prod - The name must not contain a semicolon (U+003A). - The name must not contain capital letters. -## What data does the plug-in collect? --The following key properties are captured by default when the plug-in is enabled. 
--### Custom event properties --| Name | Description | Sample | -| | |--| -| Name | The name of the custom event. For more information on how a name gets populated, see [Name column](#name).| About | -| itemType | Type of event. | customEvent | -|sdkVersion | Version of Application Insights SDK along with click plug-in.|JavaScript:2_ClickPlugin2| --### Custom dimensions --| Name | Description | Sample | -| | |--| -| actionType | Action type that caused the click event. It can be a left or right click. | CL | -| baseTypeSource | Base Type source of the custom event. | ClickEvent | -| clickCoordinates | Coordinates where the click event is triggered. | 659X47 | -| content | Placeholder to store extra `data-*` attributes and values. | [{sample1:value1, sample2:value2}] | -| pageName | Title of the page where the click event is triggered. | Sample Title | -| parentId | ID or name of the parent element. For more information on how a parentId is populated, see [parentId key](#parentid-key). | navbarContainer | --### Custom measurements --| Name | Description | Sample | -| | |--| -| timeToAction | Time taken in milliseconds for the user to click the element since the initial page load. | 87407 | --## Advanced configuration +## Add advanced configuration | Name | Type | Default | Description | | | --| --| - | appInsights.loadAppInsights(); ## Sample app -[Simple web app with the Click Analytics Autocollection Plug-in enabled](https://go.microsoft.com/fwlink/?linkid=2152871) +See a [simple web app with the Click Analytics Autocollection Plug-in enabled](https://go.microsoft.com/fwlink/?linkid=2152871) for how to implement custom event properties such as `Name` and `parentid` and custom behavior and content. See the [sample app readme](https://github.com/Azure-Samples/Application-Insights-Click-Plugin-Demo/blob/main/README.md) for information about where to find click data. ## Examples of `parentId` key export const clickPluginConfigWithParentDataTag = { For example 2, for clicked element `<Button>`, the value of `parentId` is `parentid2`. Even though `parentDataTag` is declared, the `data-parentid` definition takes precedence. > [!NOTE] > If the `data-parentid` attribute was defined within the div element with `className="test2"`, the value for `parentId` would still be `parentid2`.-+ ### Example 3 ```javascript See the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/ap ## Next steps +- [Confirm data is flowing](./javascript-sdk.md#5-confirm-data-is-flowing). - See the [documentation on utilizing HEART workbook](usage-heart.md) for expanded product analytics. - See the [GitHub repository](https://github.com/microsoft/ApplicationInsights-JS/tree/master/extensions/applicationinsights-clickanalytics-js) and [npm Package](https://www.npmjs.com/package/@microsoft/applicationinsights-clickanalytics-js) for the Click Analytics Autocollection Plug-in. - Use [Events Analysis in the Usage experience](usage-segmentation.md) to analyze top clicks and slice by available dimensions.
- Use the [Telemetry Viewer extension](https://github.com/microsoft/ApplicationInsights-JS/tree/master/tools/chrome-debug-extension) to list out the individual events in the network payload and monitor the internal calls within Application Insights.-- See a [sample app](https://go.microsoft.com/fwlink/?linkid=2152871) for how to implement custom event properties such as Name and parentid and custom behavior and content.-- See the [sample app readme](https://github.com/Azure-Samples/Application-Insights-Click-Plugin-Demo/blob/main/README.md) for where to find click data and [Log Analytics](../logs/log-analytics-tutorial.md#write-a-query) if you aren't familiar with the process of writing a query. +- See [Log Analytics](../logs/log-analytics-tutorial.md#write-a-query) if you aren't familiar with the process of writing a query. - Build a [workbook](../visualize/workbooks-overview.md) or [export to Power BI](../logs/log-powerbi.md) to create custom visualizations of click data.-- |
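As context for the abbreviated snippets quoted in this row, the following is a minimal sketch of the npm-based Click Analytics setup it describes. The package and class names (`@microsoft/applicationinsights-clickanalytics-js`, `ClickAnalyticsPlugin`) and the `autoCapture` option come from the row itself; the connection string is a placeholder, and the instrumented `<button>` element in the closing comment is purely illustrative.

```javascript
// Minimal sketch (see the note above): wiring the Click Analytics plug-in
// into the core SDK via npm, mirroring the config shape quoted in this row.
import { ApplicationInsights } from '@microsoft/applicationinsights-web';
import { ClickAnalyticsPlugin } from '@microsoft/applicationinsights-clickanalytics-js';

const clickPluginInstance = new ClickAnalyticsPlugin();
const clickPluginConfig = {
  autoCapture: true // capture click events automatically
};

const appInsights = new ApplicationInsights({
  config: {
    connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE', // placeholder
    extensions: [clickPluginInstance],
    extensionConfig: {
      [clickPluginInstance.identifier]: clickPluginConfig
    }
  }
});
appInsights.loadAppInsights();

// Illustrative only: an element instrumented with data-* attributes, which the
// plug-in surfaces in the "content" custom dimension described in this row.
// <button data-sample1="value1" data-sample2="value2">About</button>
```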
azure-monitor | Javascript Framework Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-framework-extensions.md | Title: Framework extensions for Application Insights JavaScript SDK + Title: Enable a framework extension for Application Insights JavaScript SDK description: Learn how to install and use JavaScript framework extensions for the Application Insights JavaScript SDK. ibiza-# Framework extensions for Application Insights JavaScript SDK +# Enable a framework extension for Application Insights JavaScript SDK In addition to the core SDK, there are also plugins available for specific frameworks, such as the [React plugin](javascript-framework-extensions.md?tabs=react), the [React Native plugin](javascript-framework-extensions.md?tabs=reactnative), and the [Angular plugin](javascript-framework-extensions.md?tabs=angular). These plugins provide extra functionality and integration with the specific framework. -## [React](#tab/react) +> [!IMPORTANT] +> If you haven't already, you need to first [enable Azure Monitor Application Insights Real User Monitoring](./javascript-sdk.md) before you enable a framework extension. ++## Prerequisites ++### [React](#tab/react) ++None. ++### [React Native](#tab/reactnative) ++You must be using a version >= 2.0.0 of `@microsoft/applicationinsights-web`. This plugin only works in react-native apps. It doesn't work with [apps using the Expo framework](https://docs.expo.io/) or Create React Native App, which is based on the Expo framework. ++### [Angular](#tab/angular) -### React Application Insights JavaScript SDK plug-in +None. ++++## What does the plug-in enable? ++### [React](#tab/react) The React plug-in for the Application Insights JavaScript SDK enables: - Tracking of router changes - React components usage statistics -### Get started +### [React Native](#tab/reactnative) ++The React Native plugin for Application Insights JavaScript SDK collects device information. By default, this plugin automatically collects: ++- **Unique Device ID** (Also known as Installation ID.) +- **Device Model Name** (Such as iPhone XS, Samsung Galaxy Fold, Huawei P30 Pro etc.) +- **Device Type** (For example, handset, tablet, etc.) ++### [Angular](#tab/angular) ++The Angular plugin for the Application Insights JavaScript SDK, enables: ++- Tracking of router changes +- Tracking uncaught exceptions ++> [!WARNING] +> Angular plugin is NOT ECMAScript 3 (ES3) compatible. ++> [!IMPORTANT] +> When we add support for a new Angular version, our NPM package becomes incompatible with down-level Angular versions. Continue to use older NPM packages until you're ready to upgrade your Angular version. ++++## Add a plug-in ++To add a plug-in, follow the steps in this section. -Install the npm package: +### 1. Install the package ++#### [React](#tab/react) ```bash npm install @microsoft/applicationinsights-react-js ``` -### Basic usage +#### [React Native](#tab/reactnative) -Initialize a connection to Application Insights: +By default, this plugin relies on the [`react-native-device-info` package](https://www.npmjs.com/package/react-native-device-info). You must install and link to this package. Keep the `react-native-device-info` package up to date to collect the latest device names using your app. ++Since v3, support for accessing the DeviceInfo has been abstracted into an interface `IDeviceInfoModule` to enable you to use / set your own device info module. 
This interface uses the same function names and result `react-native-device-info`. ++```zsh ++npm install --save @microsoft/applicationinsights-react-native @microsoft/applicationinsights-web +npm install --save react-native-device-info +react-native link react-native-device-info ++``` ++#### [Angular](#tab/angular) ++```bash +npm install @microsoft/applicationinsights-angularplugin-js +``` ++++### 2. Add the extension to your code [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] +#### [React](#tab/react) ++Initialize a connection to Application Insights: + > [!TIP] > If you want to add the [Click Analytics plug-in](./javascript-feature-extensions.md), uncomment the lines for Click Analytics and delete `extensions: [reactPlugin],`. var appInsights = new ApplicationInsights({ appInsights.loadAppInsights(); ``` +> [!TIP] +> If you're adding the Click Analytics plug-in, see [Use the Click Analytics plug-in](./javascript-feature-extensions.md#use-the-plug-in) to continue with the setup process. ++#### [React Native](#tab/reactnative) ++To use this plugin, you need to construct the plugin and add it as an `extension` to your existing Application Insights instance. ++> [!TIP] +> If you want to add the [Click Analytics plug-in](./javascript-feature-extensions.md), uncomment the lines for Click Analytics and delete `extensions: [RNPlugin]`. ++```typescript +import { ApplicationInsights } from '@microsoft/applicationinsights-web'; +import { ReactNativePlugin } from '@microsoft/applicationinsights-react-native'; ++var RNPlugin = new ReactNativePlugin(); +// Add the Click Analytics plug-in. +/* var clickPluginInstance = new ClickAnalyticsPlugin(); +var clickPluginConfig = { + autoCapture: true +}; */ +var appInsights = new ApplicationInsights({ + config: { + connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE', + // If you're adding the Click Analytics plug-in, delete the next line. + extensions: [RNPlugin] + // Add the Click Analytics plug-in. + /* extensions: [RNPlugin, clickPluginInstance], + extensionConfig: { + [clickPluginInstance.identifier]: clickPluginConfig + } */ + } +}); +appInsights.loadAppInsights(); ++``` ++#### Disabling automatic device info collection ++```typescript +import { ApplicationInsights } from '@microsoft/applicationinsights-web'; ++var RNPlugin = new ReactNativePlugin(); +var appInsights = new ApplicationInsights({ + config: { + instrumentationKey: 'YOUR_INSTRUMENTATION_KEY_GOES_HERE', + disableDeviceCollection: true, + extensions: [RNPlugin] + } +}); +appInsights.loadAppInsights(); +``` ++#### Using your own device info collection class ++```typescript +import { ApplicationInsights } from '@microsoft/applicationinsights-web'; ++// Simple inline constant implementation +const myDeviceInfoModule = { + getModel: () => "deviceModel", + getDeviceType: () => "deviceType", + // v5 returns a string while latest returns a promise + getUniqueId: () => "deviceId", // This "may" also return a Promise<string> +}; ++var RNPlugin = new ReactNativePlugin(); +RNPlugin.setDeviceInfoModule(myDeviceInfoModule); ++var appInsights = new ApplicationInsights({ + config: { + instrumentationKey: 'YOUR_INSTRUMENTATION_KEY_GOES_HERE', + extensions: [RNPlugin] + } +}); ++appInsights.loadAppInsights(); +``` ++> [!TIP] +> If you're adding the Click Analytics plug-in, see [Use the Click Analytics plug-in](./javascript-feature-extensions.md#use-the-plug-in) to continue with the setup process. 
++#### [Angular](#tab/angular) ++Set up an instance of Application Insights in the entry component in your app: ++> [!IMPORTANT] +> When using the ErrorService, there is an implicit dependency on the `@microsoft/applicationinsights-analytics-js` extension. you MUST include either the `'@microsoft/applicationinsights-web'` or include the `@microsoft/applicationinsights-analytics-js` extension. Otherwise, unhandled errors caught by the error service will not be sent. ++> [!TIP] +> If you want to add the [Click Analytics plug-in](./javascript-feature-extensions.md), uncomment the lines for Click Analytics and delete `extensions: [angularPlugin],`. ++```js +import { Component } from '@angular/core'; +import { ApplicationInsights } from '@microsoft/applicationinsights-web'; +import { AngularPlugin } from '@microsoft/applicationinsights-angularplugin-js'; +import { Router } from '@angular/router'; ++@Component({ + selector: 'app-root', + templateUrl: './app.component.html', + styleUrls: ['./app.component.css'] +}) +export class AppComponent { + constructor( + private router: Router + ){ + var angularPlugin = new AngularPlugin(); + // Add the Click Analytics plug-in. + /* var clickPluginInstance = new ClickAnalyticsPlugin(); + var clickPluginConfig = { + autoCapture: true + }; */ + const appInsights = new ApplicationInsights({ + config: { + connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE', + // If you're adding the Click Analytics plug-in, delete the next line. + extensions: [angularPlugin], + // Add the Click Analytics plug-in. + // extensions: [angularPlugin, clickPluginInstance], + extensionConfig: { + [angularPlugin.identifier]: { router: this.router } + // Add the Click Analytics plug-in. + // [clickPluginInstance.identifier]: clickPluginConfig + } + } + }); + appInsights.loadAppInsights(); + } +} +``` ++To track uncaught exceptions, set up ApplicationinsightsAngularpluginErrorService in `app.module.ts`: ++> [!IMPORTANT] +> When using the ErrorService, there is an implicit dependency on the `@microsoft/applicationinsights-analytics-js` extension. you MUST include either the `'@microsoft/applicationinsights-web'` or include the `@microsoft/applicationinsights-analytics-js` extension. Otherwise, unhandled errors caught by the error service will not be sent. ++```js +import { ApplicationinsightsAngularpluginErrorService } from '@microsoft/applicationinsights-angularplugin-js'; ++@NgModule({ + ... + providers: [ + { + provide: ErrorHandler, + useClass: ApplicationinsightsAngularpluginErrorService + } + ] + ... +}) +export class AppModule { } +``` ++To chain more custom error handlers, create custom error handlers that implement IErrorService: ++```javascript +import { IErrorService } from '@microsoft/applicationinsights-angularplugin-js'; ++export class CustomErrorHandler implements IErrorService { + handleError(error: any) { + ... + } +} +``` ++And pass errorServices array through extensionConfig: ++```javascript +extensionConfig: { + [angularPlugin.identifier]: { + router: this.router, + error + } + } +``` ++> [!TIP] +> If you're adding the Click Analytics plug-in, see [Use the Click Analytics plug-in](./javascript-feature-extensions.md#use-the-plug-in) to continue with the setup process. ++++## Add configuration ++### [React](#tab/react) + ### Configuration | Name | Default | Description | const App = () => { The `AppInsightsErrorBoundary` requires two props to be passed to it. 
They're the `ReactPlugin` instance created for the application and a component to be rendered when an error occurs. When an unhandled error occurs, `trackException` is called with the information provided to the error boundary, and the `onError` component appears. -### Enable correlation --Correlation generates and sends data that enables distributed tracing and powers the [application map](../app/app-map.md), [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools. --In JavaScript, correlation is turned off by default to minimize the telemetry we send by default. To enable correlation, see the [JavaScript client-side correlation documentation](./javascript.md#enable-distributed-tracing). --#### Route tracking --The React plug-in automatically tracks route changes and collects other React-specific telemetry. --> [!NOTE] -> `enableAutoRouteTracking` should be set to `false`. If it's set to `true`, then when the route changes, duplicate `PageViews` can be sent. --For `react-router v6` or other scenarios where router history isn't exposed, you can add `enableAutoRouteTracking: true` to your [setup configuration](#basic-usage). --#### PageView --If a custom `PageView` duration isn't provided, `PageView` duration defaults to a value of `0`. --### Sample app --Check out the [Application Insights React demo](https://github.com/microsoft/applicationinsights-react-js/tree/main/sample/applicationinsights-react-sample). --> [!TIP] -> If you're adding the Click Analytics plug-in, see [Use the Click Analytics plug-in](./javascript-feature-extensions.md#use-the-plug-in) to continue with the setup process. --## [React Native](#tab/reactnative) --### React Native plugin for Application Insights JavaScript SDK --The React Native plugin for Application Insights JavaScript SDK collects device information. By default, this plugin automatically collects: --- **Unique Device ID** (Also known as Installation ID.)-- **Device Model Name** (Such as iPhone XS, Samsung Galaxy Fold, Huawei P30 Pro etc.)-- **Device Type** (For example, handset, tablet, etc.)--### Requirements --You must be using a version >= 2.0.0 of `@microsoft/applicationinsights-web`. This plugin only works in react-native apps. It doesn't work with [apps using the Expo framework](https://docs.expo.io/) or Create React Native App, which is based on the Expo framework. --### Getting started --By default, this plugin relies on the [`react-native-device-info` package](https://www.npmjs.com/package/react-native-device-info). You must install and link to this package. Keep the `react-native-device-info` package up to date to collect the latest device names using your app. --Since v3, support for accessing the DeviceInfo has been abstracted into an interface `IDeviceInfoModule` to enable you to use / set your own device info module. This interface uses the same function names and result `react-native-device-info`. --```zsh --npm install --save @microsoft/applicationinsights-react-native @microsoft/applicationinsights-web -npm install --save react-native-device-info -react-native link react-native-device-info --``` --### Initializing the plugin --To use this plugin, you need to construct the plugin and add it as an `extension` to your existing Application Insights instance. ---> [!TIP] -> If you want to add the [Click Analytics plug-in](./javascript-feature-extensions.md), uncomment the lines for Click Analytics and delete `extensions: [RNPlugin]`. 
--```typescript -import { ApplicationInsights } from '@microsoft/applicationinsights-web'; -import { ReactNativePlugin } from '@microsoft/applicationinsights-react-native'; --var RNPlugin = new ReactNativePlugin(); -// Add the Click Analytics plug-in. -/* var clickPluginInstance = new ClickAnalyticsPlugin(); -var clickPluginConfig = { - autoCapture: true -}; */ -var appInsights = new ApplicationInsights({ - config: { - connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE', - // If you're adding the Click Analytics plug-in, delete the next line. - extensions: [RNPlugin] - // Add the Click Analytics plug-in. - /* extensions: [RNPlugin, clickPluginInstance], - extensionConfig: { - [clickPluginInstance.identifier]: clickPluginConfig - } */ - } -}); -appInsights.loadAppInsights(); --``` --#### Disabling automatic device info collection --```typescript -import { ApplicationInsights } from '@microsoft/applicationinsights-web'; --var RNPlugin = new ReactNativePlugin(); -var appInsights = new ApplicationInsights({ - config: { - instrumentationKey: 'YOUR_INSTRUMENTATION_KEY_GOES_HERE', - disableDeviceCollection: true, - extensions: [RNPlugin] - } -}); -appInsights.loadAppInsights(); -``` --#### Using your own device info collection class --```typescript -import { ApplicationInsights } from '@microsoft/applicationinsights-web'; --// Simple inline constant implementation -const myDeviceInfoModule = { - getModel: () => "deviceModel", - getDeviceType: () => "deviceType", - // v5 returns a string while latest returns a promise - getUniqueId: () => "deviceId", // This "may" also return a Promise<string> -}; --var RNPlugin = new ReactNativePlugin(); -RNPlugin.setDeviceInfoModule(myDeviceInfoModule); --var appInsights = new ApplicationInsights({ - config: { - instrumentationKey: 'YOUR_INSTRUMENTATION_KEY_GOES_HERE', - extensions: [RNPlugin] - } -}); --appInsights.loadAppInsights(); -``` +### [React Native](#tab/reactnative) ### IDeviceInfoModule export interface IDeviceInfoModule { If events are getting "blocked" because the `Promise` returned via `getUniqueId` is never resolved / rejected, you can call `setDeviceId()` on the plugin to "unblock" this waiting state. There is also an automatic timeout configured via `uniqueIdPromiseTimeout` (defaults to 5 seconds), which will internally call `setDeviceId()` with any previously configured value. -### Enable Correlation --Correlation generates and sends data that enables distributed tracing and powers the [application map](../app/app-map.md), [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools. --JavaScript correlation is turned off by default in order to minimize the telemetry we send by default. To enable correlation, reference [JavaScript client-side correlation documentation](./javascript.md#enable-distributed-tracing). --#### PageView --If a custom `PageView` duration isn't provided, `PageView` duration defaults to a value of 0. --> [!TIP] -> If you're adding the Click Analytics plug-in, see [Use the Click Analytics plug-in](./javascript-feature-extensions.md#use-the-plug-in) to continue with the setup process. -- -## [Angular](#tab/angular) --## Angular plugin for Application Insights JavaScript SDK --The Angular plugin for the Application Insights JavaScript SDK, enables: --- Tracking of router changes-- Tracking uncaught exceptions--> [!WARNING] -> Angular plugin is NOT ECMAScript 3 (ES3) compatible. 
--> [!IMPORTANT] -> When we add support for a new Angular version, our NPM package becomes incompatible with down-level Angular versions. Continue to use older NPM packages until you're ready to upgrade your Angular version. --### Getting started --Install npm package: --```bash -npm install @microsoft/applicationinsights-angularplugin-js -``` --### Basic usage --Set up an instance of Application Insights in the entry component in your app: ----> [!IMPORTANT] -> When using the ErrorService, there is an implicit dependency on the `@microsoft/applicationinsights-analytics-js` extension. you MUST include either the `'@microsoft/applicationinsights-web'` or include the `@microsoft/applicationinsights-analytics-js` extension. Otherwise, unhandled errors caught by the error service will not be sent. --> [!TIP] -> If you want to add the [Click Analytics plug-in](./javascript-feature-extensions.md), uncomment the lines for Click Analytics and delete `extensions: [angularPlugin],`. --```js -import { Component } from '@angular/core'; -import { ApplicationInsights } from '@microsoft/applicationinsights-web'; -import { AngularPlugin } from '@microsoft/applicationinsights-angularplugin-js'; -import { Router } from '@angular/router'; --@Component({ - selector: 'app-root', - templateUrl: './app.component.html', - styleUrls: ['./app.component.css'] -}) -export class AppComponent { - constructor( - private router: Router - ){ - var angularPlugin = new AngularPlugin(); - // Add the Click Analytics plug-in. - /* var clickPluginInstance = new ClickAnalyticsPlugin(); - var clickPluginConfig = { - autoCapture: true - }; */ - const appInsights = new ApplicationInsights({ - config: { - connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE', - // If you're adding the Click Analytics plug-in, delete the next line. - extensions: [angularPlugin], - // Add the Click Analytics plug-in. - // extensions: [angularPlugin, clickPluginInstance], - extensionConfig: { - [angularPlugin.identifier]: { router: this.router } - // Add the Click Analytics plug-in. - // [clickPluginInstance.identifier]: clickPluginConfig - } - } - }); - appInsights.loadAppInsights(); - } -} -``` --To track uncaught exceptions, set up ApplicationinsightsAngularpluginErrorService in `app.module.ts`: --> [!IMPORTANT] -> When using the ErrorService, there is an implicit dependency on the `@microsoft/applicationinsights-analytics-js` extension. you MUST include either the `'@microsoft/applicationinsights-web'` or include the `@microsoft/applicationinsights-analytics-js` extension. Otherwise, unhandled errors caught by the error service will not be sent. --```js -import { ApplicationinsightsAngularpluginErrorService } from '@microsoft/applicationinsights-angularplugin-js'; --@NgModule({ - ... - providers: [ - { - provide: ErrorHandler, - useClass: ApplicationinsightsAngularpluginErrorService - } - ] - ... -}) -export class AppModule { } -``` --To chain more custom error handlers, create custom error handlers that implement IErrorService: --```javascript -import { IErrorService } from '@microsoft/applicationinsights-angularplugin-js'; --export class CustomErrorHandler implements IErrorService { - handleError(error: any) { - ... - } -} -``` --And pass errorServices array through extensionConfig: +### [Angular](#tab/angular) -```javascript -extensionConfig: { - [angularPlugin.identifier]: { - router: this.router, - error - } - } -``` +None. 
-### Enable Correlation + -Correlation generates and sends data that enables distributed tracing and powers the [application map](../app/app-map.md), [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools. +## Sample app -JavaScript correlation is turned off by default in order to minimize the telemetry we send by default. To enable correlation, reference [JavaScript client-side correlation documentation](./javascript.md#enable-distributed-tracing). +### [React](#tab/react) -#### Route tracking +Check out the [Application Insights React demo](https://github.com/microsoft/applicationinsights-react-js/tree/main/sample/applicationinsights-react-sample). -The Angular Plugin automatically tracks route changes and collects other Angular specific telemetry. +### [React Native](#tab/reactnative) -> [!NOTE] -> `enableAutoRouteTracking` should be set to `false` if it set to true then when the route changes duplicate PageViews may be sent. +Currently unavailable. -#### PageView +### [Angular](#tab/angular) -If a custom `PageView` duration isn't provided, `PageView` duration defaults to a value of 0. --> [!TIP] -> If you're adding the Click Analytics plug-in, see [Use the Click Analytics plug-in](./javascript-feature-extensions.md#use-the-plug-in) to continue with the setup process. +Check out the [Application Insights Angular demo](https://github.com/microsoft/applicationinsights-angularplugin-js/tree/main/sample/applicationinsights-angularplugin-sample). ## Next steps -- To learn more about the JavaScript SDK, see the [Application Insights JavaScript SDK documentation](javascript.md).-- To learn about the Kusto Query Language and querying data in Log Analytics, see the [Log query overview](../../azure-monitor/logs/log-query-overview.md).+- [Confirm data is flowing](javascript-sdk.md#5-confirm-data-is-flowing). |
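To complement the abbreviated React snippet quoted in this row ("Initialize a connection to Application Insights"), here is a minimal sketch of initializing the React plug-in installed with `@microsoft/applicationinsights-react-js`. The `ReactPlugin` class, the `history`-based router wiring, and the `reactPlugin` variable name follow the plug-in's typical usage and should be read as illustrative rather than authoritative; the connection string is a placeholder.

```javascript
// Minimal sketch (assumptions noted above): constructing the React plug-in and
// passing it to the core SDK as an extension, with browser history for route tracking.
import { ApplicationInsights } from '@microsoft/applicationinsights-web';
import { ReactPlugin } from '@microsoft/applicationinsights-react-js';
import { createBrowserHistory } from 'history';

const browserHistory = createBrowserHistory();
const reactPlugin = new ReactPlugin();

const appInsights = new ApplicationInsights({
  config: {
    connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE', // placeholder
    extensions: [reactPlugin],
    extensionConfig: {
      [reactPlugin.identifier]: { history: browserHistory }
    }
  }
});
appInsights.loadAppInsights();
```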
azure-monitor | Javascript Sdk Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk-configuration.md | + + Title: Microsoft Azure Monitor Application Insights JavaScript SDK configuration +description: Microsoft Azure Monitor Application Insights JavaScript SDK configuration. + Last updated : 02/28/2023+ms.devlang: javascript +++++# Microsoft Azure Monitor Application Insights JavaScript SDK configuration ++The Azure Application Insights JavaScript SDK provides configuration for tracking, monitoring, and debugging your web applications. ++> [!div class="checklist"] +> - [SDK configuration](#sdk-configuration) +> - [Cookie configuration and management](#cookies) +> - [Source map un-minify support](#source-map) +> - [Tree shaking optimized code](#tree-shaking) ++## SDK configuration ++These configuration fields are optional and default to false unless otherwise stated. ++| Name | Type | Default | Description | +||||-| +| accountId | string | null | An optional account ID, if your app groups users into accounts. No spaces, commas, semicolons, equals, or vertical bars | +| sessionRenewalMs | numeric | 1800000 | A session is logged if the user is inactive for this amount of time in milliseconds. Default is 30 minutes | +| sessionExpirationMs | numeric | 86400000 | A session is logged if it has continued for this amount of time in milliseconds. Default is 24 hours | +| maxBatchSizeInBytes | numeric | 10000 | Max size of telemetry batch. If a batch exceeds this limit, it's immediately sent and a new batch is started | +| maxBatchInterval | numeric | 15000 | How long to batch telemetry for before sending (milliseconds) | +| disableExceptionTracking | boolean | false | If true, exceptions aren't autocollected. Default is false. | +| disableTelemetry | boolean | false | If true, telemetry isn't collected or sent. Default is false. | +| enableDebug | boolean | false | If true, **internal** debugging data is thrown as an exception **instead** of being logged, regardless of SDK logging settings. Default is false. <br>***Note:*** Enabling this setting results in dropped telemetry whenever an internal error occurs. It can be useful for quickly identifying issues with your configuration or usage of the SDK. If you don't want to lose telemetry while debugging, consider using `loggingLevelConsole` or `loggingLevelTelemetry` instead of `enableDebug`. | +| loggingLevelConsole | numeric | 0 | Logs **internal** Application Insights errors to console. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) | +| loggingLevelTelemetry | numeric | 1 | Sends **internal** Application Insights errors as telemetry. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) | +| diagnosticLogInterval | numeric | 10000 | (internal) Polling interval (in ms) for internal logging queue | +| samplingPercentage | numeric | 100 | Percentage of events that is sent. Default is 100, meaning all events are sent. Set it if you wish to preserve your data cap for large-scale applications. | +| autoTrackPageVisitTime | boolean | false | If true, on a pageview, the _previous_ instrumented page's view time is tracked and sent as telemetry and a new timer is started for the current pageview. 
It's sent as a custom metric named `PageVisitTime` in `milliseconds` and is calculated via the Date [now()](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Date/now) function (if available) and falls back to (new Date()).[getTime()](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Date/getTime) if now() is unavailable (IE8 or less). Default is false. | +| disableAjaxTracking | boolean | false | If true, Ajax calls aren't autocollected. Default is false. | +| disableFetchTracking | boolean | false | The default setting for `disableFetchTracking` is `false`, meaning it's enabled. However, in versions prior to 2.8.10, it was disabled by default. When set to `true`, Fetch requests aren't automatically collected. The default setting changed from `true` to `false` in version 2.8.0. | +| excludeRequestFromAutoTrackingPatterns | string[] \| RegExp[] | undefined | Provide a way to exclude specific route from automatic tracking for XMLHttpRequest or Fetch request. If defined, for an Ajax / fetch request that the request url matches with the regex patterns, auto tracking is turned off. Default is undefined. | +| addRequestContext | (requestContext: IRequestionContext) => {[key: string]: any} | undefined | Provide a way to enrich dependencies logs with context at the beginning of api call. Default is undefined. You need to check if `xhr` exists if you configure `xhr` related context. You need to check if `fetch request` and `fetch response` exist if you configure `fetch` related context. Otherwise you may not get the data you need. | +| overridePageViewDuration | boolean | false | If true, default behavior of trackPageView is changed to record end of page view duration interval when trackPageView is called. If false and no custom duration is provided to trackPageView, the page view performance is calculated using the navigation timing API. Default is false. | +| maxAjaxCallsPerView | numeric | 500 | Default 500 - controls how many Ajax calls are monitored per page view. Set to -1 to monitor all (unlimited) Ajax calls on the page. | +| disableDataLossAnalysis | boolean | true | If false, internal telemetry sender buffers are checked at startup for items not yet sent. | +| disableCorrelationHeaders | boolean | false | If false, the SDK adds two headers ('Request-Id' and 'Request-Context') to all dependency requests to correlate them with corresponding requests on the server side. Default is false. | +| correlationHeaderExcludedDomains | string[] | undefined | Disable correlation headers for specific domains | +| correlationHeaderExcludePatterns | regex[] | undefined | Disable correlation headers using regular expressions | +| correlationHeaderDomains | string[] | undefined | Enable correlation headers for specific domains | +| disableFlushOnBeforeUnload | boolean | false | Default false. If true, flush method isn't called when onBeforeUnload event triggers | +| enableSessionStorageBuffer | boolean | true | Default true. If true, the buffer with all unsent telemetry is stored in session storage. The buffer is restored on page load | +| cookieCfg | [ICookieCfgConfig](#cookies)<br>[Optional]<br>(Since 2.6.0) | undefined | Defaults to cookie usage enabled see [ICookieCfgConfig](#cookies) settings for full defaults. | +| disableCookiesUsage | alias for [`cookieCfg.enabled`](#cookies)<br>[Optional] | false | Default false. A boolean that indicates whether to disable the use of cookies by the SDK. 
If true, the SDK doesn't store or read any data from cookies.<br>(Since v2.6.0) If `cookieCfg.enabled` is defined it takes precedence. Cookie usage can be re-enabled after initialization via the core.getCookieMgr().setEnabled(true). | +| cookieDomain | alias for [`cookieCfg.domain`](#cookies)<br>[Optional] | null | Custom cookie domain. It's helpful if you want to share Application Insights cookies across subdomains.<br>(Since v2.6.0) If `cookieCfg.domain` is defined it takes precedence over this value. | +| cookiePath | alias for [`cookieCfg.path`](#cookies)<br>[Optional]<br>(Since 2.6.0) | null | Custom cookie path. It's helpful if you want to share Application Insights cookies behind an application gateway.<br>If `cookieCfg.path` is defined, it takes precedence. | +| isRetryDisabled | boolean | false | Default false. If false, retry on 206 (partial success), 408 (timeout), 429 (too many requests), 500 (internal server error), 503 (service unavailable), and 0 (offline, only if detected) | +| isStorageUseDisabled | boolean | false | If true, the SDK doesn't store or read any data from local and session storage. Default is false. | +| isBeaconApiDisabled | boolean | true | If false, the SDK sends all telemetry using the [Beacon API](https://www.w3.org/TR/beacon) | +| disableXhr | boolean | false | Don't use XMLHttpRequest or XDomainRequest (for IE < 9) by default instead attempt to use fetch() or sendBeacon. If no other transport is available, it uses XMLHttpRequest | +| onunloadDisableBeacon | boolean | false | Default false. when tab is closed, the SDK sends all remaining telemetry using the [Beacon API](https://www.w3.org/TR/beacon) | +| onunloadDisableFetch | boolean | false | If fetch keepalive is supported don't use it for sending events during unload, it may still fall back to fetch() without keepalive | +| sdkExtension | string | null | Sets the sdk extension name. Only alphabetic characters are allowed. The extension name is added as a prefix to the 'ai.internal.sdkVersion' tag (for example, 'ext_javascript:2.0.0'). Default is null. | +| isBrowserLinkTrackingEnabled | boolean | false | Default is false. If true, the SDK tracks all [Browser Link](/aspnet/core/client-side/using-browserlink) requests. | +| appId | string | null | AppId is used for the correlation between AJAX dependencies happening on the client-side with the server-side requests. When Beacon API is enabled, it can't be used automatically, but can be set manually in the configuration. Default is null | +| enableCorsCorrelation | boolean | false | If true, the SDK adds two headers ('Request-Id' and 'Request-Context') to all CORS requests to correlate outgoing AJAX dependencies with corresponding requests on the server side. Default is false | +| namePrefix | string | undefined | An optional value that is used as name postfix for localStorage and session cookie name. +| sessionCookiePostfix | string | undefined | An optional value that is used as name postfix for session cookie name. If undefined, namePrefix is used as name postfix for session cookie name. +| userCookiePostfix | string | undefined | An optional value that is used as name postfix for user cookie name. If undefined, no postfix is added on user cookie name. +| enableAutoRouteTracking | boolean | false | Automatically track route changes in Single Page Applications (SPA). If true, each route change sends a new Pageview to Application Insights. Hash route changes (`example.com/foo#bar`) are also recorded as new page views. 
+| enableRequestHeaderTracking | boolean | false | If true, AJAX & Fetch request headers is tracked, default is false. If ignoreHeaders isn't configured, Authorization and X-API-Key headers aren't logged. +| enableResponseHeaderTracking | boolean | false | If true, AJAX & Fetch request's response headers is tracked, default is false. If ignoreHeaders isn't configured, WWW-Authenticate header isn't logged. +| ignoreHeaders | string[] | ["Authorization", "X-API-Key", "WWW-Authenticate"] | AJAX & Fetch request and response headers to be ignored in log data. To override or discard the default, add an array with all headers to be excluded or an empty array to the configuration. +| enableAjaxErrorStatusText | boolean | false | Default false. If true, include response error data text boolean in dependency event on failed AJAX requests. | +| enableAjaxPerfTracking | boolean | false | Default false. Flag to enable looking up and including extra browser window.performance timings in the reported Ajax (XHR and fetch) reported metrics. +| maxAjaxPerfLookupAttempts | numeric | 3 | Defaults to 3. The maximum number of times to look for the window.performance timings (if available) is required. Not all browsers populate the window.performance before reporting the end of the XHR request. For fetch requests, it's added after it's complete. +| ajaxPerfLookupDelay | numeric | 25 | Defaults to 25 ms. The amount of time to wait before reattempting to find the windows.performance timings for an Ajax request, time is in milliseconds and is passed directly to setTimeout(). +| distributedTracingMode | numeric or `DistributedTracingModes` | `DistributedTracingModes.AI_AND_W3C` | Sets the distributed tracing mode. If AI_AND_W3C mode or W3C mode is set, W3C trace context headers (traceparent/tracestate) are generated and included in all outgoing requests. AI_AND_W3C is provided for back-compatibility with any legacy Application Insights instrumented services. +| enableUnhandledPromiseRejectionTracking | boolean | false | If true, unhandled promise rejections are autocollected as a JavaScript error. When disableExceptionTracking is true (don't track exceptions), the config value is ignored and unhandled promise rejections aren't reported. +| disableInstrumentationKeyValidation | boolean | false | If true, instrumentation key validation check is bypassed. Default value is false. +| enablePerfMgr | boolean | false | [Optional] When enabled (true) it creates local perfEvents for code that has been instrumented to emit perfEvents (via the doPerf() helper). It can be used to identify performance issues within the SDK based on your usage or optionally within your own instrumented code. 
+| perfEvtsSendAll | boolean | false | [Optional] When _enablePerfMgr_ is enabled and the [IPerfManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfManager.ts) fires a [INotificationManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/INotificationManager.ts).perfEvent() this flag determines whether an event is fired (and sent to all listeners) for all events (true) or only for 'parent' events (false <default>).<br />A parent [IPerfEvent](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfEvent.ts) is an event where no other IPerfEvent is still running at the point of the event being created and its _parent_ property isn't null or undefined. Since v2.5.7 +| createPerfMgr | (core: IAppInsightsCore, notification +| idLength | numeric | 22 | [Optional] Identifies the default length used to generate new random session and user IDs. Defaults to 22, previous default value was 5 (v2.5.8 or less), if you need to keep the previous maximum length you should set the value to 5. +| customHeaders | `[{header: string, value: string}]` | undefined | [Optional] The ability for the user to provide extra headers when using a custom endpoint. customHeaders aren't added on browser shutdown moment when beacon sender is used. And adding custom headers isn't supported on IE9 or earlier. +| convertUndefined | `any` | undefined | [Optional] Provide user an option to convert undefined field to user defined value. +| eventsLimitInMem | number | 10000 | [Optional] The number of events that can be kept in memory before the SDK starts to drop events when not using Session Storage (the default). +| disableIkeyDeprecationMessage | boolean | true | [Optional] Disable instrumentation Key deprecation error message. If true, error messages are NOT sent. ++## Cookies ++The Azure Application Insights JavaScript SDK provides instance-based cookie management that allows you to control the use of cookies. ++You can control cookies by enabling or disabling them, setting custom domains and paths, and customizing the functions for managing cookies. ++### Cookie configuration ++ICookieMgrConfig is a cookie configuration for instance-based cookie management added in 2.6.0. The options provided allow you to enable or disable the use of cookies by the SDK. You can also set custom cookie domains and paths and customize the functions for fetching, setting, and deleting cookies. ++The ICookieMgrConfig options are defined in the following table. ++| Name | Type | Default | Description | +||||-| +| enabled | boolean | true | The current instance of the SDK uses this boolean to indicate whether the use of cookies is enabled. If false, the instance of the SDK initialized by this configuration doesn't store or read any data from cookies. | +| domain | string | null | Custom cookie domain. It's helpful if you want to share Application Insights cookies across subdomains. If not provided uses the value from root `cookieDomain` value. | +| path | string | / | Specifies the path to use for the cookie, if not provided it uses any value from the root `cookiePath` value. | +| ignoreCookies | string[] | undefined | Specify the cookie name(s) to be ignored, it causes any matching cookie name to never be read or written. They may still be explicitly purged or deleted. You don't need to repeat the name in the `blockedCookies` configuration. 
(since v2.8.8) +| blockedCookies | string[] | undefined | Specify the cookie name(s) to never write. It prevents creating or updating any cookie name, but they can still be read unless also included in the ignoreCookies. They may still be purged or deleted explicitly. If not provided, it defaults to the same list in ignoreCookies. (Since v2.8.8) +| getCookie | `(name: string) => string` | null | Function to fetch the named cookie value, if not provided it uses the internal cookie parsing / caching. | +| setCookie | `(name: string, value: string) => void` | null | Function to set the named cookie with the specified value, only called when adding or updating a cookie. | +| delCookie | `(name: string, value: string) => void` | null | Function to delete the named cookie with the specified value, separated from setCookie to avoid the need to parse the value to determine whether the cookie is being added or removed. If not provided it uses the internal cookie parsing / caching. | ++### Cookie management ++Starting from version 2.6.0, the Azure Application Insights JavaScript SDK provides instance-based cookie management that can be disabled and re-enabled after initialization. ++If you disabled cookies during initialization using the `disableCookiesUsage` or `cookieCfg.enabled` configurations, you can re-enable them using the `setEnabled` function of the ICookieMgr object. ++The instance-based cookie management replaces the previous CoreUtils global functions of `disableCookies()`, `setCookie()`, `getCookie()`, and `deleteCookie()`. ++To take advantage of the tree-shaking enhancements introduced in version 2.6.0, it's recommended to no longer use the global functions ++## Source map ++Source map support helps you debug minified JavaScript code with the ability to unminify the minified callstack of your exception telemetry. ++> [!div class="checklist"] +> - Compatible with all current integrations on the **Exception Details** panel +> - Supports all current and future JavaScript SDKs, including Node.JS, without the need for an SDK upgrade + +To view the unminified callstack, select an Exception Telemetry item in the Azure portal, find the source maps that match the call stack, and drag and drop the source maps onto the call stack in the Azure portal. The source map must have the same name as the source file of a stack frame, but with a `map` extension. +++## Tree shaking ++Tree shaking eliminates unused code from the final JavaScript bundle. ++To take advantage of tree shaking, import only the necessary components of the SDK into your code. By doing so, unused code isn't included in the final bundle, reducing its size and improving performance. ++### Tree shaking enhancements and recommendations ++In version 2.6.0, we deprecated and removed the internal usage of these static helper classes to improve support for tree-shaking algorithms. It lets npm packages safely drop unused code. ++- `CoreUtils` +- `EventHelper` +- `Util` +- `UrlHelper` +- `DateTimeUtils` +- `ConnectionStringParser` ++ The functions are now exported as top-level roots from the modules, making it easier to refactor your code for better tree-shaking. ++The static classes were changed to const objects that reference the new exported functions, and future changes are planned to further refactor the references. ++### Tree shaking deprecated functions and replacements ++| Existing | Replacement | +|-|-| +| **CoreUtils** | **@microsoft/applicationinsights-core-js** | +| CoreUtils._canUseCookies | None. 
Don't use as it causes all of CoreUtils reference to be included in your final code.<br> Refactor your cookie handling to use the `appInsights.getCookieMgr().setEnabled(true/false)` to set the value and `appInsights.getCookieMgr().isEnabled()` to check the value. | +| CoreUtils.isTypeof | isTypeof | +| CoreUtils.isUndefined | isUndefined | +| CoreUtils.isNullOrUndefined | isNullOrUndefined | +| CoreUtils.hasOwnProperty | hasOwnProperty | +| CoreUtils.isFunction | isFunction | +| CoreUtils.isObject | isObject | +| CoreUtils.isDate | isDate | +| CoreUtils.isArray | isArray | +| CoreUtils.isError | isError | +| CoreUtils.isString | isString | +| CoreUtils.isNumber | isNumber | +| CoreUtils.isBoolean | isBoolean | +| CoreUtils.toISOString | toISOString or getISOString | +| CoreUtils.arrForEach | arrForEach | +| CoreUtils.arrIndexOf | arrIndexOf | +| CoreUtils.arrMap | arrMap | +| CoreUtils.arrReduce | arrReduce | +| CoreUtils.strTrim | strTrim | +| CoreUtils.objCreate | objCreateFn | +| CoreUtils.objKeys | objKeys | +| CoreUtils.objDefineAccessors | objDefineAccessors | +| CoreUtils.addEventHandler | addEventHandler | +| CoreUtils.dateNow | dateNow | +| CoreUtils.isIE | isIE | +| CoreUtils.disableCookies | disableCookies<br>Referencing either causes CoreUtils to be referenced for backward compatibility.<br> Refactor your cookie handling to use the `appInsights.getCookieMgr().setEnabled(false)` | +| CoreUtils.newGuid | newGuid | +| CoreUtils.perfNow | perfNow | +| CoreUtils.newId | newId | +| CoreUtils.randomValue | randomValue | +| CoreUtils.random32 | random32 | +| CoreUtils.mwcRandomSeed | mwcRandomSeed | +| CoreUtils.mwcRandom32 | mwcRandom32 | +| CoreUtils.generateW3CId | generateW3CId | +| **EventHelper** | **@microsoft/applicationinsights-core-js** | +| EventHelper.Attach | attachEvent | +| EventHelper.AttachEvent | attachEvent | +| EventHelper.Detach | detachEvent | +| EventHelper.DetachEvent | detachEvent | +| **Util** | **@microsoft/applicationinsights-common-js** | +| Util.NotSpecified | strNotSpecified | +| Util.createDomEvent | createDomEvent | +| Util.disableStorage | utlDisableStorage | +| Util.isInternalApplicationInsightsEndpoint | isInternalApplicationInsightsEndpoint | +| Util.canUseLocalStorage | utlCanUseLocalStorage | +| Util.getStorage | utlGetLocalStorage | +| Util.setStorage | utlSetLocalStorage | +| Util.removeStorage | utlRemoveStorage | +| Util.canUseSessionStorage | utlCanUseSessionStorage | +| Util.getSessionStorageKeys | utlGetSessionStorageKeys | +| Util.getSessionStorage | utlGetSessionStorage | +| Util.setSessionStorage | utlSetSessionStorage | +| Util.removeSessionStorage | utlRemoveSessionStorage | +| Util.disableCookies | disableCookies<br>Referencing either causes CoreUtils to be referenced for backward compatibility.<br> Refactor your cookie handling to use the `appInsights.getCookieMgr().setEnabled(false)` | +| Util.canUseCookies | canUseCookies<br>Referencing either causes CoreUtils to be referenced for backward compatibility.<br>Refactor your cookie handling to use the `appInsights.getCookieMgr().isEnabled()` | +| Util.disallowsSameSiteNone | uaDisallowsSameSiteNone | +| Util.setCookie | coreSetCookie<br>Referencing causes CoreUtils to be referenced for backward compatibility.<br>Refactor your cookie handling to use the `appInsights.getCookieMgr().set(name: string, value: string)` | +| Util.stringToBoolOrDefault | stringToBoolOrDefault | +| Util.getCookie | coreGetCookie<br>Referencing causes CoreUtils to be referenced for backward 
compatibility.<br>Refactor your cookie handling to use the `appInsights.getCookieMgr().get(name: string)` | +| Util.deleteCookie | coreDeleteCookie<br>Referencing causes CoreUtils to be referenced for backward compatibility.<br>Refactor your cookie handling to use the `appInsights.getCookieMgr().del(name: string, path?: string)` | +| Util.trim | strTrim | +| Util.newId | newId | +| Util.random32 | <br>No replacement, refactor your code to use the core random32(true) | +| Util.generateW3CId | generateW3CId | +| Util.isArray | isArray | +| Util.isError | isError | +| Util.isDate | isDate | +| Util.toISOStringForIE8 | toISOString | +| Util.getIEVersion | getIEVersion | +| Util.msToTimeSpan | msToTimeSpan | +| Util.isCrossOriginError | isCrossOriginError | +| Util.dump | dumpObj | +| Util.getExceptionName | getExceptionName | +| Util.addEventHandler | attachEvent | +| Util.IsBeaconApiSupported | isBeaconApiSupported | +| Util.getExtension | getExtensionByName +| **UrlHelper** | **@microsoft/applicationinsights-common-js** | +| UrlHelper.parseUrl | urlParseUrl | +| UrlHelper.getAbsoluteUrl | urlGetAbsoluteUrl | +| UrlHelper.getPathName | urlGetPathName | +| UrlHelper.getCompeteUrl | urlGetCompleteUrl | +| UrlHelper.parseHost | urlParseHost | +| UrlHelper.parseFullHost | urlParseFullHost +| **DateTimeUtils** | **@microsoft/applicationinsights-common-js** | +| DateTimeUtils.Now | dateTimeUtilsNow | +| DateTimeUtils.GetDuration | dateTimeUtilsDuration | +| **ConnectionStringParser** | **@microsoft/applicationinsights-common-js** | +| ConnectionStringParser.parse | parseConnectionString | ++## Troubleshooting ++See the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/app-insights/javascript-sdk-troubleshooting). ++## Next steps ++* [Track usage](usage-overview.md) +* [Custom events and metrics](api-custom-events-metrics.md) +* [Build-measure-learn](usage-overview.md) |
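As a quick illustration of how the fields in the configuration table above are applied, the following sketch passes a few of them into the SDK at initialization and then uses the instance-based cookie manager that the cookie-management section describes. The specific values are arbitrary examples, and the connection string is a placeholder.

```javascript
// Minimal sketch (see the note above): a handful of the documented configuration
// fields supplied at initialization, plus instance-based cookie management (v2.6.0+).
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({
  config: {
    connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE', // placeholder
    samplingPercentage: 50,          // send half of all events
    enableAutoRouteTracking: true,   // record SPA route changes as page views
    disableFetchTracking: false,     // keep fetch dependency collection on
    cookieCfg: {
      enabled: false                 // start with SDK cookies disabled
    }
  }
});
appInsights.loadAppInsights();

// Cookies can be re-enabled after initialization, as described above.
appInsights.getCookieMgr().setEnabled(true);
```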
azure-monitor | Javascript Sdk Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk-upgrade.md | Upgrading to the new version of the Application Insights JavaScript SDK can prov If you're using the current application insights PRODUCTION SDK (1.0.20) and want to see if the new SDK works in runtime, update the URL depending on your current SDK loading scenario. -- Download via CDN scenario: Update the SDK Loader Script that you currently use to point to the following URL:+- Download via CDN scenario: Update the JavaScript (Web) SDK Loader Script that you currently use to point to the following URL: ``` "https://js.monitor.azure.com/scripts/b/ai.2.min.js" ``` |
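The sketch below, borrowing the loader-script shape quoted earlier in this table, shows where the updated CDN URL from this row goes. Only the `src` value is the point; the elided loader body, the `cfg` object, and the connection string are illustrative placeholders.

```javascript
// Minimal sketch (assumptions in the note above): pointing an existing
// JavaScript (Web) SDK Loader Script at the new SDK URL.
!function (v, y, T) { /* ...JavaScript (Web) SDK Loader Script body elided for brevity... */ }(window, document, {
  src: "https://js.monitor.azure.com/scripts/b/ai.2.min.js", // updated SDK URL from this row
  crossOrigin: "anonymous",
  cfg: {
    connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE' // placeholder
  }
});
```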
azure-monitor | Javascript Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk.md | The Microsoft Azure Monitor Application Insights JavaScript SDK allows you to mo Follow the steps in this section to instrument your application with the Application Insights JavaScript SDK. > [!TIP] -> Good news! We're making it even easier to enable JavaScript. Check out where [SDK Loader Script injection by configuration is available](./codeless-overview.md#sdk-loader-script-injection-by-configuration)! +> Good news! We're making it even easier to enable JavaScript. Check out where [JavaScript (Web) SDK Loader Script injection by configuration is available](./codeless-overview.md#javascript-web-sdk-loader-script-injection-by-configuration)! > [!NOTE]-> If you have a React, React Native, or Angular application, you can [optionally add these plug-ins after you follow the steps to get started](#5-optional-advanced-sdk-configuration). +> If you have a React, React Native, or Angular application, you can [optionally add these plug-ins after you follow the steps to get started](#4-optional-add-advanced-sdk-configuration). ### 1. Add the JavaScript code Two methods are available to add the code to enable Application Insights via the | Method | When would I use this method? | |:-|:|-| SDK Loader Script | For most customers, we recommend the SDK Loader Script because you never have to update the SDK and you get the latest updates automatically. Also, you have control over which pages you add the Application Insights JavaScript SDK to. | +| JavaScript (Web) SDK Loader Script | For most customers, we recommend the JavaScript (Web) SDK Loader Script because you never have to update the SDK and you get the latest updates automatically. Also, you have control over which pages you add the Application Insights JavaScript SDK to. | | npm package | You want to bring the SDK into your code and enable IntelliSense. This option is only needed for developers who require more custom events and configuration. | -#### [SDK Loader Script](#tab/sdkloaderscript) +#### [JavaScript (Web) SDK Loader Script](#tab/javascriptwebsdkloaderscript) -1. Paste the SDK Loader Script at the top of each page for which you want to enable Application Insights. +1. Paste the JavaScript (Web) SDK Loader Script at the top of each page for which you want to enable Application Insights. > [!NOTE] > Preferably, you should add it as the first script in your <head> section so that it can monitor any potential issues with all of your dependencies. Two methods are available to add the code to enable Application Insights via the </script> ``` -1. (Optional) Add or update optional [SDK Loader Script configuration](#sdk-loader-script-configuration), depending on if you need to optimize the loading of your web page or resolve loading errors. +1. (Optional) Add or update optional [JavaScript (Web) SDK Loader Script configuration](#javascript-web-sdk-loader-script-configuration), depending on if you need to optimize the loading of your web page or resolve loading errors. - :::image type="content" source="media/javascript-sdk/sdk-loader-script-configuration.png" alt-text="Screenshot of the SDK Loader Script. The parameters for configuring the SDK Loader Script are highlighted." lightbox="media/javascript-sdk/sdk-loader-script-configuration.png"::: + :::image type="content" source="media/javascript-sdk/sdk-loader-script-configuration.png" alt-text="Screenshot of the JavaScript (Web) SDK Loader Script. 
The parameters for configuring the JavaScript (Web) SDK Loader Script are highlighted." lightbox="media/javascript-sdk/sdk-loader-script-configuration.png"::: -#### SDK Loader Script configuration +#### JavaScript (Web) SDK Loader Script configuration | Name | Type | Required? | Description |||--| | src | string | Required | The full URL for where to load the SDK from. This value is used for the "src" attribute of a dynamically added <script /> tag. You can use the public CDN location or your own privately hosted one.- | name | string | Optional | The global name for the initialized SDK. Use this setting if you need to initialize two different SDKs at the same time.<br><br>The default value is appInsights, so ```window.appInsights``` is a reference to the initialized instance.<br><br> Note: If you assign a name value or if a previous instance has been assigned to the global name appInsightsSDK, the SDK initialization code requires it to be in the global namespace as `window.appInsightsSDK=<name value>` to ensure the correct SDK Loader Script skeleton, and proxy methods are initialized and updated. - | ld | number in ms | Optional | Defines the load delay to wait before attempting to load the SDK. Use this setting when the HTML page is failing to load because the SDK Loader Script is loading at the wrong time.<br><br>The default value is 0ms after timeout. If you use a negative value, the script tag is immediately added to the <head> region of the page and blocks the page load event until the script is loaded or fails. - | useXhr | boolean | Optional | This setting is used only for reporting SDK load failures. For example, this setting is useful when the SDK Loader Script is preventing the HTML page from loading, causing fetch() to be unavailable.<br><br>Reporting first attempts to use fetch() if available and then fallback to XHR. Set this setting to `true` to bypass the fetch check. This setting is only required if your application is being used in an environment where fetch would fail to send the failure events such as if the SDK Loader Script isn't loading successfully. + | name | string | Optional | The global name for the initialized SDK. Use this setting if you need to initialize two different SDKs at the same time.<br><br>The default value is appInsights, so ```window.appInsights``` is a reference to the initialized instance.<br><br> Note: If you assign a name value or if a previous instance has been assigned to the global name appInsightsSDK, the SDK initialization code requires it to be in the global namespace as `window.appInsightsSDK=<name value>` to ensure the correct JavaScript (Web) SDK Loader Script skeleton, and proxy methods are initialized and updated. + | ld | number in ms | Optional | Defines the load delay to wait before attempting to load the SDK. Use this setting when the HTML page is failing to load because the JavaScript (Web) SDK Loader Script is loading at the wrong time.<br><br>The default value is 0ms after timeout. If you use a negative value, the script tag is immediately added to the <head> region of the page and blocks the page load event until the script is loaded or fails. + | useXhr | boolean | Optional | This setting is used only for reporting SDK load failures. For example, this setting is useful when the JavaScript (Web) SDK Loader Script is preventing the HTML page from loading, causing fetch() to be unavailable.<br><br>Reporting first attempts to use fetch() if available and then fallback to XHR. Set this setting to `true` to bypass the fetch check. 
This setting is only required if your application is being used in an environment where fetch would fail to send the failure events such as if the JavaScript (Web) SDK Loader Script isn't loading successfully. | crossOrigin | string | Optional | By including this setting, the script tag added to download the SDK includes the crossOrigin attribute with this string value. Use this setting when you need to provide support for CORS. When not defined (the default), no crossOrigin attribute is added. Recommended values are not defined (the default), "", or "anonymous". For all valid values, see the [cross origin HTML attribute](https://developer.mozilla.org/docs/Web/HTML/Attributes/crossorigin) documentation.- | onInit | function(aiSdk) { ... } | Optional | This callback function is called after the main SDK script has been successfully loaded and initialized from the CDN (based on the src value). This callback function is useful when you need to insert a telemetry initializer. It's passed one argument, which is a reference to the SDK instance that's being called for and is also called before the first initial page view. If the SDK has already been loaded and initialized, this callback is still called. NOTE: During the processing of the sdk.queue array, this callback is called. You CANNOT add any more items to the queue because they're ignored and dropped. (Added as part of SDK Loader Script version 5--the sv:"5" value within the script). | + | onInit | function(aiSdk) { ... } | Optional | This callback function is called after the main SDK script has been successfully loaded and initialized from the CDN (based on the src value). This callback function is useful when you need to insert a telemetry initializer. It's passed one argument, which is a reference to the SDK instance that's being called for and is also called before the first initial page view. If the SDK has already been loaded and initialized, this callback is still called. NOTE: During the processing of the sdk.queue array, this callback is called. You CANNOT add any more items to the queue because they're ignored and dropped. (Added as part of JavaScript (Web) SDK Loader Script version 5--the sv:"5" value within the script). | #### [npm package](#tab/npmpackage) To paste the connection string in your environment, follow these steps: ### 3. (Optional) Add SDK configuration -The optional [SDK configuration](./javascript-sdk-advanced.md#sdk-configuration) is passed to the Application Insights JavaScript SDK during initialization. +The optional [SDK configuration](./javascript-sdk-configuration.md#sdk-configuration) is passed to the Application Insights JavaScript SDK during initialization. To add SDK configuration, add each configuration option directly under `connectionString`. For example: :::image type="content" source="media/javascript-sdk/example-sdk-configuration.png" alt-text="Screenshot of JavaScript code with SDK configuration options added and highlighted." lightbox="media/javascript-sdk/example-sdk-configuration.png"::: -### 4. Confirm data is flowing +### 4. (Optional) Add advanced SDK configuration ++If you want to use the extra features provided by plugins for specific frameworks and optionally enable the Click Analytics plug-in, see: ++- [React plugin](javascript-framework-extensions.md?tabs=react) +- [React native plugin](javascript-framework-extensions.md?tabs=reactnative) +- [Angular plugin](javascript-framework-extensions.md?tabs=reactnative) ++> [!TIP] +> We collect page views by default. 
But if you want to also collect clicks by default, consider adding the Click Analytics Auto-Collection plug-in. If you're adding a framework extension, you'll have the option to add Click Analytics when you add the framework extension. If you're not adding a framework extension, [add the Click Analytics plug-in](./javascript-feature-extensions.md). ++### 5. Confirm data is flowing 1. Go to your Application Insights resource that you've enabled the SDK for. 1. In the Application Insights resource menu on the left, under **Investigate**, select the **Transaction search** pane. 1. Open the **Event types** dropdown menu and select **Select all** to clear the checkboxes in the menu. -1. From the **Event types** dropdown menu, select **Page View**. +1. From the **Event types** dropdown menu, select: ++ - **Page View** for Azure Monitor Application Insights Real User Monitoring + - **Custom Event** for the Click Analytics Auto-Collection plug-in. It might take a few minutes for data to show up in the portal. To add SDK configuration, add each configuration option directly under `connecti If you can't run the application or you aren't getting data as expected, see the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/app-insights/javascript-sdk-troubleshooting). -### 5. (Optional) Advanced SDK configuration --If you want to use the extra features provided by plugins for specific frameworks, see: --- [React plugin](javascript-framework-extensions.md?tabs=react)-- [React native plugin](javascript-framework-extensions.md?tabs=reactnative)-- [Angular plugin](javascript-framework-extensions.md?tabs=reactnative)--> [!TIP] -> We collect page views by default. But if you want to also collect clicks by default, consider adding the [Click Analytics plug-in](javascript-feature-extensions.md). - ## Support - If you're having trouble with enabling Application Insights, see the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/app-insights/javascript-sdk-troubleshooting). If you want to use the extra features provided by plugins for specific framework ## Next steps -* [Track usage](usage-overview.md) +* [Explore Application Insights usage experiences](usage-overview.md) * [Track page views](api-custom-events-metrics.md#page-views)-* [Custom events and metrics](api-custom-events-metrics.md) -* [JavaScript telemetry initializers](api-filtering-sampling.md#javascript-telemetry-initializers) -* [Build-measure-learn](usage-overview.md) -* [JavaScript SDK advanced topics](javascript-sdk-advanced.md) +* [Track custom events and metrics](api-custom-events-metrics.md) +* [Insert a JavaScript telemetry initializer](api-filtering-sampling.md#javascript-telemetry-initializers) +* [Add JavaScript SDK configuration](javascript-sdk-configuration.md) * See the detailed [release notes](https://github.com/microsoft/ApplicationInsights-JS/releases) on GitHub for updates and bug fixes.+* [Query data in Log Analytics](../../azure-monitor/logs/log-query-overview.md). |
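To make the loader-script options in the table above easier to scan, here's a minimal JavaScript sketch of the configuration object that the JavaScript (Web) SDK Loader Script accepts. The property names come from the table; the `cfg` wrapper and all values are illustrative assumptions, not text from this article.

```javascript
// Sketch only: the loader function that normally wraps this object is omitted.
const loaderConfig = {
  src: "https://js.monitor.azure.com/scripts/b/ai.2.min.js", // required: URL the SDK is loaded from
  name: "appInsights",       // optional: global name; window.appInsights then references the instance
  ld: 0,                     // optional: load delay in ms (a negative value blocks page load until the SDK loads)
  useXhr: true,              // optional: bypass the fetch() check when reporting SDK load failures
  crossOrigin: "anonymous",  // optional: adds crossOrigin="anonymous" to the injected <script> tag
  onInit: function (sdk) {   // optional: runs after the SDK is loaded and initialized
    // for example, sdk.addTelemetryInitializer(...)
  },
  cfg: {                     // assumed key: SDK configuration applied during initialization
    connectionString: "YOUR_CONNECTION_STRING"
  }
};
```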
azure-monitor | Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md | appInsights.start(); ### Automatic web Instrumentation[Preview] - Automatic web Instrumentation can be enabled for node server via SDK Loader Script injection by configuration. + Automatic web Instrumentation can be enabled for node server via JavaScript (Web) SDK Loader Script injection by configuration. ```javascript let appInsights = require("applicationinsights"); |
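The Node.js snippet in that row is cut off by the table layout. A hedged sketch of how the rest of it typically looks follows; the `enableWebInstrumentation` call and the connection-string placeholder are assumptions based on recent versions of the `applicationinsights` npm package, not text from this row.

```javascript
// Sketch, assuming an applicationinsights package version that exposes enableWebInstrumentation.
let appInsights = require("applicationinsights");
appInsights
  .setup("<YOUR_CONNECTION_STRING>")  // placeholder connection string
  .enableWebInstrumentation(true)     // injects the JavaScript (Web) SDK Loader Script into served pages
  .start();
```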
azure-monitor | Sdk Connection String | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md | For more information, see [Connection string configuration](./java-standalone-co JavaScript doesn't support the use of environment variables. You have two options: -- To use the SDK Loader Script, see [SDK Loader Script](./javascript-sdk.md?tabs=sdkloaderscript#get-started).+- To use the JavaScript (Web) SDK Loader Script, see [JavaScript (Web) SDK Loader Script](./javascript-sdk.md?tabs=javascriptwebsdkloaderscript#get-started). - Manual setup: ```javascript import { ApplicationInsights } from '@microsoft/applicationinsights-web' |
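The manual setup snippet in that row is also truncated. A minimal sketch of how the connection string is typically passed to `@microsoft/applicationinsights-web` follows; the connection string value is a placeholder.

```javascript
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

// Sketch: supply the connection string through the SDK configuration instead of an environment variable.
const appInsights = new ApplicationInsights({
  config: {
    connectionString: 'InstrumentationKey=00000000-0000-0000-0000-000000000000' // placeholder
  }
});
appInsights.loadAppInsights();
appInsights.trackPageView(); // send an initial page view
```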
azure-monitor | Telemetry Channels | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/telemetry-channels.md | |
azure-monitor | Tutorial Asp Net Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-asp-net-core.md | |
azure-monitor | Tutorial Asp Net Custom Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-asp-net-custom-metrics.md | |
azure-monitor | Usage Heart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-heart.md | You only have to interact with the main workbook, **HEART Analytics - All Sectio To validate that data is flowing as expected to light up the metrics accurately, select the **Development Requirements** tab. > [!IMPORTANT]-> Unless you [set the authenticated user context](./javascript-feature-extensions.md#set-the-authenticated-user-context), you must select **Anonymous Users** from the **ConversionScope** dropdown to see telemetry data. +> Unless you [set the authenticated user context](./javascript-feature-extensions.md#3-optional-set-the-authenticated-user-context), you must select **Anonymous Users** from the **ConversionScope** dropdown to see telemetry data. :::image type="content" source="media/usage-overview/development-requirements-1.png" alt-text="Screenshot that shows the Development Requirements tab of the HEART Analytics - All Sections workbook."::: |
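For readers unsure what "set the authenticated user context" means in code, here's a hedged JavaScript sketch using the Application Insights JavaScript SDK; the user and account IDs are placeholders.

```javascript
// Sketch: call after the SDK is initialized so conversion scopes can use authenticated users.
appInsights.setAuthenticatedUserContext(
  "user@contoso.com", // authenticated user ID (placeholder)
  "contoso",          // optional account ID (placeholder)
  true                // store the IDs in a cookie across sessions
);
```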
azure-monitor | Usage Segmentation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-segmentation.md | Three of the **Usage** panes use the same tool to slice and dice telemetry from * **Sessions tool**: How many sessions of user activity have included certain pages and features of your app? A session is reset after half an hour of user inactivity, or after 24 hours of continuous use. * **Events tool**: How often are certain pages and features of your app used? A page view is counted when a browser loads a page from your app, provided you've [instrumented it](./javascript.md). - A custom event represents one occurrence of something happening in your app. It's often a user interaction like a button selection or the completion of a task. You insert code in your app to [generate custom events](./api-custom-events-metrics.md#trackevent) or use the [Click Analytics](javascript-feature-extensions.md#feature-extensions-for-the-application-insights-javascript-sdk-click-analytics) extension. + A custom event represents one occurrence of something happening in your app. It's often a user interaction like a button selection or the completion of a task. You insert code in your app to [generate custom events](./api-custom-events-metrics.md#trackevent) or use the [Click Analytics](javascript-feature-extensions.md) extension. > [!NOTE] > For information on alternatives to using [anonymous IDs](./data-model-complete.md#anonymous-user-id) and ensuring an accurate count, see the documentation for [authenticated IDs](./data-model-complete.md#authenticated-user-id). |
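As a quick reminder of what generating a custom event looks like with the JavaScript SDK's `trackEvent` API (the event name is a placeholder):

```javascript
// Sketch: one custom event per occurrence, for example when a user completes a task.
appInsights.trackEvent({ name: "OrderSubmitted" });
```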
azure-monitor | Work Item Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/work-item-integration.md | Title: Work Item Integration - Application Insights description: Learn how to create work items in GitHub or Azure DevOps with Application Insights data embedded in them. Last updated 06/27/2021-+ # Work Item Integration |
azure-monitor | Container Insights Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-authentication.md | Last updated 06/13/2023 -# Authentication for Azure Monitor - Container Insights +# Authentication for Container Insights Container Insights now defaults to managed identity authentication. This secure and simplified authentication model has a monitoring agent that uses the cluster's managed identity to send data to Azure Monitor. It replaces the existing legacy certificate-based local authentication and removes the requirement of adding a Monitoring Metrics Publisher role to the cluster. ## How to enable -Click on the relevant tab for instructions to enable Managed identity authentication on existing clusters. +Click on the relevant tab for instructions to enable Managed identity authentication on your clusters. ## [Azure portal](#tab/portal-azure-monitor) -No action is needed when creating a cluster from the Portal. However, it isn't possible to switch to Managed Identity authentication from the Azure portal. Customers must use command line tools to migrate. See other tabs for migration instructions and templates. +When creating a new cluster from the Azure portal: On the **Integrations** tab, first check the box for *Enable Container Logs*, then check the box for *Use managed identity*. ++For existing clusters, you can switch to Managed Identity authentication from the *Monitor settings* panel: Navigate to your AKS cluster, scroll through the menu on the left until you see the **Monitoring** section, and then select the **Insights** tab. In the Insights tab, select the *Monitor Settings* option and check the box for *Use managed identity*. ++If you don't see the *Use managed identity* option, you're using an SPN cluster. In that case, you must use command line tools to migrate. See other tabs for migration instructions and templates. ## [Azure CLI](#tab/cli) |
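For the command-line migration path that the row above points to, the Azure CLI flow generally looks like the sketch below. The `--enable-msi-auth-for-monitoring` switch name is an assumption based on current `az aks` tooling and may vary by CLI version; the cluster and resource group names are placeholders.

```azurecli
# Sketch: re-enable the monitoring addon with managed identity (MSI) authentication.
az aks enable-addons \
  --addon monitoring \
  --name MyAKSCluster \
  --resource-group MyResourceGroup \
  --enable-msi-auth-for-monitoring true
```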
azure-monitor | Container Insights Onboard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-onboard.md | If you have a Kubernetes cluster with Windows nodes, review and configure the ne ## Authentication -Container insights defaults to managed identity authentication. This secure and simplified authentication model has a monitoring agent that uses the cluster's managed identity to send data to Azure Monitor. It replaces the existing legacy certificate-based local authentication and removes the requirement of adding a *Monitoring Metrics Publisher* role to the cluster. +Container insights defaults to managed identity authentication. This secure and simplified authentication model has a monitoring agent that uses the cluster's managed identity to send data to Azure Monitor. It replaces the existing legacy certificate-based local authentication and removes the requirement of adding a *Monitoring Metrics Publisher* role to the cluster. Read more in [Authentication for Container Insights](container-insights-authentication.md) ## Agent |
azure-monitor | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md | Essentials|[Use private endpoints for Managed Prometheus and Azure Monitor works Essentials|[Private Link for data ingestion for Managed Prometheus and Azure Monitor workspace](essentials/private-link-data-ingestion.md)|New article: Private Link for data ingestion for Managed Prometheus and Azure Monitor workspace| Essentials|[Collect Prometheus metrics from an Arc-enabled Kubernetes cluster (preview)](essentials/prometheus-metrics-from-arc-enabled-cluster.md)|New article: Collect Prometheus metrics from an Arc-enabled Kubernetes cluster (preview)| Essentials|[How to migrate from the metrics API to the getBatch API](essentials/migrate-to-batch-api.md)|Migrate from the metrics API to the getBatch API|-Essentials|[Azure Active Directory authorization proxy](essentials/prometheus-authorization-proxy.md)|Aad auth proxy| +Essentials|[Azure Active Directory authorization proxy](essentials/prometheus-authorization-proxy.md)|Microsoft Azure Active Directory (Azure AD) auth proxy| Essentials|[Integrate KEDA with your Azure Kubernetes Service cluster](essentials/integrate-keda.md)|New Article: Integrate KEDA with AKS and Prometheus| Essentials|[General Availability: Azure Monitor managed service for Prometheus](https://techcommunity.microsoft.com/t5/azure-observability-blog/general-availability-azure-monitor-managed-service-for/ba-p/3817973)|General Availability: Azure Monitor managed service for Prometheus | Insights|[Monitor and analyze runtime behavior with Code Optimizations (Preview)](insights/code-optimizations.md)|New doc for public preview release of Code Optimizations feature.| Logs|[Set daily cap on Log Analytics workspace](logs/daily-cap.md)|Starting Sept |Subservice| Article | Description | |||| Agents|[Azure Monitor Agent Performance Benchmark](agents/azure-monitor-agent-performance.md)|Added performance benchmark data for the scenario of using Azure Monitor Agent to forward data to a gateway.|-Agents|[Troubleshoot issues with the Log Analytics agent for Windows](agents/agent-windows-troubleshoot.md)|Log Analytics will no longer accept connections from MMA versions that use old root CAs (MMA versions prior to the Winter 2020 release for Log Analytics agent, and prior to SCOM 2019 UR3 for SCOM). | +Agents|[Troubleshoot issues with the Log Analytics agent for Windows](agents/agent-windows-troubleshoot.md)|Log Analytics will no longer accept connections from MMA versions that use old root CAs (MMA versions prior to the Winter 2020 release for Log Analytics agent, and prior to Microsoft System Center Operations Manager 2019 UR3 for Operations Manager). | Agents|[Azure Monitor Agent overview](agents/agents-overview.md)|Log Analytics agent supports Windows Server 2022. | Alerts|[Common alert schema](alerts/alerts-common-schema.md)|Updated alert payload common schema to include custom properties.| Alerts|[Create and manage action groups in the Azure portal](alerts/action-groups.md)|Clarified use of basic auth in webhook.| Containers|[Manage the Container insights agent](containers/container-insights-m Essentials|[Azure Monitor Metrics overview](essentials/data-platform-metrics.md)|New Batch Metrics API that allows multiple resource requests and reducing throttling found in the non-batch version. 
| General|[Cost optimization in Azure Monitor](best-practices-cost.md)|Rewritten to match organization of Well Architected Framework service guides| General|[Best practices for Azure Monitor Logs](best-practices-logs.md)|New article with consolidated list of best practices for Logs organized by WAF pillar.|-General|[Migrate from System Center Operations Manager (SCOM) to Azure Monitor](azure-monitor-operations-manager.md)|Migrate from SCOM to Azure Monitor| +General|[Migrate from Operations Manager to Azure Monitor](azure-monitor-operations-manager.md)|Migrate from Operations Manager to Azure Monitor| Logs|[Application Insights API Access with Microsoft Azure Active Directory (Azure AD) Authentication](app/app-insights-azure-ad-api.md)|New article that explains how to authenticate and access the Azure Monitor Application Insights APIs using Azure AD.| Logs|[Tutorial: Replace custom fields in Log Analytics workspace with KQL-based custom columns](logs/custom-fields-migrate.md)|Guidance for migrate legacy custom fields to KQL-based custom columns using transformations.| Logs|[Monitor Log Analytics workspace health](logs/log-analytics-workspace-health.md)|View Log Analytics workspace health metrics, including query success metrics, directly from the Log Analytics workspace screen in the Azure portal.| Alerts|[Connect ServiceNow to Azure Monitor](alerts/itsmc-secure-webhook-connect Application-Insights|[Application Insights SDK support guidance](app/sdk-support-guidance.md)|Release notes are now available for each SDK.| Application-Insights|[What is distributed tracing and telemetry correlation?](app/distributed-tracing-telemetry-correlation.md)|Merged our documents related to distributed tracing and telemetry correlation.| Application-Insights|[Application Insights availability tests](app/availability-overview.md)|Separated and called out the two Classic Tests, which are older versions of availability tests.|-Application-Insights|[Microsoft Azure Monitor Application Insights JavaScript SDK advanced topics](app/javascript-sdk-advanced.md)|JavaScript SDK advanced topics now include npm setup, cookie configuration and management, source map un-minify support, and tree shaking optimized code.| +Application-Insights|[Microsoft Azure Monitor Application Insights JavaScript SDK configuration](app/javascript-sdk-configuration.md)|JavaScript SDK configuration now includes npm setup, cookie configuration and management, source map un-minify support, and tree shaking optimized code.| Application-Insights|[Microsoft Azure Monitor Application Insights JavaScript SDK](app/javascript-sdk.md)|Our introductory article to the JavaScript SDK now provides only the fast and easy code-snippet method of getting started.| Application-Insights|[Geolocation and IP address handling](app/ip-collection.md)|Updated code samples for .NET 6/7.| Application-Insights|[Application Insights logging with .NET](app/ilogger.md)|Updated code samples for .NET 6/7.| |
azure-netapp-files | Azure Netapp Files Network Topologies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md | Configuring UDRs on the source VM subnets with the address prefix of delegated s > [!NOTE] > To access an Azure NetApp Files volume from an on-premises network via a VNet gateway (ExpressRoute or VPN) and firewall, configure the route table assigned to the VNet gateway to include the `/32` IPv4 address of the Azure NetApp Files volume listed and point to the firewall as the next hop. Using an aggregate address space that includes the Azure NetApp Files volume IP address will not forward the Azure NetApp Files traffic to the firewall. +>[!NOTE] +>If you configure a UDR in the VM VNet to control the routing of packets destined for a VNet-peered Azure NetApp Files standard volume, the UDR address prefix must be equal to or more specific than the delegated subnet of the Azure NetApp Files volume. A UDR prefix that is less specific than the delegated subnet isn't effective. + ## Azure native environments The following diagram illustrates an Azure-native environment: |
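To illustrate the UDR note above, here's a hedged Azure CLI sketch; the route table name, address prefix, and next-hop address are placeholders, and the /28 prefix is assumed to match the Azure NetApp Files delegated subnet.

```azurecli
# Sketch: the route prefix must be equal to or more specific than the delegated subnet.
az network route-table route create \
  --resource-group MyResourceGroup \
  --route-table-name MyRouteTable \
  --name ToAnfDelegatedSubnet \
  --address-prefix 10.0.8.0/28 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.0.4
```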
azure-resource-manager | Bicep Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-cli.md | Title: Bicep CLI commands and overview description: Describes the commands that you can use in the Bicep CLI. These commands include building Azure Resource Manager templates from Bicep. Previously updated : 04/18/2023 Last updated : 06/15/2023 # Bicep CLI commands -This article describes the commands you can use in the Bicep CLI. You must have the [Bicep CLI installed](./install.md) to run the commands. +This article describes the commands you can use in the Bicep CLI. You can run these commands either through Azure CLI or by invoking the Bicep CLI directly. Each method requires a distinct installation process. For more information, see [Install Azure CLI](./install.md#azure-cli) and [Install Azure PowerShell](./install.md#azure-powershell). -You can either run the Bicep CLI commands through Azure CLI or by calling Bicep directly. This article shows how to run the commands in Azure CLI. When running through Azure CLI, you start the commands with `az`. If you're not using Azure CLI, run the commands without `az` at the start of the command. For example, `az bicep build` becomes `bicep build`. +This article shows how to run the commands in Azure CLI. When running through Azure CLI, you start the commands with `az`. If you're not using Azure CLI, run the commands without `az` at the start of the command. For example, `az bicep build` becomes `bicep build`, and `az bicep version` becomes `bicep --version`. ++> [!NOTE] +> The commands related to the Bicep parameters files are exclusively supported within the Bicep CLI and are not currently available in Azure CLI. These commands include: `build-params`, `decompile-params`, and `generate-params`. ## build When you get this error, either run the `build` command without the `--no-restor To use the `--no-restore` switch, you must have Bicep CLI version **0.4.1008 or later**. +## build-params ++The `build-params` command builds a _.bicepparam_ file into a JSON parameters file. ++```azurecli +bicep build-params params.bicepparam +``` ++This command converts a _params.bicepparam_ parameters file into a _params.json_ JSON parameters file. + ## decompile The `decompile` command converts ARM template JSON to a Bicep file. The command creates a file named _main.bicep_ in the same directory as _main.jso For more information about using this command, see [Decompiling ARM template JSON to Bicep](decompile.md). +## decompile-params ++The `decompile-params` command decompiles a JSON parameters file to a _.bicepparam_ parameters file. ++```azurecli +bicep decompile-params azuredeploy.parameters.json --bicep-file ./dir/main.bicep +``` ++This command decompiles an _azuredeploy.parameters.json_ parameters file into an _azuredeploy.parameters.bicepparam_ file. `--bicep-file` specifies the path to the Bicep file (relative to the .bicepparam file) that will be referenced in the `using` declaration. + ## generate-params -The `generate-params` command builds *.parameters.json* file from the given bicep file, updates if there is an existing parameters.json file. +The `generate-params` command builds a parameters file from the given Bicep file, and updates the file if a parameters file already exists.
```azurecli-az bicep generate-params --file main.bicep +bicep generate-params main.bicep --output-format bicepparam --include-params all +``` ++The command creates a Bicep parameters file named _main.bicepparam_. The parameter file contains all parameters in the Bicep file, whether configured with default values or not. ++```azurecli +bicep generate-params --file main.bicep --outfile main.parameters.json ``` The command creates a parameter file named _main.parameters.json_. The parameter file only contains the parameters without default values configured in the Bicep file. To use the restore command, you must have Bicep CLI version **0.4.1008 or later* To manually restore the external modules for a file, use: -```powershell -bicep restore <bicep-file> [--force] +```azurecli +az bicep restore <bicep-file> [--force] ``` The Bicep file you provide is the file you wish to deploy. It must contain a module that links to a registry. For example, you can restore the following file: Bicep CLI version 0.4.1008 (223b8d227a) To call this command directly through the Bicep CLI, use: -```powershell +```Bicep CLI bicep --version ``` |
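Taken together, the three parameters-file commands support a simple round trip. The sketch below reuses only the flags shown above; the file names, and the assumption that `build-params` writes _main.json_ next to its input, are illustrative.

```azurecli
# Sketch: generate, compile, and decompile Bicep parameters files for main.bicep.
bicep generate-params main.bicep --output-format bicepparam --include-params all
bicep build-params main.bicepparam
bicep decompile-params main.json --bicep-file ./main.bicep
```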
azure-resource-manager | Bicep Functions Parameters File | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-parameters-file.md | + + Title: Bicep functions - parameters file +description: Describes the functions used in the Bicep parameters files. ++ Last updated : 06/05/2023+++# Parameters file function for Bicep ++Bicep provides a function called `readEnvironmentVariable()` that allows you to retrieve values from environment variables. It also offers the flexibility to set a default value if the environment variable does not exist. This function can only be used in `.bicepparam` files. For more information, see [Bicep parameters file](./parameter-files.md). ++## readEnvironmentVariable() ++`readEnvironmentVariable(variableName, [defaultValue])` ++Returns the value of the environment variable, or sets a default value if the environment variable doesn't exist. Variable loading occurs during compilation, not at runtime. ++Namespace: [sys](bicep-functions.md#namespaces-for-functions). ++### Parameters ++| Parameter | Required | Type | Description | +|: |: |: |: | +| variableName | Yes | string | The name of the variable. | +| defaultValue | No | string | A default string value to be used if the environment variable does not exist. | ++### Return value ++The string value of the environment variable or a default value. ++### Examples ++The following examples show how to retrieve the values of environment variables. ++```bicep +using './main.bicep' ++param adminPassword = readEnvironmentVariable('admin_password') +param boolfromEnvironmentVariables = bool(readEnvironmentVariable('boolVariableName','false')) +``` ++## Next steps ++For more information about Bicep parameters files, see [Parameters file](./parameter-files.md). |
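Because `readEnvironmentVariable()` resolves values at compile time, the variable has to be set in the shell that compiles or deploys the Bicep file. A hedged sketch follows; the variable name comes from the example above, and the resource group and file names are placeholders.

```azurecli
# Sketch: set the environment variable, then deploy with the .bicepparam file that reads it.
export admin_password='<placeholder-secret>'
az deployment group create \
  --resource-group ExampleGroup \
  --template-file main.bicep \
  --parameters main.bicepparam
```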
azure-resource-manager | Bicep Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions.md | Title: Bicep functions description: Describes the functions to use in a Bicep file to retrieve values, work with strings and numerics, and retrieve deployment information. Previously updated : 05/11/2023 Last updated : 06/05/2023 # Bicep functions The following functions are available for working with lambda expressions. All o * [reduce](bicep-functions-lambda.md#reduce) * [sort](bicep-functions-lambda.md#sort) - ## Logical functions The following function is available for working with logical conditions. This function is in the `sys` namespace. The following functions are available for working with objects. All of these fun * [length](./bicep-functions-object.md#length) * [union](./bicep-functions-object.md#union) +## Parameters file functions ++The [readEnvironmentVariable function](./bicep-functions-parameters-file.md) is available in Bicep to read environment variable values. This function is in the `sys` namespace. + ## Resource functions The following functions are available for getting resource values. Most of these functions are in the `az` namespace. The list functions and the getSecret function are called directly on the resource type, so they don't have a namespace qualifier. |
azure-resource-manager | Deploy Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-cli.md | Title: Deploy resources with Azure CLI and Bicep files | Microsoft Docs description: Use Azure Resource Manager and Azure CLI to deploy resources to Azure. The resources are defined in a Bicep file.-- Previously updated : 07/08/2022 Last updated : 06/13/2023 If you're deploying to a resource group that doesn't exist, create the resource az group create --name ExampleGroup --location "Central US" ``` -To deploy a local Bicep file, use the `--template-file` parameter in the deployment command. The following example also shows how to set a parameter value. +To deploy a local Bicep file, use the `--template-file` switch in the deployment command. The following example also shows how to set a parameter value. ```azurecli-interactive az deployment group create \ Currently, Azure CLI doesn't support deploying remote Bicep files. You can use [ ## Parameters -To pass parameter values, you can use either inline parameters or a parameters file. +To pass parameter values, you can use either inline parameters or a parameters file. ### Inline parameters az deployment group create \ However, if you're using Azure CLI with Windows Command Prompt (CMD) or PowerShell, set the variable to a JSON string. Escape the quotation marks: `$params = '{ \"prefix\": {\"value\":\"start\"}, \"suffix\": {\"value\":\"end\"} }'`. -### Parameter files +Parameters are evaluated in sequential order, so if a value is assigned multiple times, only the last assigned value is used. To ensure proper parameter assignment, provide your parameters file first and then selectively override specific parameters by using the _KEY=VALUE_ syntax. Note that if you supply a `bicepparam` parameters file, you can use this argument only once. -Rather than passing parameters as inline values in your script, you may find it easier to use a JSON file that contains the parameter values. The parameter file must be a local file. External parameter files aren't supported with Azure CLI. Bicep file uses JSON parameter files. +### Parameters files -For more information about the parameter file, see [Create Resource Manager parameter file](./parameter-files.md). +Rather than passing parameters as inline values in your script, you may find it easier to use a `.bicepparam` file or a JSON file that contains the parameter values. The parameters file must be a local file. External parameters files aren't supported with Azure CLI. -To pass a local parameter file, specify the path and file name. The following example shows a parameter file named _storage.parameters.json_. The file is in the same directory where the command is run. +For more information about the parameters file, see [Create Resource Manager parameters file](./parameter-files.md). ++To pass a local Bicep parameters file, specify the path and file name. The following example shows a parameters file named _storage.bicepparam_. The file is in the same directory where the command is run. ++```azurecli-interactive +az deployment group create \ + --name ExampleDeployment \ + --resource-group ExampleGroup \ + --template-file storage.bicep \ + --parameters storage.bicepparam +``` ++The following example shows a parameters file named _storage.parameters.json_. The file is in the same directory where the command is run. ```azurecli-interactive az deployment group create \ |
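A short sketch of the precedence behavior described above: the JSON parameters file is supplied first and a single parameter is then overridden inline with the KEY=VALUE syntax (names and values are placeholders).

```azurecli
# Sketch: the inline value wins because it's evaluated after the parameters file.
az deployment group create \
  --name ExampleDeployment \
  --resource-group ExampleGroup \
  --template-file storage.bicep \
  --parameters @storage.parameters.json \
  --parameters storagePrefix=contoso
```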
azure-resource-manager | Deploy Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-powershell.md | Title: Deploy resources with PowerShell and Bicep description: Use Azure Resource Manager and Azure PowerShell to deploy resources to Azure. The resources are defined in a Bicep file.-- Previously updated : 08/05/2022 Last updated : 06/05/2023 # Deploy resources with Bicep and Azure PowerShell If you're deploying to a resource group that doesn't exist, create the resource New-AzResourceGroup -Name ExampleGroup -Location "Central US" ``` -To deploy a local Bicep file, use the `-TemplateFile` parameter in the deployment command. +To deploy a local Bicep file, use the `-TemplateFile` switch in the deployment command. ```azurepowershell New-AzResourceGroupDeployment ` Currently, Azure PowerShell doesn't support deploying remote Bicep files. Use [B ## Parameters -To pass parameter values, you can use either inline parameters or a parameter file. +To pass parameter values, you can use either inline parameters or a parameters file. ### Inline parameters New-AzResourceGroupDeployment -ResourceGroupName testgroup ` -exampleArray $subnetArray ``` -### Parameter files +### Parameters files -Rather than passing parameters as inline values in your script, you may find it easier to use a JSON file that contains the parameter values. The parameter file can be a local file or an external file with an accessible URI. Bicep file uses JSON parameter files. +Rather than passing parameters as inline values in your script, you may find it easier to use a `.bicepparam` file or a JSON file that contains the parameter values. The parameters file can be a local file or an external file with an accessible URI. -For more information about the parameter file, see [Create Resource Manager parameter file](./parameter-files.md). +For more information about the parameters file, see [Create Resource Manager parameters file](./parameter-files.md). -To pass a local parameter file, use the `TemplateParameterFile` parameter: +To pass a local parameters file, use the `TemplateParameterFile` parameter with a `.bicepparam` file: ++```powershell +New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup ` + -TemplateFile c:\BicepFiles\storage.bicep ` + -TemplateParameterFile c:\BicepFiles\storage.bicepparam +``` ++To pass a local parameters file, use the `TemplateParameterFile` parameter with a JSON parameters file: ```powershell New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup ` New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName Example -TemplateParameterFile c:\BicepFiles\storage.parameters.json ``` -To pass an external parameter file, use the `TemplateParameterUri` parameter: +To pass an external parameters file, use the `TemplateParameterUri` parameter: ```powershell New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup ` New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName Example -TemplateParameterUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.parameters.json ``` +The `TemplateParameterUri` parameter doesn't support `.bicepparam` files; it only supports JSON parameters files. + ## Preview changes Before deploying your Bicep file, you can preview the changes the Bicep file will make to your environment.
Use the [what-if operation](./deploy-what-if.md) to verify that the Bicep file makes the changes that you expect. What-if also validates the Bicep file for errors. |
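Besides `TemplateParameterFile` and `TemplateParameterUri`, Azure PowerShell also surfaces each template parameter as a dynamic cmdlet parameter, so individual values can be passed inline. A hedged sketch follows; the file path and the `storagePrefix` parameter name are placeholders.

```powershell
# Sketch: template parameters become dynamic cmdlet parameters, so values can be passed inline.
New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup `
  -TemplateFile c:\BicepFiles\storage.bicep `
  -storagePrefix "contoso"
```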
azure-resource-manager | Key Vault Parameter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/key-vault-parameter.md | Title: Key Vault secret with Bicep description: Shows how to pass a secret from a key vault as a parameter during Bicep deployment.-- Previously updated : 06/18/2021 Last updated : 06/15/2023 # Use Azure Key Vault to pass secure parameter value during Bicep deployment -Instead of putting a secure value (like a password) directly in your Bicep file or parameter file, you can retrieve the value from an [Azure Key Vault](../../key-vault/general/overview.md) during a deployment. When a [module](./modules.md) expects a `string` parameter with `secure:true` modifier, you can use the [getSecret function](bicep-functions-resource.md#getsecret) to obtain a key vault secret. The value is never exposed because you only reference its key vault ID. +Instead of putting a secure value (like a password) directly in your Bicep file or parameters file, you can retrieve the value from an [Azure Key Vault](../../key-vault/general/overview.md) during a deployment. When a [module](./modules.md) expects a `string` parameter with `secure:true` modifier, you can use the [getSecret function](bicep-functions-resource.md#getsecret) to obtain a key vault secret. The value is never exposed because you only reference its key vault ID. > [!IMPORTANT]-> This article focuses on how to pass a sensitive value as a template parameter. When the secret is passed as a parameter, the key vault can exist in a different subscription than the resource group you're deploying to. +> This article focuses on how to pass a sensitive value as a template parameter. When the secret is passed as a parameter, the key vault can exist in a different subscription than the resource group you're deploying to. > > This article doesn't cover how to set a virtual machine property to a certificate's URL in a key vault. For a quickstart template of that scenario, see [Install a certificate from Azure Key Vault on a Virtual Machine](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/vm-winrm-keyvault-windows). module sql './sql.bicep' = { } ``` -## Reference secrets in parameter file +## Reference secrets in parameters file -If you don't want to use a module, you can reference the key vault directly in the parameter file. The following image shows how the parameter file references the secret and passes that value to the Bicep file. +If you don't want to use a module, you can reference the key vault directly in the parameters file. The following image shows how the parameters file references the secret and passes that value to the Bicep file.  +> [!NOTE] +> Currently you can only reference the key vault in JSON parameters files. You can't reference key vault in Bicep parameters file. + The following Bicep file deploys a SQL server that includes an administrator password. The password parameter is set to a secure string. But the Bicep doesn't specify where that value comes from. ```bicep resource sqlServer 'Microsoft.Sql/servers@2020-11-01-preview' = { -Now, create a parameter file for the preceding Bicep file. In the parameter file, specify a parameter that matches the name of the parameter in the Bicep file. For the parameter value, reference the secret from the key vault. You reference the secret by passing the resource identifier of the key vault and the name of the secret: +Now, create a parameters file for the preceding Bicep file. 
In the parameters file, specify a parameter that matches the name of the parameter in the Bicep file. For the parameter value, reference the secret from the key vault. You reference the secret by passing the resource identifier of the key vault and the name of the secret: -In the following parameter file, the key vault secret must already exist, and you provide a static value for its resource ID. +In the following parameters file, the key vault secret must already exist, and you provide a static value for its resource ID. ```json { If you need to use a version of the secret other than the current version, inclu "secretVersion": "cd91b2b7e10e492ebb870a6ee0591b68" ``` -Deploy the template and pass in the parameter file: +Deploy the template and pass in the parameters file: # [Azure CLI](#tab/azure-cli) az group create --name SqlGroup --location westus2 az deployment group create \ --resource-group SqlGroup \ --template-file <Bicep-file> \- --parameters <parameter-file> + --parameters <parameters-file> ``` # [PowerShell](#tab/azure-powershell) New-AzResourceGroup -Name $resourceGroupName -Location $location New-AzResourceGroupDeployment ` -ResourceGroupName $resourceGroupName ` -TemplateFile <Bicep-file> `- -TemplateParameterFile <parameter-file> + -TemplateParameterFile <parameters-file> ``` |
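For readers who want to see the shape of that reference, here's a minimal sketch of the JSON parameters file entry. The subscription ID, resource group, vault name, and parameter name are placeholders; the optional `secretVersion` property is included only to show where it goes, using the sample version string from this row.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminPassword": {
      "reference": {
        "keyVault": {
          "id": "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.KeyVault/vaults/<vault-name>"
        },
        "secretName": "ExamplePassword",
        "secretVersion": "cd91b2b7e10e492ebb870a6ee0591b68"
      }
    }
  }
}
```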
azure-resource-manager | Parameter Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/parameter-files.md | Title: Create parameter file for Bicep -description: Create parameter file for passing in values during deployment of a Bicep file --+ Title: Create parameters files for Bicep deployment +description: Create parameters file for passing in values during deployment of a Bicep file Previously updated : 11/14/2022 Last updated : 06/05/2023 -# Create Bicep parameter file +# Create parameters files for Bicep deployment -Rather than passing parameters as inline values in your script, you can use a JSON file that contains the parameter values. This article shows how to create a parameter file that you use with a Bicep file. +Rather than passing parameters as inline values in your script, you can use a Bicep parameters file with the `.bicepparam` file extension or a JSON parameters file that contains the parameter values. This article shows how to create parameters files. -## Parameter file +A single Bicep file can have multiple Bicep parameters files associated with it. However, each Bicep parameters file is intended for one particular Bicep file. This relationship is established using the `using` statement within the Bicep parameters file. For more information, see [Bicep parameters file](#parameters-file). -A parameter file uses the following format: +You can compile Bicep parameters files into JSON parameters files to deploy with a Bicep file. ++## Parameters file ++A parameters file uses the following format: ++# [Bicep parameters file](#tab/Bicep) ++```bicep +using '<path>/<file-name>.bicep' ++param <first-parameter-name> = <first-value> +param <second-parameter-name> = <second-value> +``` ++You can use expressions with the default value. For example: ++```bicep +using 'storageaccount.bicep' ++param storageName = toLower('MyStorageAccount') +param intValue = 2 + 2 +``` ++You can reference environment variables as parameter values. For example: ++```bicep +using './main.bicep' ++param intFromEnvironmentVariables = int(readEnvironmentVariable('intEnvVariableName')) +``` ++# [JSON parameters file](#tab/JSON) ```json { A parameter file uses the following format: } ``` -It's worth noting that the parameter file saves parameter values as plain text. For security reasons, this approach is not recommended for sensitive values such as passwords. If you must pass a parameter with a sensitive value, keep the value in a key vault. Instead of adding the sensitive value to your parameter file, use the [getSecret function](bicep-functions-resource.md#getsecret) to retrieve it. For more information, see [Use Azure Key Vault to pass secure parameter value during Bicep deployment](key-vault-parameter.md). +++It's worth noting that the parameters file saves parameter values as plain text. For security reasons, this approach isn't recommended for sensitive values such as passwords. If you must pass a parameter with a sensitive value, keep the value in a key vault. Instead of adding the sensitive value to your parameters file, use the [getSecret function](bicep-functions-resource.md#getsecret) to retrieve it. For more information, see [Use Azure Key Vault to pass secure parameter value during Bicep deployment](key-vault-parameter.md). ++## Parameter type formats ++The following example shows the formats of different parameter types: string, integer, boolean, array, and object.
++# [Bicep parameters file](#tab/Bicep) ++```bicep +using './main.bicep' ++param exampleString = 'test string' +param exampleInt = 2 + 2 +param exampleBool = true +param exampleArray = [ + 'value 1' + 'value 2' +] +param exampleObject = { + property1: 'value 1' + property2: 'value 2' +} +``` ++Use Bicep syntax to declare [objects](./data-types.md#objects) and [arrays](./data-types.md#arrays). ++# [JSON parameters file](#tab/JSON) ++```json +{ + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "exampleString": { + "value": "test string" + }, + "exampleInt": { + "value": 4 + }, + "exampleBool": { + "value": true + }, + "exampleArray": { + "value": [ + "value 1", + "value 2" + ] + }, + "exampleObject": { + "value": { + "property1": "value1", + "property2": "value2" + } + } + } +} +``` ++++## File name ++# [Bicep parameters file](#tab/Bicep) ++Bicep parameters file has the file extension of `.bicepparam`. ++To deploy to different environments, you create more than one parameters file. When you name the parameters files, identify their use such as development and production. For example, use _main.dev.bicepparam_ and _main.prod.bicepparam_ to deploy resources. ++# [JSON parameters file](#tab/JSON) ++The general naming convention for the parameters file is to include _parameters_ in the Bicep file name. For example, if your Bicep file is named _azuredeploy.bicep_, your parameters file is named _azuredeploy.parameters.json_. This naming convention helps you see the connection between the Bicep file and the parameters. ++To deploy to different environments, you create more than one parameters file. When you name the parameters files, identify their use such as development and production. For example, use _azuredeploy.parameters-dev.json_ and _azuredeploy.parameters-prod.json_ to deploy resources. ++ ## Define parameter values -To determine how to define the parameter names and values, open your Bicep file. Look at the parameters section of the Bicep file. The following examples show the parameters from a Bicep file. +To determine how to define the parameter names and values, open your Bicep file. Look at the parameters section of the Bicep file. The following examples show the parameters from a Bicep file called `main.bicep`. ```bicep @maxLength(11) param storagePrefix string param storageAccountType string = 'Standard_LRS' ``` -In the parameter file, the first detail to notice is the name of each parameter. The parameter names in your parameter file must match the parameter names in your Bicep file. +In the parameters file, the first detail to notice is the name of each parameter. The parameter names in your parameters file must match the parameter names in your Bicep file. ++# [Bicep parameters file](#tab/Bicep) ++```bicep +using 'main.bicep' ++param storagePrefix +param storageAccountType +``` ++The `using` statement ties the Bicep parameters file to a Bicep file. ++After you type the keyword `param` in Visual Studio Code, it prompts you with the available parameters and their descriptions from the linked Bicep file: +++When hovering over a param name, you can see the parameter data type and description. +++# [JSON parameters file](#tab/JSON) ```json { In the parameter file, the first detail to notice is the name of each parameter. } ``` -Notice the parameter type. The parameter types in your parameter file must use the same types as your Bicep file. In this example, both parameter types are strings.
+++Notice the parameter type. The parameter types in your parameters file must use the same types as your Bicep file. In this example, both parameter types are strings. ++# [Bicep parameters file](#tab/Bicep) ++```bicep +using 'main.bicep' ++param storagePrefix = '' +param storageAccountType = '' +``` ++# [JSON parameters file](#tab/JSON) ```json { Notice the parameter type. The parameter types in your parameter file must use t } ``` -Check the Bicep file for parameters with a default value. If a parameter has a default value, you can provide a value in the parameter file but it's not required. The parameter file value overrides the Bicep file's default value. +++Check the Bicep file for parameters with a default value. If a parameter has a default value, you can provide a value in the parameters file, but it's not required. The parameters file value overrides the Bicep file's default value. ++# [Bicep parameters file](#tab/Bicep) ++```bicep +using 'main.bicep' ++param storagePrefix = '' // This value must be provided. +param storageAccountType = '' // This value is optional. Bicep will use default value if not provided. +``` ++# [JSON parameters file](#tab/JSON) ```json { Check the Bicep file for parameters with a default value. If a parameter has a d ``` > [!NOTE]-> For inline comments, you can use either // or /* ... */. In Visual Studio Code, save the parameter files with the **JSONC** file type, otherwise you will get an error message saying "Comments not permitted in JSON". +> For inline comments, you can use either // or /* ... */. In Visual Studio Code, save the parameters files with the **JSONC** file type, otherwise you will get an error message saying "Comments not permitted in JSON". ++ Check the Bicep's allowed values and any restrictions such as maximum length. Those values specify the range of values you can provide for a parameter. In this example, `storagePrefix` can have a maximum of 11 characters and `storageAccountType` must specify an allowed value. +# [Bicep parameters file](#tab/Bicep) ++```bicep +using 'main.bicep' ++param storagePrefix = 'storage' +param storageAccountType = 'Standard_ZRS' +``` ++# [JSON parameters file](#tab/JSON) + ```json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#", Check the Bicep's allowed values and any restrictions such as maximum length. Th ``` > [!NOTE]-> Your parameter file can only contain values for parameters that are defined in the Bicep file. If your parameter file contains extra parameters that don't match the Bicep file's parameters, you receive an error. +> Your parameters file can only contain values for parameters that are defined in the Bicep file. If your parameters file contains extra parameters that don't match the Bicep file's parameters, you receive an error. -## Parameter type formats + -The following example shows the formats of different parameter types: string, integer, boolean, array, and object. 
+## Generate parameters file -```json -{ - "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#", - "contentVersion": "1.0.0.0", - "parameters": { - "exampleString": { - "value": "test string" - }, - "exampleInt": { - "value": 4 - }, - "exampleBool": { - "value": true - }, - "exampleArray": { - "value": [ - "value 1", - "value 2" - ] - }, - "exampleObject": { - "value": { - "property1": "value1", - "property2": "value2" - } - } - } -} -``` +To generate a parameters file, you have two options: either through Visual Studio Code or by using the Bicep CLI. Both methods allow you to derive the parameters file from a Bicep file. From Visual Studio Code, See [Generate parameters file](./visual-studio-code.md#generate-parameters-file). From Bicep CLI, see [Generate parameters file](./bicep-cli.md#generate-params). ++## Build Bicep parameters file ++From Bicep CLI, you can build a Bicep parameters file into a JSON parameters file. for more information, see [Build parameters file](./bicep-cli.md#build-params). -## Deploy Bicep file with parameter file +## Deploy Bicep file with parameters file -From Azure CLI, pass a local parameter file using `@` and the parameter file name. For example, `@storage.parameters.json`. +From Azure CLI, pass a local parameters file using `@` and the parameters file name. For example, `storage.bicepparam` or `@storage.parameters.json`. ```azurecli az deployment group create \ --name ExampleDeployment \ --resource-group ExampleGroup \ --template-file storage.bicep \- --parameters @storage.parameters.json + --parameters @storage.bicepparam ``` For more information, see [Deploy resources with Bicep and Azure CLI](./deploy-cli.md#parameters). To deploy _.bicep_ files you need Azure CLI version 2.20 or higher. -From Azure PowerShell, pass a local parameter file using the `TemplateParameterFile` parameter. +From Azure PowerShell, pass a local parameters file using the `TemplateParameterFile` parameter. ```azurepowershell New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup ` -TemplateFile C:\MyTemplates\storage.bicep `- -TemplateParameterFile C:\MyTemplates\storage.parameters.json + -TemplateParameterFile C:\MyTemplates\storage.bicepparam ``` For more information, see [Deploy resources with Bicep and Azure PowerShell](./deploy-powershell.md#parameters). To deploy _.bicep_ files you need Azure PowerShell version 5.6.0 or higher. -## File name --The general naming convention for the parameter file is to include _parameters_ in the Bicep file name. For example, if your Bicep file is named _azuredeploy.bicep_, your parameter file is named _azuredeploy.parameters.json_. This naming convention helps you see the connection between the Bicep file and the parameters. --To deploy to different environments, you create more than one parameter file. When you name the parameter files, identify their use such as development and production. For example, use _azuredeploy.parameters-dev.json_ and _azuredeploy.parameters-prod.json_ to deploy resources. - ## Parameter precedence -You can use inline parameters and a local parameter file in the same deployment operation. For example, you can specify some values in the local parameter file and add other values inline during deployment. If you provide values for a parameter in both the local parameter file and inline, the inline value takes precedence. +You can use inline parameters and a local parameters file in the same deployment operation. 
For example, you can specify some values in the local parameters file and add other values inline during deployment. If you provide values for a parameter in both the local parameters file and inline, the inline value takes precedence (an example follows below).

-It's possible to use an external parameter file, by providing the URI to the file. When you use an external parameter file, you can't pass other values either inline or from a local file. All inline parameters are ignored. Provide all parameter values in the external file.
+It's possible to use an external parameters file by providing the URI to the file. When you use an external parameters file, you can't pass other values either inline or from a local file. All inline parameters are ignored. Provide all parameter values in the external file.

## Parameter name conflicts |
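As a quick illustration of the precedence rules described above, the following Azure CLI sketch passes both a local parameters file and an inline value. The file, deployment, and parameter names reuse the earlier examples; the inline value `contoso` is only an illustration and overrides whatever `storagePrefix` value the parameters file contains:

```azurecli
# The inline value for storagePrefix takes precedence over the value in storage.bicepparam.
az deployment group create \
  --name ExampleDeployment \
  --resource-group ExampleGroup \
  --template-file storage.bicep \
  --parameters @storage.bicepparam \
  --parameters storagePrefix=contoso
```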
azure-resource-manager | Visual Studio Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/visual-studio-code.md | Title: Create Bicep files by using Visual Studio Code description: Describes how to create Bicep files by using Visual Studio Code Previously updated : 05/12/2023 Last updated : 06/05/2023 # Create Bicep files by using Visual Studio Code You can deploy Bicep files directly from Visual Studio Code. Select **Deploy Bic ### Generate parameters file -This command creates a parameter file in the same folder as the Bicep file. The new parameter file name is `<bicep-file-name>.parameters.json`. +This command creates a parameter file in the same folder as the Bicep file. You can choose to create a Bicep parameter file or a JSON parameter file. The new Bicep parameter file name is `<bicep-file-name>.bicepparam`, while the new JSON parameter file name is `<bicep-file-name>.parameters.json`. ### Import Kubernetes manifest (Preview) |
azure-resource-manager | Request Just In Time Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/request-just-in-time-access.md | -Consumers of your managed application may be reluctant to grant you permanent access to the managed resource group. As a publisher of a managed application, you might prefer that consumers know exactly when you need to access the managed resources. To give consumers greater control over granting access to managed resources, Azure Managed Applications provides a feature called just-in-time (JIT) access. This feature is currently in preview. -+Consumers of your managed application may be reluctant to grant you permanent access to the managed resource group. As a publisher of a managed application, you might prefer that consumers know exactly when you need to access the managed resources. To give consumers greater control over granting access to managed resources, Azure Managed Applications provides a feature called just-in-time (JIT) access. JIT access enables you to request elevated access to a managed application's resources for troubleshooting or maintenance. You always have read-only access to the resources, but for a specific time period you can have greater access. The work flow for granting access is: The principal ID of the account requesting JIT access must be explicitly include ## Next steps -To learn about approving requests for JIT access, see [Approve just-in-time access in Azure Managed Applications](approve-just-in-time-access.md). +To learn about approving requests for JIT access, see [Approve just-in-time access in Azure Managed Applications](approve-just-in-time-access.md). |
backup | Backup Azure Enhanced Soft Delete About | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-enhanced-soft-delete-about.md | Title: Overview of enhanced soft delete for Azure Backup (preview) description: This article gives an overview of enhanced soft delete for Azure Backup. Previously updated : 05/15/2023 Last updated : 06/16/2023 The key benefits of enhanced soft delete are: ## Supported regions - Enhanced soft delete is available in all Azure public regions.-- Soft delete of recovery points is currently in preview in West Central US, North Europe, and Australia East. Support in other regions will be added shortly.+- Soft delete of recovery points is currently in preview in West Central US, Australia East, North Europe, South Central US, Australia Central, Australia Central 2, Canada East, India Central, India South, Japan West, Japan East, Korea Central, Korea South, France South, France Central, Sweden Central, Sweden South, West Europe, UK South, Australia South East, Brazil South, Brazil South East, Canada Central, UK West. + ## Supported scenarios - Enhanced soft delete is supported for Recovery Services vaults and Backup vaults. Also, it's supported for new and existing vaults. This feature helps to retain these recovery points for an additional duration, a >[!Note] >- Soft delete of recovery points is not supported for log recovery points in SQL and SAP HANA workloads.->- Thisfeature is currently available in selected Azure regions only. [Learn more](#supported-scenarios). +>- This feature is currently available in selected Azure regions only. [Learn more](#supported-scenarios). ## Pricing |
bastion | Bastion Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md | No. You can access your virtual machine from the Azure portal using your browser ### <a name="native-client"></a>Can I connect to my VM using a native client? -Yes. You can connect to a VM from your local computer using a native client. See [Connect to a VM using a native client](connect-native-client-windows.md). +Yes. You can connect to a VM from your local computer using a native client. See [Connect to a VM using a native client](native-client.md). ### <a name="agent"></a>Do I need an agent running in the Azure virtual machine? |
bastion | Connect Ip Address | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-ip-address.md | Before you begin these steps, verify that you have the following environment set ## Connect to VM - native client -You can connect to VMs using a specified IP address with native client via SSH, RDP, or tunnelling. Note that this feature does not support Azure Active Directory authentication or custom port and protocol at the moment. To learn more about configuring native client support, see [Connect to a VM - native client](connect-native-client-windows.md). Use the following commands as examples: +You can connect to VMs using a specified IP address with native client via SSH, RDP, or tunnelling. Note that this feature does not support Azure Active Directory authentication or custom port and protocol at the moment. To learn more about configuring native client support, see [Configure Bastion native client support](native-client.md). Use the following commands as examples: **RDP:** |
bastion | Connect Native Client Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-native-client-windows.md | - Title: 'Connect to a VM using a native client and Azure Bastion'- -description: Learn how to connect to a VM from a Windows computer by using Bastion and a native client. --- Previously updated : 05/18/2023----# Connect to a VM using a native client --This article helps you configure your Bastion deployment, and then connect to a VM in the VNet using the native client (SSH or RDP) on your local computer. The native client feature lets you connect to your target VMs via Bastion using Azure CLI, and expands your sign-in options to include local SSH key pair and Azure Active Directory (Azure AD). Additionally with this feature, you can now also upload or download files, depending on the connection type and client. ---Your capabilities on the VM when connecting via native client are dependent on what is enabled on the native client. Controlling access to features such as file transfer via Bastion isn't supported. --> [!NOTE] -> This configuration requires the Standard SKU tier for Azure Bastion. --After you deploy this feature, there are two different sets of connection instructions. --* [Connect to a VM from the native client on a Windows local computer](#connect). This lets you do the following: -- * Connect using SSH or RDP. - * [Upload and download files](vm-upload-download-native.md#rdp) over RDP. - * If you want to connect using SSH and need to upload files to your target VM, use the **az network bastion tunnel** command instead. --* [Connect to a VM using the **az network bastion tunnel** command](#connect-tunnel). This lets you do the following: -- * Use native clients on *non*-Windows local computers (example: a Linux PC). - * Use the native client of your choice. (This includes the Windows native client.) - * Connect using SSH or RDP. (The bastion tunnel doesn't relay web servers or hosts.) - * Set up concurrent VM sessions with Bastion. - * [Upload files](vm-upload-download-native.md#tunnel-command) to your target VM from your local computer. File download from the target VM to the local client is currently not supported for this command. --**Limitations** --* Signing in using an SSH private key stored in Azure Key Vault isnΓÇÖt supported with this feature. Before signing in to your Linux VM using an SSH key pair, download your private key to a file on your local machine. -* This feature isn't supported on Cloud Shell. --## <a name="prereq"></a>Prerequisites --Before you begin, verify that you have the following prerequisites: --* The latest version of the CLI commands (version 2.32 or later) is installed. For information about installing the CLI commands, see [Install the Azure CLI](/cli/azure/install-azure-cli) and [Get Started with Azure CLI](/cli/azure/get-started-with-azure-cli). -* An Azure virtual network. -* A virtual machine in the virtual network. -* The VM's Resource ID. The Resource ID can be easily located in the Azure portal. Go to the Overview page for your VM and select the *JSON View* link to open the Resource JSON. Copy the Resource ID at the top of the page to your clipboard to use later when connecting to your VM. 
-* If you plan to sign in to your virtual machine using your Azure AD credentials, make sure your virtual machine is set up using one of the following methods: - * [Enable Azure AD sign-in for a Windows VM](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md) or [Linux VM](../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md). - * [Configure your Windows VM to be Azure AD-joined](../active-directory/devices/concept-azure-ad-join.md). - * [Configure your Windows VM to be hybrid Azure AD-joined](../active-directory/devices/concept-azure-ad-join-hybrid.md). --## <a name="secure "></a>Secure your native client connection --If you want to further secure your native client connection, you can limit port access by only providing access to port 22/3389. To restrict port access, you must deploy the following NSG rules on your AzureBastionSubnet to allow access to select ports and deny access from any other ports. ---## <a name="configure"></a>Configure the native client support feature --You can configure this feature by either modifying an existing Bastion deployment, or you can deploy Bastion with the feature configuration already specified. --### To modify an existing Bastion deployment --If you've already deployed Bastion to your VNet, modify the following configuration settings: --1. Navigate to the **Configuration** page for your Bastion resource. Verify that the SKU Tier is **Standard**. If it isn't, select **Standard**. -1. Select the box for **Native Client Support**, then apply your changes. -- :::image type="content" source="./media/connect-native-client-windows/update-host.png" alt-text="Screenshot that shows settings for updating an existing host with Native Client Support box selected." lightbox="./media/connect-native-client-windows/update-host.png"::: --### To deploy Bastion with the native client feature --If you haven't already deployed Bastion to your VNet, you can deploy with the native client feature specified by deploying Bastion using manual settings. For steps, see [Tutorial - Deploy Bastion with manual settings](tutorial-create-host-portal.md#createhost). When you deploy Bastion, specify the following settings: --1. On the **Basics** tab, for **Instance Details -> Tier** select **Standard**. Native client support requires the Standard SKU. -- :::image type="content" source="./media/connect-native-client-windows/standard.png" alt-text="Settings for a new bastion host with Standard SKU selected." lightbox="./media/connect-native-client-windows/standard.png"::: -1. Before you create the bastion host, go to the **Advanced** tab and check the box for **Native Client Support**, along with the checkboxes for any other additional features that you want to deploy. -- :::image type="content" source="./media/connect-native-client-windows/new-host.png" alt-text="Screenshot that shows settings for a new bastion host with Native Client Support box selected." lightbox="./media/connect-native-client-windows/new-host.png"::: --1. Click **Review + create** to validate, then click **Create** to deploy your Bastion host. --## <a name="verify"></a>Verify roles and ports --Verify that the following roles and ports are configured in order to connect to the VM. --### Required roles --* Reader role on the virtual machine. -* Reader role on the NIC with private IP of the virtual machine. -* Reader role on the Azure Bastion resource. -* Virtual Machine Administrator Login or Virtual Machine User Login role, if youΓÇÖre using the Azure AD sign-in method. 
You only need to do this if you're enabling Azure AD login using the processes outlined in one of these articles: -- * [Azure Windows VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md) - * [Azure Linux VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md) --### Ports --To connect to a Linux VM using native client support, you must have the following ports open on your Linux VM: --* Inbound port: SSH (22) *or* -* Inbound port: Custom value (youΓÇÖll then need to specify this custom port when you connect to the VM via Azure Bastion) --To connect to a Windows VM using native client support, you must have the following ports open on your Windows VM: --* Inbound port: RDP (3389) *or* -* Inbound port: Custom value (youΓÇÖll then need to specify this custom port when you connect to the VM via Azure Bastion) --To learn about how to best configure NSGs with Azure Bastion, see [Working with NSG access and Azure Bastion](bastion-nsg.md). --## <a name="connect"></a>Connect to VM - Windows native client --This section helps you connect to your virtual machine from the native client on a local Windows computer. If you want to upload and download files after connecting, you must use an RDP connection. For more information about file transfers, see [Upload or download files](vm-upload-download-native.md). --Use the example that corresponds to the type of target VM to which you want to connect. --* [Windows VM](#connect-windows) -* [Linux VM](#connect-linux) --### <a name="connect-windows"></a>Connect to a Windows VM --1. Sign in to your Azure account. If you have more than one subscription, select the subscription containing your Bastion resource. -- ```azurecli - az login - az account list - az account set --subscription "<subscription ID>" - ``` --1. Sign in to your target Windows VM using one of the following example options. If you want to specify a custom port value, you should also include the field **--resource-port** in the sign-in command. -- **RDP:** -- To connect via RDP, use the following command. YouΓÇÖll then be prompted to input your credentials. You can use either a local username and password, or your Azure AD credentials. For more information, see [Azure Windows VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md). -- ```azurecli - az network bastion rdp --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" - ``` -- > [!IMPORTANT] - > Remote connection to VMs that are joined to Azure AD is allowed only from Windows 10 or later PCs that are Azure AD registered (starting with Windows 10 20H1), Azure AD joined, or hybrid Azure AD joined to the *same* directory as the VM. -- **SSH:** -- The extension can be installed by running, ```az extension add --name ssh```. To sign in using an SSH key pair, use the following example. -- ```azurecli - az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>" - ``` -- Once you sign in to your target VM, the native client on your computer opens up with your VM session; **MSTSC** for RDP sessions, and **SSH CLI extension (az ssh)** for SSH sessions. --### <a name="connect-linux"></a>Connect to a Linux VM --1. Sign in to your Azure account. If you have more than one subscription, select the subscription containing your Bastion resource. 
-- ```azurecli - az login - az account list - az account set --subscription "<subscription ID>" - ``` --1. Sign in to your target Linux VM using one of the following example options. If you want to specify a custom port value, you should also include the field **--resource-port** in the sign-in command. -- **Azure AD:** -- If youΓÇÖre signing in to an Azure AD login-enabled VM, use the following command. For more information, see [Azure Linux VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md). -- ```azurecli - az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId or VMSSInstanceResourceId>" --auth-type "AAD" - ``` -- **SSH:** -- The extension can be installed by running, ```az extension add --name ssh```. To sign in using an SSH key pair, use the following example. -- ```azurecli - az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId or VMSSInstanceResourceId>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>" - ``` -- **Username/password:** -- If youΓÇÖre signing in using a local username and password, use the following command. YouΓÇÖll then be prompted for the password for the target VM. -- ```azurecli - az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId or VMSSInstanceResourceId>" --auth-type "password" --username "<Username>" - ``` -- 1. Once you sign in to your target VM, the native client on your computer opens up with your VM session; **MSTSC** for RDP sessions, and **SSH CLI extension (az ssh)** for SSH sessions. --## <a name="connect-tunnel"></a>Connect to VM - other native clients --This section helps you connect to your virtual machine from native clients on *non*-Windows local computers (example: a Linux PC) using the **az network bastion tunnel** command. You can also connect using this method from a Windows computer. This is helpful when you require an SSH connection and want to upload files to your VM. The bastion tunnel supports RDP/SSH connection, but doesn't relay web servers or hosts. --This connection supports file upload from the local computer to the target VM. For more information, see [Upload files](vm-upload-download-native.md). --1. Sign in to your Azure account. If you have more than one subscription, select the subscription containing your Bastion resource. -- ```azurecli - az login - az account list - az account set --subscription "<subscription ID>" - ``` --1. Open the tunnel to your target VM using the following command. -- ```azurecli - az network bastion tunnel --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId or VMSSInstanceResourceId>" --resource-port "<TargetVMPort>" --port "<LocalMachinePort>" - ``` --1. Connect to your target VM using SSH or RDP, the native client of your choice, and the local machine port you specified in Step 2. -- For example, you can use the following command if you have the OpenSSH client installed on your local computer: -- ```azurecli - ssh <username>@127.0.0.1 -p <LocalMachinePort> - ``` --## <a name="connect-IP"></a>Connect to VM - IP Address --This section helps you connect to your on-premises, non-Azure, and Azure virtual machines via Azure Bastion using a specified private IP address from native client. 
You can replace `--target-resource-id` with `--target-ip-address` in any of the above commands with the specified IP address to connect to your VM. --> [!Note] -> This feature does not support support Azure AD authentication or custom port and protocol at the moment. For more information on IP-based connection, see [Connect to a VM - IP address](connect-ip-address.md). --Use the following commands as examples: --- **RDP:** - - ```azurecli - az network bastion rdp --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress> - ``` - - **SSH:** - - ```azurecli - az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-addres "<VMIPAddress>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>" - ``` - - **Tunnel:** - - ```azurecli - az network bastion tunnel --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>" --resource-port "<TargetVMPort>" --port "<LocalMachinePort>" - ``` ---## Next steps --[Upload or download files](vm-upload-download-native.md) |
bastion | Connect Vm Native Client Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-vm-native-client-linux.md | + + Title: 'Connect to a VM using Bastion - Linux native client' ++description: Learn how to connect to a VM from a Linux computer by using Bastion and a native client. +++ Last updated : 06/12/2023++++# Connect to a VM using Bastion and a Linux native client ++This article helps you connect to a VM in the VNet using the native client (SSH or RDP) on your local computer using the **az network bastion tunnel** command. The native client feature lets you connect to your target VMs via Bastion using Azure CLI, and expands your sign-in options to include local SSH key pair and Azure Active Directory (Azure AD). For more information and steps to configure Bastion for native client connections, see [Configure Bastion for native client connections](native-client.md). Connections via native client require the Bastion Standard SKU. +++After you've configured Bastion for native client support, you can connect to a VM using the **az network bastion tunnel** command. When you use this command, you can do the following: ++ * Use native clients on *non*-Windows local computers (example: a Linux computer). + * Use the native client of your choice. (This includes the Windows native client.) + * Connect using SSH or RDP. (The bastion tunnel doesn't relay web servers or hosts.) + * Set up concurrent VM sessions with Bastion. + * [Upload files](vm-upload-download-native.md#tunnel-command) to your target VM from your local computer. File download from the target VM to the local client is currently not supported for this command. ++Limitations: ++* Signing in using an SSH private key stored in Azure Key Vault isnΓÇÖt supported with this feature. Before signing in to your Linux VM using an SSH key pair, download your private key to a file on your local machine. +* This feature isn't supported on Cloud Shell. ++## <a name="prereq"></a>Prerequisites +++## <a name="verify"></a>Verify roles and ports ++Verify that the following roles and ports are configured in order to connect to the VM. +++## <a name="connect-tunnel"></a>Connect to a VM ++This section helps you connect to your virtual machine from native clients on *non*-Windows local computers (example: Linux) using the **az network bastion tunnel** command. You can also connect using this method from a Windows computer. This is helpful when you require an SSH connection and want to upload files to your VM. The bastion tunnel supports RDP/SSH connection, but doesn't relay web servers or hosts. ++This connection supports file upload from the local computer to the target VM. For more information, see [Upload files](vm-upload-download-native.md). +++## <a name="connect-IP"></a>Connect to VM via IP Address +++Use the following command as an example: + + **Tunnel:** + + ```azurecli + az network bastion tunnel --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>" --resource-port "<TargetVMPort>" --port "<LocalMachinePort>" + ``` ++## Next steps ++[Upload or download files](vm-upload-download-native.md) |
bastion | Connect Vm Native Client Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-vm-native-client-windows.md | + + Title: 'Connect to a VM using Bastion - Windows native client' ++description: Learn how to connect to a VM from a Windows computer by using Bastion and a native client. +++ Last updated : 06/12/2023++++# Connect to a VM using Bastion and the Windows native client ++This article helps you connect to a VM in the VNet using the native client (SSH or RDP) on your local Windows computer. The native client feature lets you connect to your target VMs via Bastion using Azure CLI, and expands your sign-in options to include local SSH key pair and Azure Active Directory (Azure AD). For more information and steps to configure Bastion for native client connections, see [Configure Bastion for native client connections](native-client.md). Connections via native client require the Bastion Standard SKU. +++After you've configured Bastion for native client support, you can connect to a VM using the native Windows client. This lets you do the following: ++ * Connect using SSH or RDP. + * [Upload and download files](vm-upload-download-native.md#rdp) over RDP. + * If you want to connect using SSH and need to upload files to your target VM, you can use the instructions for the [az network bastion tunnel](connect-vm-native-client-linux.md) command instead. ++Limitations: ++* Signing in using an SSH private key stored in Azure Key Vault isnΓÇÖt supported with this feature. Before signing in to your Linux VM using an SSH key pair, download your private key to a file on your local machine. +* This feature isn't supported on Cloud Shell. ++## <a name="prereq"></a>Prerequisites +++## <a name="verify"></a>Verify roles and ports ++Verify that the following roles and ports are configured in order to connect to the VM. +++## <a name="connect-windows"></a>Connect to a Windows VM ++1. Sign in to your Azure account. If you have more than one subscription, select the subscription containing your Bastion resource. ++ ```azurecli + az login + az account list + az account set --subscription "<subscription ID>" + ``` ++1. Sign in to your target Windows VM using one of the following example options. If you want to specify a custom port value, you should also include the field **--resource-port** in the sign-in command. ++ **RDP:** ++ To connect via RDP, use the following command. YouΓÇÖll then be prompted to input your credentials. You can use either a local username and password, or your Azure AD credentials. For more information, see [Azure Windows VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md). ++ ```azurecli + az network bastion rdp --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" + ``` ++ > [!IMPORTANT] + > Remote connection to VMs that are joined to Azure AD is allowed only from Windows 10 or later PCs that are Azure AD registered (starting with Windows 10 20H1), Azure AD joined, or hybrid Azure AD joined to the *same* directory as the VM. ++ **SSH:** ++ The extension can be installed by running, ```az extension add --name ssh```. To sign in using an SSH key pair, use the following example. 
+
+ ```azurecli
+ az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>"
+ ```
+
+ Once you sign in to your target VM, the native client on your computer opens up with your VM session; **MSTSC** for RDP sessions, and **SSH CLI extension (az ssh)** for SSH sessions.
+
+## <a name="connect-linux"></a>Connect to a Linux VM
+
+1. Sign in to your Azure account. If you have more than one subscription, select the subscription containing your Bastion resource.
+
+ ```azurecli
+ az login
+ az account list
+ az account set --subscription "<subscription ID>"
+ ```
+
+1. Sign in to your target Linux VM using one of the following example options. If you want to specify a custom port value, you should also include the field **--resource-port** in the sign-in command.
+
+ **Azure AD:**
+
+ If you're signing in to an Azure AD login-enabled VM, use the following command. For more information, see [Azure Linux VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md).
+
+ ```azurecli
+ az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId or VMSSInstanceResourceId>" --auth-type "AAD"
+ ```
+
+ **SSH:**
+
+ The extension can be installed by running ```az extension add --name ssh```. To sign in using an SSH key pair, use the following example.
+
+ ```azurecli
+ az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId or VMSSInstanceResourceId>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>"
+ ```
+
+ **Username/password:**
+
+ If you're signing in using a local username and password, use the following command. You'll then be prompted for the password for the target VM.
+
+ ```azurecli
+ az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId or VMSSInstanceResourceId>" --auth-type "password" --username "<Username>"
+ ```
+
+ 1. Once you sign in to your target VM, the native client on your computer opens up with your VM session; **MSTSC** for RDP sessions, and **SSH CLI extension (az ssh)** for SSH sessions.
+
+## <a name="connect-IP"></a>Connect to VM via IP Address
+++
+Use the following commands as examples:
+
+ **RDP:**
+ 
+ ```azurecli
+ az network bastion rdp --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>"
+ ```
+ 
+ **SSH:**
+ 
+ ```azurecli
+ az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>"
+ ```
++
+## Next steps
++
+[Upload or download files](vm-upload-download-native.md) |
bastion | Native Client | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/native-client.md | + + Title: 'Configure Bastion for native client connections' ++description: Learn how to configure Bastion for native client connections. +++ Last updated : 06/12/2023++++# Configure Bastion for native client connections ++This article helps you configure your Bastion deployment to accept connections from the native client (SSH or RDP) on your local computer to VMs located in the VNet. The native client feature lets you connect to your target VMs via Bastion using Azure CLI, and expands your sign-in options to include local SSH key pair and Azure Active Directory (Azure AD). Additionally, you can also upload or download files, depending on the connection type and client. +++* Your capabilities on the VM when connecting via native client are dependent on what is enabled on the native client. +* You can configure this feature by either modifying an existing Bastion deployment, or you can deploy Bastion with the feature configuration already specified. ++> [!IMPORTANT] +> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)] +> ++## Deploy Bastion with the native client feature ++If you haven't already deployed Bastion to your VNet, you can deploy with the native client feature specified by deploying Bastion using manual settings. For steps, see [Tutorial - Deploy Bastion with manual settings](tutorial-create-host-portal.md#createhost). When you deploy Bastion, specify the following settings: ++1. On the **Basics** tab, for **Instance Details -> Tier** select **Standard**. Native client support requires the Standard SKU. ++ :::image type="content" source="./media/native-client/standard.png" alt-text="Settings for a new bastion host with Standard SKU selected." lightbox="./media/native-client/standard.png"::: +1. Before you create the bastion host, go to the **Advanced** tab and check the box for **Native Client Support**, along with the checkboxes for any other features that you want to deploy. ++ :::image type="content" source="./media/native-client/new-host.png" alt-text="Screenshot that shows settings for a new bastion host with Native Client Support box selected." lightbox="./media/native-client/new-host.png"::: ++1. Select **Review + create** to validate, then select **Create** to deploy your Bastion host. ++## Modify an existing Bastion deployment ++If you've already deployed Bastion to your VNet, modify the following configuration settings: ++1. Navigate to the **Configuration** page for your Bastion resource. Verify that the SKU Tier is **Standard**. If it isn't, select **Standard**. +1. Select the box for **Native Client Support**, then apply your changes. ++ :::image type="content" source="./media/native-client/update-host.png" alt-text="Screenshot that shows settings for updating an existing host with Native Client Support box selected." lightbox="./media/native-client/update-host.png"::: ++## <a name="secure "></a>Secure your native client connection ++If you want to further secure your native client connection, you can limit port access by only providing access to port 22/3389. To restrict port access, you must deploy the following NSG rules on your AzureBastionSubnet to allow access to select ports and deny access from any other ports. +++## Connecting to VMs ++After you deploy this feature, there are different connection instructions, depending on the host computer you're connecting from. 
+
+* [Connect from the native client on a Windows computer](connect-vm-native-client-windows.md). This lets you do the following:
+
+ * Connect using SSH or RDP.
+ * [Upload and download files](vm-upload-download-native.md#rdp) over RDP.
+ * If you want to connect using SSH and need to upload files to your target VM, you can use the instructions for the [az network bastion tunnel](connect-vm-native-client-linux.md) command instead.
+
+* [Connect using the **az network bastion tunnel** command](connect-vm-native-client-linux.md). This lets you do the following:
+
+ * Use native clients on *non*-Windows local computers (example: a Linux PC).
+ * Use the native client of your choice. (This includes the Windows native client.)
+ * Connect using SSH or RDP. (The bastion tunnel doesn't relay web servers or hosts.)
+ * Set up concurrent VM sessions with Bastion.
+ * [Upload files](vm-upload-download-native.md#tunnel-command) to your target VM from your local computer. File download from the target VM to the local client is currently not supported for this command.
+
+### Limitations
+
+* Signing in using an SSH private key stored in Azure Key Vault isn't supported with this feature. Before signing in to a Linux VM using an SSH key pair, download your private key to a file on your local machine.
+* Connecting using a native client isn't supported on Cloud Shell.
+
+## Next steps
+
+* [Connect from a Windows native client](connect-vm-native-client-windows.md)
+* [Connect using the az network bastion tunnel command](connect-vm-native-client-linux.md)
+* [Upload or download files](vm-upload-download-native.md) |
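The steps above use the Azure portal. If you prefer the CLI, here's a minimal sketch for enabling the feature on an existing host, assuming the `az network bastion update` command and its `--enable-tunneling` flag (the CLI name for native client support) are available in your Azure CLI version:

```azurecli
# Upgrade an existing Bastion host to the Standard SKU and turn on native client (tunneling) support.
az network bastion update \
  --name "<BastionName>" \
  --resource-group "<ResourceGroupName>" \
  --sku Standard \
  --enable-tunneling true
```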
bastion | Vm Upload Download Native | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/vm-upload-download-native.md | -Azure Bastion offers support for file transfer between your target VM and local computer using Bastion and a native RDP or native SSH client. To learn more about native client support, refer to [Connect to a VM using the native client](connect-native-client-windows.md). While it may be possible to use third-party clients and tools to upload or download files, this article focuses on working with supported native clients. +Azure Bastion offers support for file transfer between your target VM and local computer using Bastion and a native RDP or native SSH client. To learn more about native client support, refer to [Configure Bastion native client support](native-client.md). While it may be possible to use third-party clients and tools to upload or download files, this article focuses on working with supported native clients. * File transfers are supported using the native client only. You can't upload or download files using PowerShell or via the Azure portal. * To both [upload and download files](#rdp), you must use the Windows native client and RDP. Azure Bastion offers support for file transfer between your target VM and local ## <a name="rdp"></a>Upload and download files - RDP -The steps in this section apply when connecting to a target VM from a Windows local computer using the native Windows client and RDP. The **az network bastion rdp** command uses the native client MSTSC. Once connected to the target VM, you can upload and download files using **right-click**, then **Copy** and **Paste**. To learn more about this command and how to connect, see [Connect to a VM using a native client](connect-native-client-windows.md). +The steps in this section apply when connecting to a target VM from a Windows local computer using the native Windows client and RDP. The **az network bastion rdp** command uses the native client MSTSC. Once connected to the target VM, you can upload and download files using **right-click**, then **Copy** and **Paste**. To learn more about this command and how to connect, see [Connect from a Windows native client](connect-vm-native-client-windows.md). > [!NOTE] > File transfer over SSH is not supported using this method. Instead, use the [az network bastion tunnel command](#tunnel-command) to upload files over SSH. The steps in this section apply when connecting to a target VM from a Windows lo ## <a name="tunnel-command"></a>Upload files - SSH and RDP The steps in this section apply to native clients other than Windows, as well as Windows native clients that want to connect over SSH to upload files.-This section helps you upload files from your local computer to your target VM over SSH or RDP using the **az network bastion tunnel** command. To learn more about the tunnel command and how to connect, see [Connect to a VM using a native client](connect-native-client-windows.md). +This section helps you upload files from your local computer to your target VM over SSH or RDP using the **az network bastion tunnel** command. To learn more about the tunnel command and how to connect, see [Connect from a Linux native client](connect-vm-native-client-linux.md). > [!NOTE] > This command can be used to upload files from your local computer to the target VM. File download is not supported. |
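For the tunnel-based upload scenario described above, one common approach (an assumption, not a step prescribed by the article) is to open the tunnel to the VM's SSH port and then copy files with `scp` against the local tunnel port:

```azurecli
# Open a tunnel from local port 50022 to SSH (port 22) on the target VM.
az network bastion tunnel --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --resource-port 22 --port 50022

# In a second terminal, upload a file over the tunneled SSH connection.
scp -P 50022 ./myfile.txt <username>@127.0.0.1:~/
```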
communication-services | Call Automation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-automation.md | Title: Call Automation overview description: Learn about Azure Communication Services Call Automation. - -++ Last updated 09/06/2022 -> Call Automation currently doesn't interoperate with Microsoft Teams. Actions like making or redirecting a call to a Teams user or adding them to a call using Call Automation aren't supported. +> Call Automation currently doesn't interoperate with Microsoft Teams. Actions like making or redirecting a call to a Teams user or adding them to a call using Call Automation aren't supported. +> Call Automation currently doesn't support [Rooms](../rooms/room-concept.md) calls. ## Common use cases The Call Automation events are sent to the web hook callback URI specified when | CallTransferAccepted | Your application's call leg has been transferred to another endpoint | | CallTransferFailed | The transfer of your application's call leg failed | | AddParticipantSucceeded | Your application added a participant |-|AddParticipantFailed | Your application was unable to add a participant | -| ParticipantUpdated | The status of a participant changed while your application's call leg was connected to a call | +| AddParticipantFailed | Your application was unable to add a participant | +| RemoveParticipantSucceeded | Your application has successfully removed a participant from the call. | +| RemoveParticipantFailed | Your application was unable to remove a participant from the call. | +| ParticipantsUpdated | The status of a participant changed while your application's call leg was connected to a call | | PlayCompleted | Your application successfully played the audio file provided | | PlayFailed | Your application failed to play audio |-| PlayCanceled | Your application canceled the play operation | +| PlayCanceled | The requested play action has been canceled. | | RecognizeCompleted | Recognition of user input was successfully completed |+| RecognizeCanceled | The requested recognize action has been canceled. | | RecognizeFailed | Recognition of user input was unsuccessful <br/>*to learn more about recognize action events view our how-to guide for [gathering user input](../../how-tos/call-automation/recognize-action.md)*|-| RecognizeCanceled | Your application canceled the request to recognize user input | -+| RecordingStateChanged | Status of recording action has changed from active to inactive or vice versa. | To understand which events are published for different actions, refer to [this guide](../../how-tos/call-automation/actions-for-call-control.md) that provides code samples as well as sequence diagrams for various call control flows. To learn how to secure the callback event delivery, refer to [this guide](../../how-tos/call-automation/secure-webhook-endpoint.md). -## Known issues --1. Using the incorrect IdentifierType for endpoints for `Transfer` requests (like using CommunicationUserIdentifier to specify a phone number) returns a 500 error instead of a 400 error code. Solution: Use the correct type, CommunicationUserIdentifier for Communication Users and PhoneNumberIdentifier for phone numbers. -2. Taking a pre-call action like Answer/Reject on the original call after redirected it gives a 200 success instead of failing on 'call not found'. -3. Transferring a call with more than two participants is currently not supported. -4. 
After transferring a call, you may receive two `CallDisconnected` events and will need to handle this behavior by ignoring the duplicate. - ## Next steps > [!div class="nextstepaction"] |
communication-services | Incoming Call Notification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/incoming-call-notification.md | Below is an example of an advanced filter on an Event Grid subscription watching Since the `IncomingCall` notification doesn't have a specific destination other than the Event Grid subscription you've created, you're free to associate any particular number to any endpoint in Azure Communication Services. For example, if you acquired a PSTN phone number of `+14255551212` and want to assign it to a user with an identity of `375f0e2f-e8db-4449-9bf7-2054b02e42b4` in your application, you'll maintain a mapping of that number to the identity. When an `IncomingCall` notification is sent matching the phone number in the **to** field, you'll invoke the `Redirect` API and supply the identity of the user. In other words, you maintain the number assignment within your application and route or answer calls at runtime. +## Best Practices +1. Event Grid requires you to prove ownership of your Webhook endpoint before it starts delivering events to that endpoint. This requirement prevents a malicious user from flooding your endpoint with events. If you're having trouble receiving events, ensure that the configured webhook is verified by handling the `SubscriptionValidationEvent`. For more information, see this [guide](../../../event-grid/webhook-event-delivery.md). +2. If your application doesn't respond to an incoming call event with a 200 OK status in time, Event Grid uses exponential backoff to retry sending the event. However, an incoming call only rings for 30 seconds, and acting on a call after that won't work. To avoid retries for expired or stale calls, we recommend setting the retry policy to a Max Event Delivery Attempts value of 2 and an Event Time to Live of 1 minute (the CLI sketch after this section shows one way to apply these values). These settings can be found under the Additional Features tab of the event subscription. Learn more about retries [here](../../../event-grid/delivery-and-retry.md). ++3. We recommend that you enable logging for your Event Grid resource to monitor events that failed to deliver. Navigate to the system topic under the Events tab of your Communication resource and enable logging from the Diagnostic settings. Failure logs can be found in the 'AegDeliveryFailureLogs' table. ++ ```sql + AegDeliveryFailureLogs + | limit 10 + | where Message has "incomingCall" + ``` + ## Next steps-- [Build a Call Automation application](../../quickstarts/call-automation/callflows-for-customer-interactions.md) to simulate a customer interaction.-- [Redirect an inbound PSTN call](../../quickstarts/call-automation/redirect-inbound-telephony-calls.md) to your resource.+- Try out the quickstart to [place an outbound call](../../quickstarts/call-automation/quickstart-make-an-outbound-call.md). |
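To apply the retry settings recommended in best practice 2 without going through the portal, a minimal Azure CLI sketch might look like the following. It assumes the event subscription lives on the Communication resource's system topic and that the `--max-delivery-attempts` and `--event-ttl` flags are available on the update command (otherwise, set the same values when creating the subscription):

```azurecli
# Limit retries for IncomingCall events: at most 2 delivery attempts, events expire after 1 minute.
az eventgrid system-topic event-subscription update \
  --name "<EventSubscriptionName>" \
  --resource-group "<ResourceGroupName>" \
  --system-topic-name "<SystemTopicName>" \
  --max-delivery-attempts 2 \
  --event-ttl 1
```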
communication-services | Enable User Engagement Tracking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/enable-user-engagement-tracking.md | -> By enabling this feature, you are acknowledging that you are enabling open/click tracking and giving consent to collect your customers' email activity +> By enabling this feature, you are acknowledging that you are enabling open/click tracking and giving consent to collect your customers' email activity. In this quickstart, you learn how to enable user engagement tracking for a verified domain in Azure Communication Services. In this quickstart, you learn how to enable user engagement tracking f 6. Click turn on to enable engagement tracking. -**Your email domain is now ready to send emails with user engagement tracking.** +**Your email domain is now ready to send emails with user engagement tracking. Be aware that user engagement tracking applies to HTML content and won't function if you submit the payload in plaintext.** You can now subscribe to Email User Engagement operational logs, which provide information related to 'open' and 'click' user engagement metrics for messages sent from the Email service. |
communication-services | Number Lookup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/telephony/number-lookup.md | cd NumberLookupQuickstart dotnet build ``` +### Connect to dev package feed +The private preview version of the SDK is published to a dev package feed. You can add the dev feed using the [NuGet CLI](https://docs.microsoft.com/nuget/reference/nuget-exe-cli-reference), which will add it to the NuGet.Config file. ++```console +nuget sources add -Name "Azure SDK for .NET Dev Feed" -Source "https://pkgs.dev.azure.com/azure-sdk/public/_packaging/azure-sdk-for-net/nuget/v3/index.json" +``` ++More detailed information and other options for connecting to the dev feed can be found in the [contributing guide](https://github.com/Azure/azure-sdk-for-net/blob/main/CONTRIBUTING.md#nuget-package-dev-feed). + ### Install the package While still in the application directory, install the Azure Communication Services PhoneNumbers client library for .NET package by using the following command. In this quickstart you learned how to: > [Number Lookup Concept](../../concepts/numbers/number-lookup-concept.md) > [!div class="nextstepaction"]-> [Number Lookup SDK](../../concepts/numbers/number-lookup-sdk.md) +> [Number Lookup SDK](../../concepts/numbers/number-lookup-sdk.md) |
communication-services | End Of Call Survey Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/end-of-call-survey-tutorial.md | This tutorial shows you how to use the Azure Communication Services End of Call - An active Communication Services resource. [Create a Communication Services resource](../quickstarts/create-communication-resource.md). Survey results are tied to single Communication Services resources. - An active Log Analytics Workspace, also known as Azure Monitor Logs. See [End of Call Survey Logs](../concepts/analytics/logs/end-of-call-survey-logs.md).-- To conduct a survey with custom questions using free form text, you will need an [App Insight resource](../../azure-monitor/app/create-workspace-resource.md#create-a-workspace-based-resource).+- To conduct a survey with custom questions using free form text, you need an [App Insight resource](../../azure-monitor/app/create-workspace-resource.md#create-a-workspace-based-resource). > [!IMPORTANT] Screenshare. However, each API value can be customized from a minimum of ## Custom questions In addition to using the End of Call Survey API, you can create your own survey questions and incorporate them with the End of Call Survey results. Below you'll find steps to incorporate your own custom questions into a survey and query the results of the End of Call Survey API and your own survey questions. - [Create App Insight resource](../../azure-monitor/app/create-workspace-resource.md#create-a-workspace-based-resource).-- Embed Azure AppInsights into your application [Click here to know more about App Insight initialization using plain JavaScript](../../azure-monitor/app/javascript-sdk.md). Alternatively, you can use NPM to get the App Insights dependences. [Click here to know more about App Insight initialization using NPM](../../azure-monitor/app/javascript-sdk-advanced.md).+- Embed Azure AppInsights into your application. [Click here to learn more about App Insight initialization using plain JavaScript](../../azure-monitor/app/javascript-sdk.md). Alternatively, you can use NPM to get the App Insights dependencies. [Click here to learn more about App Insight initialization using NPM](../../azure-monitor/app/javascript-sdk-configuration.md). - Build a UI in your application that will serve custom questions to the user and gather their input. Let's assume that your application gathered responses as a string in the `improvementSuggestion` variable. - Submit survey results to ACS and send user response using App Insights: In addition to using the End of Call Survey API, you can create your own survey q }); appInsights.flush(); ```-User responses that were sent using AppInsights will be available under your App Insights workspace. You can use [Workbooks](../../update-center/workbooks.md) to query between multiple resources, correlate call ratings and custom survey data. Steps to correlate the call ratings and custom survey data: +User responses that were sent using AppInsights are available under your App Insights workspace. You can use [Workbooks](../../update-center/workbooks.md) to query across multiple resources and correlate call ratings with custom survey data. Steps to correlate the call ratings and custom survey data: - Create new [Workbooks](../../update-center/workbooks.md) (Your ACS Resource -> Monitoring -> Workbooks -> New) and query Call Survey data from your ACS resource. 
- Add new query (+Add -> Add query) - Make sure `Data source` is `Logs` and `Resource type` is `Communication` - You can rename the query (Advanced Settings -> Step name [example: call-survey])- Please be aware that it could require a maximum of **2 hours** before the survey data becomes visible in the Azure portal.. Query the call rating data-+- Be aware that it could require a maximum of **2 hours** before the survey data becomes visible in the Azure portal. Query the call rating data- ```KQL ACSCallSurvey | where TimeGenerated > now(-24h) |
confidential-computing | Confidential Vm Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-vm-overview.md | With Secure Boot, trusted publishers must sign OS boot components (including the Azure confidential VMs use both the OS disk and a small encrypted virtual machine guest state (VMGS) disk of several megabytes. The VMGS disk contains the security state of the VM's components. Some components include the vTPM and UEFI bootloader. The small VMGS disk might incur a monthly storage cost. -From July 2022, encrypted OS disks will incur higher costs. This change is because encrypted OS disks use more space, and compression isn't possible. For more information, see [the pricing guide for managed disks](https://azure.microsoft.com/pricing/details/managed-disks/). +From July 2022, encrypted OS disks will incur higher costs. For more information, see [the pricing guide for managed disks](https://azure.microsoft.com/pricing/details/managed-disks/). ## Attestation and TPM |
confidential-computing | Skr Flow Confidential Containers Azure Container Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/skr-flow-confidential-containers-azure-container-instance.md | Secure Key Release (SKR) flow with Azure Key Vault (AKV) with confidential conta An [open sourced GitHub project "confidential side-cars"](https://github.com/microsoft/confidential-sidecar-containers) details how to build this container and what parameters/environment variables are required for you to prepare and run this side-car container. The current sidecar implementation provides various HTTP REST APIs that your primary application container can use to fetch the key from AKV. The integration through Microsoft Azure Attestation (MAA) is already built in. The preparation steps to run the side-car SKR container can be found in detail [here](https://github.com/microsoft/confidential-sidecar-containers/tree/main/examples/skr). -Your main application container application can call the side-car WEB API end points as defined in the example blow. Side-cars runs within the same container group and is a local endpoint to your application container. Full details of the API can be found [here](https://github.com/microsoft/confidential-sidecar-containers/blob/main/cmd/skr/README.md) +Your main application container can call the sidecar web API endpoints as defined in the example below. The sidecar runs within the same container group and is a local endpoint to your application container. Full details of the API can be found [here](https://github.com/microsoft/confidential-sidecar-containers/blob/main/cmd/skr/README.md). The `key/release` POST method expects a JSON of the following format: |
connectors | Connectors Create Api Servicebus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-servicebus.md | The Service Bus connector has different versions, based on [logic app workflow t For more information about managed identities, review [Authenticate access to Azure resources with managed identities in Azure Logic Apps](../logic-apps/create-managed-service-identity.md). +* By default, the Service Bus built-in connector operations are stateless. To run these operations in stateful mode, see [Enable stateful mode for stateless built-in connectors](../connectors/enable-stateful-affinity-built-in-connectors.md). + ## Considerations for Azure Service Bus operations ### Infinite loops The steps to add and use a Service Bus trigger differ based on whether you want #### Built-in connector trigger +The built-in Service Bus connector is a stateless connector by default. To run this connector's operations in stateful mode, see [Enable stateful mode for stateless built-in connectors](enable-stateful-affinity-built-in-connectors.md). + 1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer. 1. On the designer, select **Choose an operation**. The steps to add and use a Service Bus action differ based on whether you want t #### Built-in connector action +The built-in Service Bus connector is a stateless connector by default. To run this connector's operations in stateful mode, see [Enable stateful mode for stateless built-in connectors](enable-stateful-affinity-built-in-connectors.md). + 1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer. 1. Under the trigger or action where you want to add the action, select the plus sign (**+**), and then select **Add an action**. |
connectors | Enable Stateful Affinity Built In Connectors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/enable-stateful-affinity-built-in-connectors.md | + + Title: Enable stateful mode for stateless built-in connectors +description: Enable stateless built-in connectors to run in stateful mode for Standard workflows in Azure Logic Apps. ++ms.suite: integration ++ Last updated : 06/13/2023+++# Enable stateful mode for stateless built-in connectors in Azure Logic Apps +++In Standard logic app workflows, the following built-in, service provider-based connectors are stateless, by default: ++- Azure Service Bus +- SAP ++To run these connector operations in stateful mode, you must enable this capability. This how-to guide shows how to enable stateful mode for these connectors. ++## Prerequisites ++- An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). ++- The Standard logic app resource where you plan to create the workflow that uses the stateful mode-enabled connector operations. If you don't have this resource, [create your Standard logic app resource now](../logic-apps/create-single-tenant-workflows-azure-portal.md). ++- An Azure virtual network with a subnet to integrate with your logic app. If you don't have these items, see the following documentation: ++ - [Quickstart: Create a virtual network with the Azure portal](../virtual-network/quick-create-portal.md) + - [Add, change, or delete a virtual network subnet](../virtual-network/virtual-network-manage-subnet.md?tabs=azure-portal) ++## Enable stateful mode in the Azure portal ++1. In the [Azure portal](https://portal.azure.com), open the Standard logic app resource where you want to enable stateful mode for these connector operations. ++1. Enable virtual network integration for your logic app and add your logic app to the previously created subnet: ++ 1. On your logic app menu resource, under **Settings**, select **Networking**. ++ 1. In the **Outbound Traffic** section, select **VNET integration** > **Add VNet**. ++ 1. On the **Add VNet Integration** pane that opens, select your Azure subscription and your virtual network. ++ 1. Under **Subnet**, select **Select existing**. From the **Subnet** list, select the subnet where you want to add your logic app. ++ 1. When you're done, select **OK**. ++ On the **Networking** page, the **VNet integration** option now appears set to **On**, for example: ++ :::image type="content" source="media/enable-stateful-affinity-built-in-connectors/enable-virtual-network-integration.png" alt-text="Screenshot shows Azure portal, Standard logic app resource, Networking page, VNet integration set to On."::: ++ For general information about enabling virtual network integration with your app, see [Enable virtual network integration in Azure App Service](../app-service/configure-vnet-integration-enable.md). ++1. 
Next, update your logic app's underlying website configuration (**<*logic-app-name*>.azurewebsites.net**) by using either of the following tools: ++## Update website configuration for logic app ++After you enable virtual network integration for your logic app, you must update your logic app's underlying website configuration (**<*logic-app-name*>.azurewebsites.net**) by using one of the following methods: ++- [Azure Resource Management API](#azure-resource-management-api) (bearer token required) +- [Azure PowerShell](#azure-powershell) (bearer token *not* required) ++### Azure Resource Management API ++To complete this task with the [Azure Resource Management API - Update By Id](/rest/api/resources/resources/update-by-id), review the following requirements, syntax, and parameter values. ++#### Requirements ++OAuth authorization and the bearer token are required. To get the bearer token, follow these steps: ++1. While you're signed in to the Azure portal, open your web browser's developer tools (F12). ++1. Get the token by sending any management request, for example, by saving a workflow in your Standard logic app. ++#### Syntax ++Updates a resource by using the specified resource ID: ++`PATCH https://management.azure.com/{resourceId}?api-version=2021-04-01` ++#### Parameter values ++| Element | Value | +||--| +| HTTP request method | **PATCH** | +| <*resourceId*> | **subscriptions/{yourSubscriptionID}/resourcegroups/{yourResourceGroup}/providers/Microsoft.Web/sites/{websiteName}/config/web** | +| <*yourSubscriptionId*> | The ID for your Azure subscription | +| <*yourResourceGroup*> | The resource group that contains your logic app resource | +| <*websiteName*> | The name for your logic app resource, which is **mystandardlogicapp** in this example | +| HTTP request body | **{"properties": {"vnetPrivatePortsCount": "2"}}** | ++#### Example ++`https://management.azure.com/subscriptions/XXxXxxXX-xXXx-XxxX-xXXX-XXXXxXxXxxXX/resourcegroups/My-Standard-RG/providers/Microsoft.Web/sites/mystandardlogicapp/config/web?api-version=2021-02-01` ++### Azure PowerShell ++To complete this task with Azure PowerShell, review the following requirements, syntax, and values. This method doesn't require that you manually get the bearer token. 
++#### Syntax ++```powershell +Set-AzContext -Subscription {yourSubscriptionID} +$webConfig = Get-AzResource -ResourceId {resourceId} +$webConfig.Properties.vnetPrivatePortsCount = 2 +$webConfig | Set-AzResource -ResourceId {resourceId} +``` ++For more information, see the following documentation: ++- [Set-AzContext](/powershell/module/az.accounts/set-azcontext) +- [Get-AzResource](/powershell/module/az.resources/get-azresource) +- [Set-AzResource](/powershell/module/az.resources/set-azresource) ++#### Parameter values ++| Element | Value | +||--| +| <*yourSubscriptionID*> | The ID for your Azure subscription | +| <*resourceId*> | **subscriptions/{yourSubscriptionID}/resourcegroups/{yourResourceGroup}/providers/Microsoft.Web/sites/{websiteName}/config/web** | +| <*yourResourceGroup*> | The resource group that contains your logic app resource | +| <*websiteName*> | The name for your logic app resource, which is **mystandardlogicapp** in this example | ++#### Example ++`https://management.azure.com/subscriptions/XXxXxxXX-xXXx-XxxX-xXXX-XXXXxXxXxxXX/resourcegroups/My-Standard-RG/providers/Microsoft.Web/sites/mystandardlogicapp/config/web?api-version=2021-02-01` ++#### Troubleshoot errors ++##### Error: Reserved instance count is invalid ++If you get an error that says **Reserved instance count is invalid**, use the following workaround: ++```powershell +$webConfig.Properties.preWarmedInstanceCount = $webConfig.Properties.reservedInstanceCount +$webConfig.Properties.reservedInstanceCount = $null +$webConfig | Set-AzResource -ResourceId {resourceId} +``` ++Error example: ++```powershell +Set-AzResource : +{ + "Code":"BadRequest", + "Message":"siteConfig.ReservedInstanceCount is invalid. Please use the new property siteConfig.PreWarmedInstanceCount.", + "Target": null, + "Details": + [ + { + "Message":"siteConfig.ReservedInstanceCount is invalid. Please use the new property siteConfig.PreWarmedInstanceCount." + }, + { + "Code":"BadRequest" + }, + { + "ErrorEntity": + { + "ExtendedCode":"51021", + "MessageTemplate":"{0} is invalid. {1}", + "Parameters": + [ + "siteConfig.ReservedInstanceCount", "Please use the new property siteConfig.PreWarmedInstanceCount." + ], + "Code":"BadRequest", + "Message":"siteConfig.ReservedInstanceCount is invalid. Please use the new property siteConfig.PreWarmedInstanceCount." + } + } + ], + "Innererror": null +} +``` ++## Prevent context loss during resource scale-in events ++Resource scale-in events might cause the loss of context for built-in connectors with stateful mode enabled. To prevent this potential loss before such events can happen, fix the number of instances available for your logic app resource. This way, no scale-in events can happen to cause this potential context loss. ++1. On your logic app resource menu, under **Settings**, select **Scale out**. ++1. Under **App Scale Out**, set **Enforce Scale Out Limit** to **Yes**, which shows the **Maximum Scale Out Limit**. ++1. On the **Scale out** page, under **App Scale out**, set the number for **Always Ready Instances** to the same number as **Maximum Scale Out Limit** and **Maximum Burst**, which appears under **Plan Scale Out**, for example: ++ :::image type="content" source="media/enable-stateful-affinity-built-in-connectors/scale-in-settings.png" alt-text="Screenshot shows Azure portal, Standard logic app resource, Scale out page, and Always Ready Instances number set to match Maximum Scale Out Limit and Maximum Burst."::: ++1. When you're done, on the **Scale out** toolbar, select **Save**. 
++## Next steps ++- [Connect to Azure Service Bus](connectors-create-api-servicebus.md) +- [Connect to SAP](../logic-apps/logic-apps-using-sap-connector.md) |
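If you prefer to stay in the Azure CLI rather than call the REST API directly or use Azure PowerShell, the same site-config update described in this row can be sketched with `az rest`, which obtains the bearer token for you. The resource path, `api-version`, and `vnetPrivatePortsCount` value mirror the examples above; the subscription, resource group, and logic app names are placeholders.

```bash
# Sketch: set vnetPrivatePortsCount on the logic app's underlying site configuration.
# Replace the placeholder values with your own before running.
SUBSCRIPTION_ID="<your-subscription-id>"
RESOURCE_GROUP="<your-resource-group>"
LOGIC_APP_NAME="<your-logic-app-name>"

az rest --method patch \
  --url "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Web/sites/$LOGIC_APP_NAME/config/web?api-version=2021-02-01" \
  --body '{"properties": {"vnetPrivatePortsCount": "2"}}'
```

Because `az rest` signs the request with your current Azure CLI login, you don't need to copy a bearer token from the browser developer tools for this approach.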
container-apps | Tutorial Ci Cd Runners Jobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-ci-cd-runners-jobs.md | Refer to [jobs preview limitations](jobs.md#jobs-preview-restrictions) for a lis 1. To sign in to Azure from the CLI, run the following command and follow the prompts to complete the authentication process. + # [Bash](#tab/bash) ```bash az login ``` + # [PowerShell](#tab/powershell) + ```powershell + az login + ``` ++ + 1. Ensure you're running the latest version of the CLI via the `upgrade` command. + # [Bash](#tab/bash) ```bash az upgrade ``` + # [PowerShell](#tab/powershell) + ```powershell + az upgrade + ``` ++ + 1. Install the latest version of the Azure Container Apps CLI extension. + # [Bash](#tab/bash) ```bash az extension add --name containerapp --upgrade ``` + # [PowerShell](#tab/powershell) + ```powershell + az extension add --name containerapp --upgrade + ``` ++ + 1. Register the `Microsoft.App` and `Microsoft.OperationalInsights` namespaces if you haven't already registered them in your Azure subscription. + # [Bash](#tab/bash) ```bash az provider register --namespace Microsoft.App az provider register --namespace Microsoft.OperationalInsights ``` + # [PowerShell](#tab/powershell) + ```powershell + az provider register --namespace Microsoft.App + az provider register --namespace Microsoft.OperationalInsights + ``` ++ + 1. Define the environment variables that are used throughout this article. ::: zone pivot="container-apps-jobs-self-hosted-ci-cd-github-actions" + # [Bash](#tab/bash) ```bash RESOURCE_GROUP="jobs-sample" LOCATION="northcentralus" Refer to [jobs preview limitations](jobs.md#jobs-preview-restrictions) for a lis JOB_NAME="github-actions-runner-job" ``` + # [PowerShell](#tab/powershell) + ```powershell + $RESOURCE_GROUP="jobs-sample" + $LOCATION="northcentralus" + $ENVIRONMENT="env-jobs-sample" + $JOB_NAME="github-actions-runner-job" + ``` ++ + ::: zone-end ::: zone pivot="container-apps-jobs-self-hosted-ci-cd-azure-pipelines" + # [Bash](#tab/bash) ```bash RESOURCE_GROUP="jobs-sample" LOCATION="northcentralus" Refer to [jobs preview limitations](jobs.md#jobs-preview-restrictions) for a lis PLACEHOLDER_JOB_NAME="placeholder-agent-job" ``` + # [PowerShell](#tab/powershell) + ```powershell + $RESOURCE_GROUP="jobs-sample" + $LOCATION="northcentralus" + $ENVIRONMENT="env-jobs-sample" + $JOB_NAME="azure-pipelines-agent-job" + $PLACEHOLDER_JOB_NAME="placeholder-agent-job" + ``` ++ + ::: zone-end ## Create a Container Apps environment The Azure Container Apps environment acts as a secure boundary around container apps and jobs so they can share the same network and communicate with each other. +> [!NOTE] +> To create a Container Apps environment that's integrated with an existing virtual network, see [Provide a virtual network to an internal Azure Container Apps environment](vnet-custom-internal.md?tabs=bash). + 1. Create a resource group using the following command. + # [Bash](#tab/bash) ```bash az group create \ --name "$RESOURCE_GROUP" \ --location "$LOCATION" ``` + # [PowerShell](#tab/powershell) + ```powershell + az group create ` + --name "$RESOURCE_GROUP" ` + --location "$LOCATION" + ``` ++ + 1. Create the Container Apps environment using the following command. 
+ # [Bash](#tab/bash) ```bash az containerapp env create \ --name "$ENVIRONMENT" \ The Azure Container Apps environment acts as a secure boundary around container --location "$LOCATION" ``` + # [PowerShell](#tab/powershell) + ```powershell + az containerapp env create ` + --name "$ENVIRONMENT" ` + --resource-group "$RESOURCE_GROUP" ` + --location "$LOCATION" + ``` ++ + ::: zone pivot="container-apps-jobs-self-hosted-ci-cd-github-actions" ## Create a GitHub repository for running a workflow To run a self-hosted runner, you need to create a personal access token (PAT) in 1. Define variables that are used to configure the runner and scale rule later. + # [Bash](#tab/bash) ```bash GITHUB_PAT="<GITHUB_PAT>" REPO_OWNER="<REPO_OWNER>" REPO_NAME="<REPO_NAME>" ``` + # [PowerShell](#tab/powershell) + ```powershell + $GITHUB_PAT="<GITHUB_PAT>" + $REPO_OWNER="<REPO_OWNER>" + $REPO_NAME="<REPO_NAME>" + ``` ++ + Replace the placeholders with the following values: | Placeholder | Value | To create a self-hosted runner, you need to build a container image that execute 1. Define a name for your container image and registry. + # [Bash](#tab/bash) ```bash CONTAINER_IMAGE_NAME="github-actions-runner:1.0" CONTAINER_REGISTRY_NAME="<CONTAINER_REGISTRY_NAME>" ``` + # [PowerShell](#tab/powershell) + ```powershell + $CONTAINER_IMAGE_NAME="github-actions-runner:1.0" + $CONTAINER_REGISTRY_NAME="<CONTAINER_REGISTRY_NAME>" + ``` ++ + Replace `<CONTAINER_REGISTRY_NAME>` with a unique name for creating a container registry. Container registry names must be *unique within Azure* and be from 5 to 50 characters in length containing numbers and lowercase letters only. 1. Create a container registry. + # [Bash](#tab/bash) ```bash az acr create \ --name "$CONTAINER_REGISTRY_NAME" \ To create a self-hosted runner, you need to build a container image that execute --admin-enabled true ``` + # [PowerShell](#tab/powershell) + ```powershell + az acr create ` + --name "$CONTAINER_REGISTRY_NAME" ` + --resource-group "$RESOURCE_GROUP" ` + --location "$LOCATION" ` + --sku Basic ` + --admin-enabled true + ``` ++ + 1. The Dockerfile for creating the runner image is available on [GitHub](https://github.com/Azure-Samples/container-apps-ci-cd-runner-tutorial/tree/main/github-actions-runner). Run the following command to clone the repository and build the container image in the cloud using the `az acr build` command. + # [Bash](#tab/bash) ```bash az acr build \ --registry "$CONTAINER_REGISTRY_NAME" \ To create a self-hosted runner, you need to build a container image that execute "https://github.com/Azure-Samples/container-apps-ci-cd-runner-tutorial.git" ``` + # [PowerShell](#tab/powershell) + ```powershell + az acr build ` + --registry "$CONTAINER_REGISTRY_NAME" ` + --image "$CONTAINER_IMAGE_NAME" ` + --file "Dockerfile.github" ` + "https://github.com/Azure-Samples/container-apps-ci-cd-runner-tutorial.git" + ``` ++ + The image is now available in the container registry. ## Deploy a self-hosted runner as a job You can now create a job that uses to use the container image. In this section, 1. Create a job in the Container Apps environment. + # [Bash](#tab/bash) ```bash az containerapp job create -n "$JOB_NAME" -g "$RESOURCE_GROUP" --environment "$ENVIRONMENT" \ --trigger-type Event \ --replica-timeout 300 \- --replica-retry-limit 0 \ + --replica-retry-limit 1 \ --replica-completion-count 1 \ --parallelism 1 \ --image "$CONTAINER_REGISTRY_NAME.azurecr.io/$CONTAINER_IMAGE_NAME" \ You can now create a job that uses to use the container image. 
In this section, --registry-server "$CONTAINER_REGISTRY_NAME.azurecr.io" ``` + # [PowerShell](#tab/powershell) + ```powershell + az containerapp job create -n "$JOB_NAME" -g "$RESOURCE_GROUP" --environment "$ENVIRONMENT" ` + --trigger-type Event ` + --replica-timeout 300 ` + --replica-retry-limit 1 ` + --replica-completion-count 1 ` + --parallelism 1 ` + --image "$CONTAINER_REGISTRY_NAME.azurecr.io/$CONTAINER_IMAGE_NAME" ` + --min-executions 0 ` + --max-executions 10 ` + --polling-interval 30 ` + --scale-rule-name "github-runner" ` + --scale-rule-type "github-runner" ` + --scale-rule-metadata "github-runner=https://api.github.com" "owner=$REPO_OWNER" "runnerScope=repo" "repos=$REPO_NAME" "targetWorkflowQueueLength=1" ` + --scale-rule-auth "personalAccessToken=personal-access-token" ` + --cpu "2.0" ` + --memory "4Gi" ` + --secrets "personal-access-token=$GITHUB_PAT" ` + --env-vars "GITHUB_PAT=secretref:personal-access-token" "REPO_URL=https://github.com/$REPO_OWNER/$REPO_NAME" "REGISTRATION_TOKEN_API_URL=https://api.github.com/repos/$REPO_OWNER/$REPO_NAME/actions/runners/registration-token" ` + --registry-server "$CONTAINER_REGISTRY_NAME.azurecr.io" + ``` ++ + The following table describes the key parameters used in the command. | Parameter | Description | To verify the job was configured correctly, you modify the workflow to use a sel 1. List the executions of the job to confirm a job execution was created and completed successfully. + # [Bash](#tab/bash) ```bash az containerapp job execution list \ --name "$JOB_NAME" \ To verify the job was configured correctly, you modify the workflow to use a sel --query '[].{Status: properties.status, Name: name, StartTime: properties.startTime}' ``` + # [PowerShell](#tab/powershell) + ```powershell + az containerapp job execution list ` + --name "$JOB_NAME" ` + --resource-group "$RESOURCE_GROUP" ` + --output table ` + --query '[].{Status: properties.status, Name: name, StartTime: properties.startTime}' + ``` ++ + ::: zone-end ::: zone pivot="container-apps-jobs-self-hosted-ci-cd-azure-pipelines" To run a self-hosted runner, you need to create a personal access token (PAT) in 1. Define variables that are used to configure the Container Apps jobs later. + # [Bash](#tab/bash) ```bash AZP_TOKEN="<AZP_TOKEN>" ORGANIZATION_URL="<ORGANIZATION_URL>" AZP_POOL="container-apps" ``` + # [PowerShell](#tab/powershell) + ```powershell + $AZP_TOKEN="<AZP_TOKEN>" + $ORGANIZATION_URL="<ORGANIZATION_URL>" + $AZP_POOL="container-apps" + ``` ++ + Replace the placeholders with the following values: | Placeholder | Value | Comments | To create a self-hosted agent, you need to build a container image that runs the 1. Back in your terminal, define a name for your container image and registry. + # [Bash](#tab/bash) ```bash CONTAINER_IMAGE_NAME="azure-pipelines-agent:1.0" CONTAINER_REGISTRY_NAME="<CONTAINER_REGISTRY_NAME>" ``` + # [PowerShell](#tab/powershell) + ```powershell + $CONTAINER_IMAGE_NAME="azure-pipelines-agent:1.0" + $CONTAINER_REGISTRY_NAME="<CONTAINER_REGISTRY_NAME>" + ``` ++ + Replace `<CONTAINER_REGISTRY_NAME>` with a unique name for creating a container registry. Container registry names must be *unique within Azure* and be from 5 to 50 characters in length containing numbers and lowercase letters only. 1. Create a container registry. 
+ # [Bash](#tab/bash) ```bash az acr create \ --name "$CONTAINER_REGISTRY_NAME" \ To create a self-hosted agent, you need to build a container image that runs the --admin-enabled true ``` + # [PowerShell](#tab/powershell) + ```powershell + az acr create ` + --name "$CONTAINER_REGISTRY_NAME" ` + --resource-group "$RESOURCE_GROUP" ` + --location "$LOCATION" ` + --sku Basic ` + --admin-enabled true + ``` ++ + 1. The Dockerfile for creating the runner image is available on [GitHub](https://github.com/Azure-Samples/container-apps-ci-cd-runner-tutorial/tree/main/azure-pipelines-agent). Run the following command to clone the repository and build the container image in the cloud using the `az acr build` command. + # [Bash](#tab/bash) ```bash az acr build \ --registry "$CONTAINER_REGISTRY_NAME" \ To create a self-hosted agent, you need to build a container image that runs the "https://github.com/Azure-Samples/container-apps-ci-cd-runner-tutorial.git" ``` + # [PowerShell](#tab/powershell) + ```powershell + az acr build ` + --registry "$CONTAINER_REGISTRY_NAME" ` + --image "$CONTAINER_IMAGE_NAME" ` + --file "Dockerfile.azure-pipelines" ` + "https://github.com/Azure-Samples/container-apps-ci-cd-runner-tutorial.git" + ``` ++ + The image is now available in the container registry. ## Create a placeholder self-hosted agent -Before you can run a self-hosted agent in your new agent pool, you need to create a placeholder agent. Pipelines that use the agent pool fail when there's no placeholder agent. You can create a placeholder agent by running a job that registers an offline placeholder agent. +Before you can run a self-hosted agent in your new agent pool, you need to create a placeholder agent. The placeholder agent ensures the agent pool is available. Pipelines that use the agent pool fail when there's no placeholder agent. ++You can run a manual job to register an offline placeholder agent. The job runs once and can be deleted. The placeholder agent doesn't consume any resources in Azure Container Apps or Azure DevOps. 1. Create a manual job in the Container Apps environment that creates the placeholder agent. + # [Bash](#tab/bash) ```bash az containerapp job create -n "$PLACEHOLDER_JOB_NAME" -g "$RESOURCE_GROUP" --environment "$ENVIRONMENT" \ --trigger-type Manual \ --replica-timeout 300 \- --replica-retry-limit 0 \ + --replica-retry-limit 1 \ --replica-completion-count 1 \ --parallelism 1 \ --image "$CONTAINER_REGISTRY_NAME.azurecr.io/$CONTAINER_IMAGE_NAME" \ Before you can run a self-hosted agent in your new agent pool, you need to creat --registry-server "$CONTAINER_REGISTRY_NAME.azurecr.io" ``` + # [PowerShell](#tab/powershell) + ```powershell + az containerapp job create -n "$PLACEHOLDER_JOB_NAME" -g "$RESOURCE_GROUP" --environment "$ENVIRONMENT" ` + --trigger-type Manual ` + --replica-timeout 300 ` + --replica-retry-limit 1 ` + --replica-completion-count 1 ` + --parallelism 1 ` + --image "$CONTAINER_REGISTRY_NAME.azurecr.io/$CONTAINER_IMAGE_NAME" ` + --cpu "2.0" ` + --memory "4Gi" ` + --secrets "personal-access-token=$AZP_TOKEN" "organization-url=$ORGANIZATION_URL" ` + --env-vars "AZP_TOKEN=secretref:personal-access-token" "AZP_URL=secretref:organization-url" "AZP_POOL=$AZP_POOL" "AZP_PLACEHOLDER=1" "AZP_AGENT_NAME=placeholder-agent" ` + --registry-server "$CONTAINER_REGISTRY_NAME.azurecr.io" + ``` ++ + The following table describes the key parameters used in the command. | Parameter | Description | Before you can run a self-hosted agent in your new agent pool, you need to creat 1. 
Execute the manual job to create the placeholder agent. + # [Bash](#tab/bash) ```bash az containerapp job start -n "$PLACEHOLDER_JOB_NAME" -g "$RESOURCE_GROUP" ``` + # [PowerShell](#tab/powershell) + ```powershell + az containerapp job start -n "$PLACEHOLDER_JOB_NAME" -g "$RESOURCE_GROUP" + ``` ++ + 1. List the executions of the job to confirm a job execution was created and completed successfully. + # [Bash](#tab/bash) ```bash az containerapp job execution list \ --name "$PLACEHOLDER_JOB_NAME" \ Before you can run a self-hosted agent in your new agent pool, you need to creat --query '[].{Status: properties.status, Name: name, StartTime: properties.startTime}' ``` + # [PowerShell](#tab/powershell) + ```powershell + az containerapp job execution list ` + --name "$PLACEHOLDER_JOB_NAME" ` + --resource-group "$RESOURCE_GROUP" ` + --output table ` + --query '[].{Status: properties.status, Name: name, StartTime: properties.startTime}' + ``` ++ + 1. Verify the placeholder agent was created in Azure DevOps. 1. In Azure DevOps, navigate to your project. 1. Select **Project settings** > **Agent pools** > **container-apps** > **Agents**.- 1. Confirm that a placeholder agent named `placeholder-agent` is listed. + 1. Confirm that a placeholder agent named `placeholder-agent` is listed and its status is offline. ++1. The job isn't needed again. You can delete it. + + # [Bash](#tab/bash) + ```bash + az containerapp job delete -n "$PLACEHOLDER_JOB_NAME" -g "$RESOURCE_GROUP" + ``` ++ # [PowerShell](#tab/powershell) + ```powershell + az containerapp job delete -n "$PLACEHOLDER_JOB_NAME" -g "$RESOURCE_GROUP" + ``` ++ ## Create a self-hosted agent as an event-driven job Now that you have a placeholder agent, you can create a self-hosted agent. In this section, you create an event-driven job that runs a self-hosted agent when a pipeline is triggered. +# [Bash](#tab/bash) ```bash az containerapp job create -n "$JOB_NAME" -g "$RESOURCE_GROUP" --environment "$ENVIRONMENT" \ --trigger-type Event \ --replica-timeout 300 \- --replica-retry-limit 0 \ + --replica-retry-limit 1 \ --replica-completion-count 1 \ --parallelism 1 \ --image "$CONTAINER_REGISTRY_NAME.azurecr.io/$CONTAINER_IMAGE_NAME" \ az containerapp job create -n "$JOB_NAME" -g "$RESOURCE_GROUP" --environment "$E --registry-server "$CONTAINER_REGISTRY_NAME.azurecr.io" ``` +# [PowerShell](#tab/powershell) +```powershell +az containerapp job create -n "$JOB_NAME" -g "$RESOURCE_GROUP" --environment "$ENVIRONMENT" \ + --trigger-type Event \ + --replica-timeout 300 \ + --replica-retry-limit 1 \ + --replica-completion-count 1 \ + --parallelism 1 \ + --image "$CONTAINER_REGISTRY_NAME.azurecr.io/$CONTAINER_IMAGE_NAME" \ + --min-executions 0 \ + --max-executions 10 \ + --polling-interval 30 \ + --scale-rule-name "azure-pipelines" \ + --scale-rule-type "azure-pipelines" \ + --scale-rule-metadata "poolName=container-apps" "targetPipelinesQueueLength=1" \ + --scale-rule-auth "personalAccessToken=personal-access-token" "organizationURL=organization-url" \ + --cpu "2.0" \ + --memory "4Gi" \ + --secrets "personal-access-token=$AZP_TOKEN" "organization-url=$ORGANIZATION_URL" \ + --env-vars "AZP_TOKEN=secretref:personal-access-token" "AZP_URL=secretref:organization-url" "AZP_POOL=$AZP_POOL" \ + --registry-server "$CONTAINER_REGISTRY_NAME.azurecr.io" +``` +++ The following table describes the scale rule parameters used in the command. | Parameter | Description | Now that you've configured a self-hosted agent job, you can run a pipeline and v 1. 
List the executions of the job to confirm a job execution was created and completed successfully. + # [Bash](#tab/bash) ```bash az containerapp job execution list \ --name "$JOB_NAME" \ Now that you've configured a self-hosted agent job, you can run a pipeline and v --query '[].{Status: properties.status, Name: name, StartTime: properties.startTime}' ``` + # [PowerShell](#tab/powershell) + ```powershell + az containerapp job execution list ` + --name "$JOB_NAME" ` + --resource-group "$RESOURCE_GROUP" ` + --output table ` + --query '[].{Status: properties.status, Name: name, StartTime: properties.startTime}' + ``` ++ + ::: zone-end > [!TIP] Once you're done, run the following command to delete the resource group that co >[!CAUTION] > The following command deletes the specified resource group and all resources contained within it. If resources outside the scope of this tutorial exist in the specified resource group, they will also be deleted. +# [Bash](#tab/bash) ```bash az group delete \ --resource-group $RESOURCE_GROUP ``` +# [PowerShell](#tab/powershell) +```powershell +az group delete ` + --resource-group $RESOURCE_GROUP +``` +++ To delete your GitHub repository, see [Deleting a repository](https://docs.github.com/en/github/administering-a-repository/managing-repository-settings/deleting-a-repository). ## Next steps |
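When you validate either runner setup from a script, it can help to wait for the most recent job execution to finish instead of re-running the list command by hand. The following is a small convenience sketch that assumes the `JOB_NAME` and `RESOURCE_GROUP` variables defined earlier in this tutorial are still set in your shell, and that the first entry returned by the list command is the most recent execution.

```bash
# Poll the latest job execution until it leaves the Running state.
# Uses only the 'az containerapp job execution list' command shown in this tutorial.
while true; do
  STATUS=$(az containerapp job execution list \
    --name "$JOB_NAME" \
    --resource-group "$RESOURCE_GROUP" \
    --query "[0].properties.status" \
    --output tsv)
  echo "Latest execution status: ${STATUS:-<none yet>}"
  if [ "$STATUS" = "Succeeded" ] || [ "$STATUS" = "Failed" ]; then
    break
  fi
  sleep 15
done
```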
cosmos-db | Synapse Link Time Travel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/synapse-link-time-travel.md | display(df) | `spark.cosmos.timetravel.ignoreTransactionalUserDeletes` | `FALSE` | Ignore the records the user deleted from the transactional store. Set this setting to `TRUE` if you would like to see records in the time travel result set that were deleted from the transactional store. | | `spark.cosmos.timetravel.fullFidelity` | `FALSE` | Set this setting to `TRUE` if you would like to access all versions of records (including intermediate updates) at a specific point in history. | +> [!IMPORTANT] +> All configuration settings are interpreted in the UTC time zone. + ## Limitations - Time Travel is only available for Azure Synapse Spark. |
cost-management-billing | Cost Analysis Common Uses | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/cost-analysis-common-uses.md | Each metric affects how data is shown for your reservation charges. **Actual cost** - Shows the purchase as it appears on your bill. For example, if you bought a one-year reservation for $1200 in January, cost analysis shows a $1200 cost in the month of January for the reservation. It doesn't show a reservation cost for other months of the year. If you group your actual costs by VM, then a VM that received the reservation benefit for a given month would have zero cost for the month. -**Amortized cost** - Shows a reservation purchase split as an amortized cost over the duration of the reservation term. Using the same example above, cost analysis shows a $100 cost for each month throughout the year, if you purchased a one-year reservation for $1200 in January. If you group costs by VM in this example, you'd see cost attributed to each VM that received the reservation benefit. +**Amortized cost** - Shows a reservation purchase split as an amortized cost over the duration of the reservation term. Using the same example above, cost analysis shows a varying cost for each month throughout the year, because of the varying number of days in a month. If you group costs by VM in this example, you'd see cost attributed to each VM that received the reservation benefit. ## View your reservation utilization |
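To make the varying monthly amounts concrete, here's the arithmetic for the same $1200, one-year reservation, assuming a 365-day term and no partial-day proration (the exact daily rate Cost Management uses may differ slightly):

$$
\text{daily rate} = \frac{\$1200}{365} \approx \$3.29,\qquad
\text{January (31 days)}: \; 31 \times \frac{\$1200}{365} \approx \$101.92,\qquad
\text{February (28 days)}: \; 28 \times \frac{\$1200}{365} \approx \$92.05
$$

The amortized amounts still sum to the $1200 purchase price over the full term; only the month-to-month split varies with the number of days in each month.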
cost-management-billing | Overview Cost Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/overview-cost-management.md | + + Title: Overview of Cost Management ++description: You use Cost Management features to monitor and control Azure spending and to optimize Azure resource use. +keywords: +++ Last updated : 06/12/2023++++++# What is Microsoft Cost Management ++Microsoft Cost Management is a suite of FinOps tools that help organizations analyze, monitor, and optimize their Microsoft Cloud costs. Cost Management is available to anyone with access to a billing account, subscription, resource group, or management group. You can access Cost Management within the billing and resource management experiences or separately as a standalone tool optimized for FinOps teams who manage cost across multiple scopes. You can also automate and extend native capabilities or enrich your own tools and processes with cost to maximize organizational visibility and accountability with all stakeholders and realize your optimization and efficiency goals faster. ++A few examples of what you can do in Cost Management include: ++- Report on and analyze costs in the Azure portal, Microsoft 365 admin center, or Power BI. +- Monitor costs proactively with budget, anomaly, reservation utilization, and scheduled alerts. +- Enable tag inheritance and split shared costs with cost allocation rules. +- Automate business processes or integrate cost into external tools by exporting data. ++## How charges are processed ++To understand how Cost Management works, you should first understand the Commerce system. At its core, Microsoft Commerce is a data pipeline that underpins all Microsoft commercial transactions, whether consumer or commercial. While there are many inputs and connections to this pipeline, like the sign-up and Marketplace purchase experiences, this article focuses on the components that help you monitor, allocate, and optimize your costs. +++From the left, your Azure, Microsoft 365, Dynamics 365, and Power Platform services are all pushing data into the Commerce data pipeline. Each service publishes data on a different cadence. In general, if data for one service is slower than another, it's due to how frequently those services are publishing their usage and charges. ++As the data makes its way through the pipeline, the rating system applies discounts based on your specific price sheet and generates "rated usage," which includes price and quantity for each cost record. It's the basis for what you see in Cost Management and it's covered later. At the end of the month, credits are applied and the invoice is published. This process starts 72 hours after your billing period ends, which is usually the last day of the calendar month for most accounts. For example, if your billing period ends on March 31, charges will be finalized on April 4 at midnight. ++>[!IMPORTANT] +>Credits are applied like a gift card or other payment instrument before the invoice is generated. While credit status is tracked as new charges flow into the data pipeline, credits aren't explicitly applied to these charges until the end of the month. ++Everything up to this point makes up the billing process where charges are finalized, discounts are applied, and invoices are published. Billing account and billing profile owners may be familiar with this process as part of the Billing experience within the Azure portal or Microsoft 365 admin center. 
The Billing experience allows you to review credits, manage your billing address and payment methods, pay invoices, and more – everything related to managing your billing relationship with Microsoft. ++- The [anomaly detection](../understand/analyze-unexpected-charges.md) model identifies anomalies daily based on normalized usage (not rated usage). +- The cost allocation engine applies tag inheritance and [splits shared costs](allocate-costs.md). +- AWS cost and usage reports are pulled based on any [connectors for AWS](aws-integration-manage.md) you may have configured. +- Azure Advisor cost recommendations are pulled in to enable cost savings insights for subscriptions and resource groups. +- Cost alerts are sent out for [budgets](tutorial-acm-create-budgets.md), [anomalies](../understand/analyze-unexpected-charges.md#create-an-anomaly-alert), [scheduled alerts](save-share-views.md#subscribe-to-scheduled-alerts), and more based on the configured settings. ++Lastly, cost details are made available from [cost analysis](quick-acm-cost-analysis.md) in the Azure portal and published to your storage account via [scheduled exports](tutorial-export-acm-data.md). ++## How Cost Management and Billing relate ++[Cost Management](https://portal.azure.com/#view/Microsoft_Azure_CostManagement/Menu) is a set of FinOps tools that enable you to analyze, manage, and optimize your costs. ++[Billing](https://portal.azure.com/#view/Microsoft_Azure_GTM/ModernBillingMenuBlade) provides all the tools you need to manage your billing account and pay invoices. ++While Cost Management is available from within the Billing experience, Cost Management is also available from every subscription, resource group, and management group in the Azure portal to ensure everyone has full visibility into the costs they're responsible for and can optimize their workloads to maximize efficiency. Cost Management is also available independently to streamline the process for managing cost across multiple billing accounts, subscriptions, resource groups, and/or management groups. +++## What data is included in Cost Management and Billing? ++Within the Billing experience, you can manage all the products, subscriptions, and recurring purchases you use; review your credits and commitments; and view and pay your invoices. Invoices are available online or as PDFs and include all billed charges and any applicable taxes. Credits are applied to the total invoice amount when invoices are generated. This invoicing process happens in parallel to Cost Management data processing, which means Cost Management doesn't include credits, taxes, and some purchases, like support charges in non-MCA accounts. ++Classic Cloud Solution Provider (CSP) and sponsorship subscriptions aren't supported in Cost Management. These subscriptions will be supported after they transition to Microsoft Customer Agreement. ++For more information about supported offers, what data is included, or how data is refreshed and retained in Cost Management, see [Understand Cost Management data](understand-cost-mgt-data.md). ++## Estimate your cloud costs ++During your cloud journey, there are many tools available to help you understand pricing: ++- The [Total Cost of Ownership (TCO) calculator](https://azure.microsoft.com/pricing/tco/calculator/) should be your first stop if you're curious about how much it would cost to move your existing on-premises infrastructure to the cloud. 
+- [Azure Migrate](https://azure.microsoft.com/products/azure-migrate/) is a free tool that helps you analyze your on-premises workloads and plan your cloud migration. +- The [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) helps you estimate the cost of creating new or expanding existing deployments. In this tool, you're able to explore various configurations of many different Azure services as you identify which SKUs and how much usage keeps you within your desired price range. For more information, see the pricing details for each of the services you use. +- The [Virtual Machine Selector Tool](https://azure.microsoft.com/pricing/vm-selector/) is your one-stop-shop for finding the best VMs for your intended solution. +- The [Azure Hybrid Benefit savings calculator](https://azure.microsoft.com/pricing/hybrid-benefit/#calculator) helps you estimate the savings of using your existing Windows Server and SQL Server licenses on Azure. ++## Report on and analyze costs ++Cost Management and Billing include several tools to help you understand, report on, and analyze your invoiced Microsoft Cloud and AWS costs. ++- [**Cost analysis**](quick-acm-cost-analysis.md) is a tool for ad-hoc cost exploration. Get quick answers with lightweight insights and analytics. +- **Power BI** is an advanced solution to build more extensive dashboards and complex reports or combine costs with other data. Power BI is available for billing accounts and billing profiles. +- [**Exports and the Cost Details API**](../automate/usage-details-best-practices.md) enable you to integrate cost details into external systems or business processes. +- **Connectors for AWS** enable you to ingest your AWS cost details into Azure to facilitate managing Azure and AWS costs together. Once configured, the connector also enables other capabilities, like budget and scheduled alerts. ++For more information, see [Get started with reporting](reporting-get-started.md). ++## Organize and allocate costs ++Organizing and allocating costs are critical to ensuring invoices are routed to the correct business units and can be further split for internal billing, also known as *chargeback*. The first step to allocating cloud costs is organizing subscriptions and resources in a way that facilitates natural reporting and chargeback. Microsoft offers the following options to organize resources and subscriptions: ++- MCA **billing profiles** and **invoice sections** are used to [group subscriptions into invoices](../manage/mca-section-invoice.md). Each billing profile represents a separate invoice that can be billed to a different business unit and each invoice section is segmented separately within those invoices. You can also view costs by billing profile or invoice section in cost analysis. +- EA **departments** and **enrollment accounts** are conceptually similar to invoice sections, as groups of subscriptions, but they aren't represented within the invoice PDF. They're included within the cost details backing each invoice, however. You can also view costs by department or enrollment account in cost analysis. +- **Management groups** also allow grouping subscriptions together, but offer a few key differences: + - Management group access is inherited down to the subscriptions and resources. + - Management groups can be layered into multiple levels and subscriptions can be placed at any level. + - Management groups aren't included in cost details. 
+ - All historical costs are returned for management groups based on the subscriptions currently within that hierarchy. When a subscription moves, all historical cost moves. + - Azure Policy supports management groups and they can have rules assigned to automate compliance reporting for your cost governance strategy. +- **Subscriptions** and **resource groups** are the lowest level at which you can organize your cloud solutions. At Microsoft, every product – sometimes even limited to a single region – is managed within its own subscription. It simplifies cost governance but requires more overhead for subscription management. Most organizations use subscriptions for business units and separating dev/test from production or other environments, then use resource groups for the products. It complicates cost management because resource group owners don't have a way to manage cost across resource groups. On the other hand, it's a straightforward way to understand who's responsible for most resource-based charges. Keep in mind that not all charges come from resources and some don't have resource groups or subscriptions associated with them. It also changes as you move to MCA billing accounts. +- **Resource tags** are the only way to add your own business context to cost details and are perhaps the most flexible way to map resources to applications, business units, environments, owners, etc. For more information, see [How tags are used in cost and usage data](understand-cost-mgt-data.md#how-tags-are-used-in-cost-and-usage-data) for limitations and important considerations. ++Once your resources and subscriptions are organized using the subscription hierarchy and have the necessary metadata (tags) to facilitate further allocation, use the following tools in Cost Management to streamline cost reporting: ++- [Tag inheritance](enable-tag-inheritance.md) simplifies the application of tags by copying subscription and resource group tags down to the resources in cost data. These tags aren't saved on the resources themselves. The change only happens within Cost Management and isn't available to other services, like Azure Policy. +- [Cost allocation](allocate-costs.md) offers the ability to "move" or split shared costs from one subscription, resource group, or tag to another subscription, resource group, or tag. Cost allocation doesn't change the invoice. The goal of cost allocation is to reduce overhead and more accurately report on where charges are ultimately coming from (albeit indirectly), which should drive more complete accountability. ++How you organize and allocate costs plays a huge role in how people within your organization can manage and optimize costs. Be sure to plan ahead and revisit your allocation strategy yearly. +++## Monitor costs with alerts ++Cost Management and Billing offer many different types of emails and alerts to keep you informed and help you proactively manage your account and incurred costs. ++- [**Budget alerts**](tutorial-acm-create-budgets.md) notify recipients when cost exceeds a predefined cost or forecast amount. Budgets can be visualized in cost analysis and are available on every scope supported by Cost Management. Subscription and resource group budgets can also be configured to notify an action group to take automated actions to reduce or even stop further charges. +- [**Anomaly alerts**](../understand/analyze-unexpected-charges.md) notify recipients when an unexpected change in daily usage has been detected. It can be a spike or a dip. 
Anomaly detection is only available for subscriptions and can be viewed within the cost analysis preview. Anomaly alerts can be configured from the cost alerts page. +- [**Scheduled alerts**](save-share-views.md#subscribe-to-scheduled-alerts) notify recipients about the latest costs on a daily, weekly, or monthly schedule based on a saved cost view. Alert emails include a visual chart representation of the view and can optionally include a CSV file. Views are configured in cost analysis, but recipients don't require access to cost in order to view the email, chart, or linked CSV. +- **EA commitment balance alerts** are automatically sent to any notification contacts configured on the EA billing account when the balance is 90% or 100% used. +- **Invoice alerts** can be configured for MCA billing profiles and Microsoft Online Services Program (MOSP) subscriptions. For details, see [View and download your Azure invoice](../understand/download-azure-invoice.md). ++For more information, see [Monitor usage and spending with cost alerts](cost-mgt-alerts-monitor-usage-spending.md). ++## Optimize costs ++Microsoft offers a wide range of tools for optimizing your costs. Some of these tools are available outside the Cost Management and Billing experience, but are included for completeness. ++- There are many [**free services**](https://azure.microsoft.com/pricing/free-services/) available in Azure. Be sure to pay close attention to the constraints. Different services are free indefinitely, for 12 months, or 30 days. Some are free up to a specific amount of usage and some may have dependencies on other services that aren't free. +- [**Azure Advisor cost recommendations**](tutorial-acm-opt-recommendations.md) should be your first stop when interested in optimizing existing resources. Advisor recommendations are updated daily and are based on your usage patterns. Advisor is available for subscriptions and resource groups. Management group users can also see recommendations but they need to select the desired subscriptions. Billing users can only see recommendations for subscriptions they have resource access to. +- [**Azure savings plans**](../savings-plan/index.yml) save you money when you have consistent usage of Azure compute resources. A savings plan can significantly reduce your resource costs by up to 65% from pay-as-you-go prices. +- [**Azure reservations**](https://azure.microsoft.com/reservations/) help you save up to 72% compared to pay-as-you-go rates by precommitting to specific usage amounts for a set time duration. +- [**Azure Hybrid Benefit**](https://azure.microsoft.com/pricing/hybrid-benefit/) helps you significantly reduce costs by using on-premises Windows Server and SQL Server licenses or RedHat and SUSE Linux subscriptions on Azure. ++For other options, see [Azure benefits and incentives](https://azure.microsoft.com/pricing/offers/#cloud). ++## Next steps ++For other options, see [Azure benefits and incentives](https://azure.microsoft.com/pricing/offers/#cloud). |
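To see one of the monitoring features described above in practice, here's a minimal sketch of creating a monthly cost budget from the Azure CLI. It assumes the `az consumption budget` commands are available in your CLI version and that the budget is scoped to the current subscription; the name, amount, and dates are placeholders.

```bash
# Sketch: create a $500 monthly cost budget on the current subscription.
# The start date should fall on the first day of a month.
az consumption budget create \
  --budget-name "monthly-team-budget" \
  --amount 500 \
  --category cost \
  --time-grain monthly \
  --start-date 2023-07-01 \
  --end-date 2024-06-30
```

After the budget exists, budget alerts for it can be configured and viewed in Cost Management as described in the Monitor costs with alerts section above.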
cost-management-billing | Permission View Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/permission-view-manage.md | If you're a billing administrator, use the following steps to view and manage all sa - If you're a Microsoft Customer Agreement billing profile owner, in the left menu, select **Billing profiles**. In the list of billing profiles, select one. 1. In the left menu, select **Products + services** > **Savings plans**. The complete list of savings plans for your EA enrollment or billing profile is shown.-1. Billing administrators can take ownership of a savings plan by selecting one or multiple savings plans, selecting **Grant access** and selecting **Grant access** in the window that appears. +1. Billing administrators can take ownership of a savings plan by using the [Savings Plan Order - Elevate REST API](/rest/api/billingbenefits/savings-plan-order/elevate) to grant themselves Azure RBAC roles. ### Adding billing administrators |
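As a concrete sketch of the API call referenced above, a billing administrator could invoke the Elevate operation through `az rest`. The request path and `api-version` shown here are assumptions based on the Billing Benefits REST reference linked in that step; confirm both against the reference before use, and replace the order ID placeholder.

```bash
# Sketch: grant yourself an Azure RBAC role on a savings plan order (Elevate operation).
# The savings plan order ID is the GUID of the order you want to take ownership of.
SAVINGS_PLAN_ORDER_ID="<savings-plan-order-id>"

az rest --method post \
  --url "https://management.azure.com/providers/Microsoft.BillingBenefits/savingsPlanOrders/$SAVINGS_PLAN_ORDER_ID/elevate?api-version=2022-11-01"
```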
databox | Data Box Disk System Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-system-requirements.md | Title: Microsoft Azure Data Box Disk system requirements| Microsoft Docs -description: Learn about the software and networking requirements for your Azure Data Box Disk +description: Learn about the software and networking requirements for your Azure Data Box Disk The client computer containing the data must have a USB 3.0 or later port. The d ## Supported storage accounts +> [!Note] +> Classic storage accounts will not be supported starting **August 1, 2023**. + Here is a list of the supported storage types for the Data Box Disk. | **Storage account** | **Supported access tiers** | |
databox | Data Box Heavy System Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-heavy-system-requirements.md | The software requirements include the information on the supported operating sys ### Supported storage accounts +> [!Note] +> Classic storage accounts will not be supported starting **August 1, 2023**. + [!INCLUDE [data-box-supported-storage-accounts](../../includes/data-box-supported-storage-accounts.md)] ### Supported storage types |
databox | Data Box System Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-system-requirements.md | Title: Microsoft Azure Data Box system requirements| Microsoft Docs -description: Learn about important system requirements for your Azure Data Box and for clients that connect to the Data Box. +description: Learn about important system requirements for your Azure Data Box and for clients that connect to the Data Box. The software requirements include supported operating systems, file transfer pro ### Supported storage accounts +> [!Note] +> Classic storage accounts will not be supported starting **August 1, 2023**. + [!INCLUDE [data-box-supported-storage-accounts](../../includes/data-box-supported-storage-accounts.md)] ### Supported storage types |
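Because the same classic storage account note applies to Data Box, Data Box Disk, and Data Box Heavy, it can be useful to check whether any of your existing storage accounts are classic before that date. One way to sketch this check with the Azure CLI is to list resources from the classic storage resource provider; the command assumes you have at least Reader access on the subscription.

```bash
# Sketch: list classic (Microsoft.ClassicStorage) storage accounts in the current subscription.
# Any accounts returned here would be affected once classic storage accounts are no longer supported.
az resource list \
  --resource-type "Microsoft.ClassicStorage/storageAccounts" \
  --query "[].{Name:name, ResourceGroup:resourceGroup, Location:location}" \
  --output table
```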
ddos-protection | Ddos Protection Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-overview.md | Azure DDoS Protection, combined with application design best practices, provides :::image type="content" source="./media/ddos-best-practices/ddos-protection-overview-architecture.png" alt-text="Diagram of the reference architecture for a DDoS protected PaaS web application.":::  +Azure DDoS Protection provides protection at network layers 3 and 4. For web application protection at layer 7, you need to add protection at the application layer by using a WAF offering. For more information, see [Application DDoS protection](../web-application-firewall/shared/application-ddos-protection.md). + ## Key benefits ### Always-on traffic monitoring |
ddos-protection | Ddos Protection Reference Architectures | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-reference-architectures.md | -> [!NOTE] -> Protected resources include public IPs attached to an IaaS VM (except for single VM running behind a public IP), Load Balancer (Classic & Standard Load Balancers), Application Gateway (including WAF) cluster, Firewall, Bastion, VPN Gateway, Service Fabric, IaaS based Network Virtual Appliance (NVA) or Azure API Management (Premium tier only), connected to a virtual network (VNet) in the external mode. Protection also covers public IP ranges brought to Azure via Custom IP Prefixes (BYOIPs). PaaS services (multi-tenant), which includes Azure App Service Environment for Power Apps, Azure API Management in deployment modes other than those supported above, or Azure Virtual WAN are not supported at present. +## Protected Resources ++Supported resources include: +* Public IPs attached to: + * An IaaS virtual machine. + * Application Gateway (including WAF) cluster. + * Azure API Management (Premium tier only), connected to a virtual network (VNet) in the external mode. + * Bastion. + * Firewall. + * IaaS-based Network Virtual Appliance (NVA). + * Load Balancer (Classic & Standard Load Balancers). + * Service Fabric. + * VPN Gateway. +* Protection also covers public IP ranges brought to Azure via Custom IP Prefixes (BYOIPs). ++ +Unsupported resources include: ++* Azure Virtual WAN. +* Azure API Management in deployment modes other than the supported modes. +* PaaS services (multi-tenant) including Azure App Service Environment for Power Apps. +* Protected resources that include public IPs created from public IP address prefix. + > [!NOTE]-> Protected resources that include public IPs created from public IP address prefix are not supported at present. +> For web workloads, we highly recommend utilizing [**Azure DDoS protection**](../ddos-protection/ddos-protection-overview.md) and a [**web application firewall**](../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to deploy [**Azure Front Door**](../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [**protection against network-level DDoS attacks**](../frontdoor/front-door-ddos.md). ## Virtual machine (Windows/Linux) workloads |
ddos-protection | Manage Ddos Ip Protection Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-ip-protection-portal.md | |
ddos-protection | Manage Ddos Protection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection.md | A DDoS protection plan defines a set of virtual networks that have DDoS Network In this quickstart, you'll create a DDoS protection plan and link it to a virtual network. - ## Prerequisites - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. |
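For readers who prefer the CLI over the portal quickstart described above, the following is a minimal sketch of creating a DDoS protection plan and linking it to an existing virtual network. The resource names are placeholders, and the commands assume the `az network ddos-protection` and `az network vnet update` options available in current CLI versions; the plan can also be passed by full resource ID if name resolution fails.

```bash
# Sketch: create a DDoS protection plan and enable it on an existing virtual network.
RESOURCE_GROUP="MyResourceGroup"
PLAN_NAME="MyDdosProtectionPlan"
VNET_NAME="MyVnet"

# Create the DDoS protection plan.
az network ddos-protection create \
  --resource-group "$RESOURCE_GROUP" \
  --name "$PLAN_NAME"

# Link the plan to the virtual network and turn DDoS protection on.
az network vnet update \
  --resource-group "$RESOURCE_GROUP" \
  --name "$VNET_NAME" \
  --ddos-protection-plan "$PLAN_NAME" \
  --ddos-protection true
```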
defender-for-iot | How To Work With The Sensor Device Map | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-the-sensor-device-map.md | By default, IT devices are automatically aggregated by [subnet](how-to-control-w 1. Sign into your OT sensor and select **Device map**. 1. Select one or more expanded subnets and then select **Collapse All**. +### View traffic details between connected devices ++**To view traffic details between connected devices**: ++1. Sign into your OT sensor and select **Device map**. +1. Locate two connected devices on the map. You might need to zoom in on the map to view a device icon, which looks like a monitor. +1. Click on the line connecting two devices on the map and then :::image type="icon" source="media/how-to-work-with-maps/expand-pane-icon.png" border="false"::: expand the **Connection Properties** pane on the right. For example: ++ :::image type="content" source="media/how-to-work-with-maps/connection-properties.png" alt-text="Screenshot of connection properties on the device map." lightbox="media/how-to-work-with-maps/connection-properties.png"::: ++1. In the **Connection Properties** pane, you can view traffic details between the two devices, such as: ++ - How long ago the connection was first detected. + - The IP address of each device. + - The status of each device. + - The number of alerts for each device. + - A chart for total bandwidth. + - A chart for top traffic by port. + ## Create a custom device group In addition to OT sensor's [built-in device groups](#built-in-device-map-groups), create new custom groups as needed to use when highlighting or filtering devices on the map. |
devtest-labs | Configure Lab Remote Desktop Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/configure-lab-remote-desktop-gateway.md | Follow these steps to set up a sample remote desktop gateway farm. |`signCertificate` |**Required** |The Base64 encoding for the signing certificate for the gateway machine. | |`signCertificatePassword` |**Required** |The password for the signing certificate for the gateway machine. | |`signCertificateThumbprint` |**Required** |The certificate thumbprint for identification in the local certificate store of the signing certificate. |- |`_artifactsLocation` |**Required** |The URI location to find artifacts this template requires. This value must be a fully qualified URI, not a relative path. The artifacts include other templates, PowerShell scripts, and the Remote Desktop Gateway Pluggable Authentication module, expected to be named *RDGatewayFedAuth.msi*, that supports token authentication. | + |`_artifactsLocation` |**Required** |The URI location to find artifacts this template requires. This value must be a fully qualified URI, not a relative path. The artifacts include other templates, PowerShell scripts, and the Remote Desktop Gateway Pluggable Authentication module, expected to be named *RDGatewayFedAuth.msi* that supports token authentication. | |`_artifactsLocationSasToken`|**Required** |The shared access signature (SAS) token to access artifacts, if the `_artifactsLocation` is an Azure storage account. | 1. Run the following Azure CLI command to deploy *azuredeploy.json*: Once you configure both the gateway and the lab, the RDP connection file created ### Automate lab configuration -- Powershell: [Set-DevTestLabGateway.ps1](https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/GatewaySample/tools/Set-DevTestLabGateway.ps1) is a sample PowerShell script to automatically set **Gateway hostname** and **Gateway token secret** settings.+- PowerShell: [Set-DevTestLabGateway.ps1](https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/GatewaySample/tools/Set-DevTestLabGateway.ps1) is a sample PowerShell script to automatically set **Gateway hostname** and **Gateway token secret** settings. - ARM: Use the [Gateway sample ARM templates](https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/GatewaySample/arm/lab) in the Azure DevTest Labs GitHub repository to create or update labs with **Gateway hostname** and **Gateway token secret** settings. |
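The row above references deploying *azuredeploy.json* with the Azure CLI. As a minimal sketch of what that deployment command might look like with the parameters listed in the table, assuming a resource group deployment and placeholder values throughout (the template may require additional parameters, such as administrator credentials, that aren't shown in this excerpt):

```bash
# Sketch: deploy the remote desktop gateway farm template to a resource group.
# The parameter names follow the table above; all values shown are placeholders.
az deployment group create \
  --resource-group "MyGatewayResourceGroup" \
  --template-file azuredeploy.json \
  --parameters \
      signCertificate="<base64-encoded-signing-certificate>" \
      signCertificatePassword="<certificate-password>" \
      signCertificateThumbprint="<certificate-thumbprint>" \
      _artifactsLocation="https://<storage-account>.blob.core.windows.net/<container>/" \
      _artifactsLocationSasToken="<sas-token>"
```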
devtest-labs | Connect Virtual Machine Through Browser | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/connect-virtual-machine-through-browser.md | |
devtest-labs | Devtest Lab Add Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-add-vm.md | |
devtest-labs | Devtest Lab Attach Detach Data Disk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-attach-detach-data-disk.md | |
devtest-labs | Devtest Lab Auto Shutdown | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-auto-shutdown.md | |
devtest-labs | Devtest Lab Auto Startup Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-auto-startup-vm.md | |
devtest-labs | Devtest Lab Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-concepts.md | |
devtest-labs | Devtest Lab Create Lab | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-create-lab.md | |
devtest-labs | Devtest Lab Guidance Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-guidance-get-started.md | |
devtest-labs | Devtest Lab Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-overview.md | |
devtest-labs | Devtest Lab Troubleshoot Apply Artifacts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-troubleshoot-apply-artifacts.md | description: Troubleshoot issues with applying artifacts on an Azure DevTest Lab Previously updated : 03/31/2022 Last updated : 06/15/2023+ # Troubleshoot issues applying artifacts on DevTest Labs virtual machines You can troubleshoot artifact failures from the Azure portal or from the VM wher ## Troubleshoot artifact failures from the Azure portal -If you can't apply an artifact to a VM, first check the following in the Azure portal: +If you can't apply an artifact to a VM, first check the following items in the Azure portal: - Make sure that the VM is running. - Navigate to the **Artifacts** page for the lab VM to make sure the VM is ready for applying artifacts. If the Apply artifacts feature isn't available, you see a message at the top of the page. An artifact can stop responding, and finally appear as **Failed**. To investigat 1. On your lab **Overview** page, from the list under **My virtual machines**, select the VM that has the artifact you want to investigate. 1. On the VM **Overview** page, select **Artifacts** in the left navigation. The **Artifacts** page lists artifacts associated with the VM, and their status. -  + :::image type="content" source="media/devtest-lab-troubleshoot-apply-artifacts/artifact-list.png" alt-text="Screenshot showing the list of artifacts and their status."::: 1. Select the artifact that shows a **Failed** status. The artifact opens with an extension message that includes details about the artifact failure. -  + :::image type="content" source="media/devtest-lab-troubleshoot-apply-artifacts/artifact-failure.png" alt-text="Screenshot of the error message for a failed artifact."::: ### Inspect the Activity logs Select the failed entry to see the error details. On the failure page, select ** ### Investigate the private artifact repository and lab storage account -When DevTest Labs applies an artifact, it reads the artifact configuration and files from connected repositories. By default, DevTest Labs has access to the DevTest Labs [public Artifact repository](https://github.com/Azure/azure-devtestlab/tree/master/Artifacts). You can also connect a lab to a private repository to access custom artifacts. If a custom artifact fails to install, make sure the personal access token (PAT) for the private repository hasn't expired. If the PAT is expired, the artifact won't be listed, and any scripts that refer to artifacts from that repository will fail. +When DevTest Labs applies an artifact, it reads the artifact configuration and files from connected repositories. By default, DevTest Labs has access to the DevTest Labs [public Artifact repository](https://github.com/Azure/azure-devtestlab/tree/master/Artifacts). You can also connect a lab to a private repository to access custom artifacts. If a custom artifact fails to install, make sure the personal access token (PAT) for the private repository hasn't expired. If the PAT is expired, the artifact won't be listed, and any scripts that refer to artifacts from that repository fail. Depending on configuration, lab VMs might not have direct access to the artifact repository. DevTest Labs caches the artifacts in a lab storage account that's created when the lab first initializes. 
If access to this storage account is blocked, such as when traffic is blocked from the VM to the Azure Storage service, you might see an error similar to this: To troubleshoot connectivity issues to the Azure Storage account: 1. Navigate to the lab's resource group. 1. Locate the resource of type **Storage account** whose name matches the convention.- 1. On the storage account **Overview** page, select **Firewalls and virtual networks** in the left navigation. - 1. Ensure that **Firewalls and virtual networks** is set to **All networks**. Or, if the **Selected networks** option is selected, make sure the lab's virtual networks used to create VMs are added to the list. + 1. On the storage account **Overview** page, select **Networking** in the left navigation. + 1. On the **Firewalls and virtual networks** tab, ensure that **Public network access** is set to **Enabled from all networks**. Or, if the **Enabled from selected virtual networks and IP addresses** option is selected, make sure the lab's virtual networks used to create VMs are added to the list. For in-depth troubleshooting, see [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md). You can connect to the lab VM where the artifact failed, and investigate the iss 1. On the lab VM, go to *C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\\*1.10.12\*\\Status\\*, where *\*1.10.12\** is the CSE version number. -  + :::image type="content" source="media/devtest-lab-troubleshoot-apply-artifacts/status-folder.png" alt-text="Screenshot of the Status folder on the lab V M."::: 1. Open and inspect the *STATUS* file to view the error. For general information about Azure extensions, see [Azure virtual machine exten The artifact installation could fail because of the way the artifact installation script is authored. For example: -- The script has mandatory parameters, but fails to pass a value, either by allowing the user to leave it blank, or because there's no default value in the *artifactfile.json* definition file. The script stops responding because it's awaiting user input.+- The script has mandatory parameters but fails to pass a value, either by allowing the user to leave it blank, or because there's no default value in the *artifactfile.json* definition file. The script stops responding because it's awaiting user input. - The script requires user input as part of execution. Scripts should work silently without requiring user intervention. If you need more help, try one of the following support channels: - Contact the Azure DevTest Labs experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/forums/). - Get answers from Azure experts through [Azure Forums](https://azure.microsoft.com/support/forums). - Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.-- Go to the [Azure support site](https://azure.microsoft.com/support/options) and select **Get Support** to file an Azure support incident.+- Go to the [Azure support site](https://azure.microsoft.com/support/options) and select **Submit a support ticket** to file an Azure support incident. |
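
The storage networking check described in the row above (the lab storage account's **Networking** settings) can also be scripted. A minimal sketch, assuming placeholder resource names (your lab's storage account follows the lab naming convention mentioned in the steps):

```bash
# Inspect the current network access configuration of the lab storage account.
az storage account show \
  --name <lab-storage-account> \
  --resource-group <lab-resource-group> \
  --query "{publicNetworkAccess:publicNetworkAccess, defaultAction:networkRuleSet.defaultAction}"

# Option 1: allow access from all networks (matches "Enabled from all networks" in the portal).
az storage account update \
  --name <lab-storage-account> \
  --resource-group <lab-resource-group> \
  --public-network-access Enabled \
  --default-action Allow

# Option 2: keep selected networks and add the lab's virtual network subnet instead.
az storage account network-rule add \
  --account-name <lab-storage-account> \
  --resource-group <lab-resource-group> \
  --vnet-name <lab-virtual-network> \
  --subnet <lab-subnet>
```
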
devtest-labs | Devtest Lab Use Resource Manager Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-use-resource-manager-template.md | You can customize and use an ARM template from any Azure VM base to deploy more 1. On the **Advanced Settings** tab, select **View ARM template**. 1. Copy and [save the ARM template](#store-arm-templates-in-git-repositories) to use for creating more VMs. -  + :::image type="content" source="media/devtest-lab-use-arm-template/devtestlab-lab-copy-rm-template.png" alt-text="Screenshot that shows an ARM template to save for later use."::: 1. If you want to create an instance of the VM now, on the **Basic Settings** tab, select **Create**. Use the following file structure to store an ARM template in a source control re - To reuse the ARM template, you need to update the `parameters` section of *azuredeploy.json*. You can create a *parameter.json* file that customizes just the parameters, without having to edit the main template file. Name this parameter file *azuredeploy.parameters.json*. -  + :::image type="content" source="media/devtest-lab-use-arm-template/devtestlab-lab-custom-params.png" alt-text="Customize parameters using a JSON file."::: In the parameters file, you can use the parameters `_artifactsLocation` and `_artifactsLocationSasToken` to construct a `parametersLink` URI value for automatically managing nested templates. For more information about nested templates, see [Deploy nested Azure Resource Manager templates for testing environments](deploy-nested-template-environments.md). Use the following file structure to store an ARM template in a source control re The following screenshot shows a typical ARM template folder structure in a repository. - ## Add template repositories to labs Add your template repositories to your lab so all lab users can access the templ 1. To add your private ARM template repository to the lab, select **Add** in the top menu bar. -  + :::image type="content" source="media/devtest-lab-create-environment-from-arm/public-repo.png" alt-text="Screenshot that shows the Repositories configuration screen."::: 1. In the **Repositories** pane, enter the following information: Add your template repositories to your lab so all lab users can access the templ 1. Select **Save**. -  + :::image type="content" source="media/devtest-lab-create-environment-from-arm/repo-values.png" alt-text="Screenshot that shows adding a new template repository to a lab."::: The repository now appears in the **Repositories** list for the lab. Users can now use the repository templates to [create multi-VM DevTest Labs environments](devtest-lab-create-environment-from-arm.md). Lab administrators can use the templates to [automate lab deployment and management tasks](devtest-lab-use-arm-and-powershell-for-lab-resources.md#arm-template-automation). |
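
To illustrate the *azuredeploy.parameters.json* approach described in the row above, here's a minimal, hypothetical sketch; the parameter names (`newVMName`, `userName`) are examples only and must match whatever your own template defines:

```bash
# Hypothetical parameters file; replace the parameter names and values with the ones your template expects.
cat > azuredeploy.parameters.json <<'EOF'
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "newVMName": { "value": "labvm01" },
    "userName": { "value": "labuser" }
  }
}
EOF

# Deploy the main template together with the customized parameters file.
az deployment group create \
  --resource-group <resource-group> \
  --template-file azuredeploy.json \
  --parameters @azuredeploy.parameters.json
```
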
devtest-labs | Create Lab Windows Vm Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/quickstarts/create-lab-windows-vm-terraform.md | Title: 'Quickstart: Create a lab in Azure DevTest Labs using Terraform' description: 'In this article, you create a Windows virtual machine in a lab within Azure DevTest Labs using Terraform' Last updated 4/14/2023-+ |
devtest-labs | Troubleshoot Vm Deployment Failures | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/troubleshoot-vm-deployment-failures.md | |
devtest-labs | Tutorial Create Custom Lab | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/tutorial-create-custom-lab.md | |
devtest-labs | Tutorial Use Custom Lab | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/tutorial-use-custom-lab.md | |
devtest-labs | Use Command Line Start Stop Virtual Machines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/use-command-line-start-stop-virtual-machines.md | |
event-grid | Event Handlers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-handlers.md | Title: Azure Event Grid event handlers description: Describes supported event handlers for Azure Event Grid. Azure Automation, Functions, Event Hubs, Hybrid Connections, Logic Apps, Service Bus, Queue Storage, Webhooks. Previously updated : 03/15/2022 Last updated : 06/16/2023 # Event handlers in Azure Event Grid |
event-hubs | Event Hubs Dotnet Standard Getstarted Send | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-dotnet-standard-getstarted-send.md | This section shows you how to create a .NET Core console application to send eve ## Receive events from the event hub This section shows how to write a .NET Core console application that receives events from an event hub using an event processor. The event processor simplifies receiving events from event hubs. -> [!WARNING] -> If you run this code on **Azure Stack Hub**, you will experience runtime errors unless you target a specific Storage API version. That's because the Event Hubs SDK uses the latest available Azure Storage API available in Azure that may not be available on your Azure Stack Hub platform. Azure Stack Hub may support a different version of Storage Blob SDK than those typically available on Azure. If you are using Azure Blob Storage as a checkpoint store, check the [supported Azure Storage API version for your Azure Stack Hub build](/azure-stack/user/azure-stack-acs-differences?#api-version) and target that version in your code. -> -> For example, If you are running on Azure Stack Hub version 2005, the highest available version for the Storage service is version 2019-02-02. By default, the Event Hubs SDK client library uses the highest available version on Azure (2019-07-07 at the time of the release of the SDK). In this case, besides following steps in this section, you will also need to add code to target the Storage service API version 2019-02-02. For an example on how to target a specific Storage API version, see [this sample on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs.Processor/samples/). --- ### Create an Azure Storage Account and a blob container In this quickstart, you use Azure Storage as the checkpoint store. Follow these steps to create an Azure Storage account. 1. [Create an Azure Storage account](../storage/common/storage-account-create.md?tabs=azure-portal) 2. [Create a blob container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) 3. Authenticate to the blob container using either Azure AD (passwordless) authentication or a connection string to the namespace.++ ## [Passwordless (Recommended)](#tab/passwordless) |
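
The storage account and blob container referenced in the checkpoint-store steps above can also be created from the command line. A rough sketch with placeholder names (the storage account name must be globally unique); `--auth-mode login` assumes you use the passwordless (Azure AD) option:

```bash
# Placeholder resource names; substitute your own.
az group create --name rg-eventhubs-quickstart --location eastus

az storage account create \
  --name <storage-account-name> \
  --resource-group rg-eventhubs-quickstart \
  --location eastus \
  --sku Standard_LRS

# Container that the event processor uses as its checkpoint store.
az storage container create \
  --name checkpointstore \
  --account-name <storage-account-name> \
  --auth-mode login
```
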
event-hubs | Event Hubs Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-features.md | If a reader disconnects from a partition, when it reconnects it begins reading a > [!IMPORTANT] > Offsets are provided by the Event Hubs service. It's the responsibility of the consumer to checkpoint as events are processed. -> [!NOTE] -> If you are using Azure Blob Storage as the checkpoint store in an environment that supports a different version of Storage Blob SDK than those typically available on Azure, you'll need to use code to change the Storage service API version to the specific version supported by that environment. For example, if you are running [Event Hubs on an Azure Stack Hub version 2002](/azure-stack/user/event-hubs-overview), the highest available version for the Storage service is version 2017-11-09. In this case, you need to use code to target the Storage service API version to 2017-11-09. For an example on how to target a specific Storage API version, see these samples on GitHub: -> - [.NET](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs.Processor/samples/). -> - [Java](https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/eventhubs/azure-messaging-eventhubs-checkpointstore-blob/src/samples/java/com/azure/messaging/eventhubs/checkpointstore/blob/) -> - [JavaScript](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/eventhub/eventhubs-checkpointstore-blob/samples/v1/javascript) or [TypeScript](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/eventhub/eventhubs-checkpointstore-blob/samples/v1/typescript) -> - [Python](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventhub/azure-eventhub-checkpointstoreblob-aio/samples/) ### Log compaction |
event-hubs | Event Hubs Go Get Started Send | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-go-get-started-send.md | Don't run the application yet. You first need to run the receiver app and then t State such as leases on partitions and checkpoints in the event stream are shared between receivers using an Azure Storage container. You can create a storage account and container with the Go SDK, but you can also create one by following the instructions in [About Azure storage accounts](../storage/common/storage-account-create.md). + ### Go packages To receive the messages, get the Go packages for Event Hubs as shown in the following example. |
event-hubs | Event Hubs Java Get Started Send | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-java-get-started-send.md | Build the program, and ensure that there are no errors. You'll run this program The code in this tutorial is based on the [EventProcessorClient sample on GitHub](https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/eventhubs/azure-messaging-eventhubs-checkpointstore-blob/src/samples/java/com/azure/messaging/eventhubs/checkpointstore/blob/EventProcessorBlobCheckpointStoreSample.java), which you can examine to see the full working application. -> [!WARNING] -> If you run this code on Azure Stack Hub, you will experience runtime errors unless you target a specific Storage API version. That's because the Event Hubs SDK uses the latest available Azure Storage API available in Azure that may not be available on your Azure Stack Hub platform. Azure Stack Hub may support a different version of Azure Blob Storage SDK than those typically available on Azure. If you are using Azure Blob Storage as a checkpoint store, check the [supported Azure Storage API version for your Azure Stack Hub build](/azure-stack/user/azure-stack-acs-differences?#api-version) and target that version in your code. -> -> For example, If you are running on Azure Stack Hub version 2005, the highest available version for the Storage service is version 2019-02-02. By default, the Event Hubs SDK client library uses the highest available version on Azure (2019-07-07 at the time of the release of the SDK). In this case, besides following steps in this section, you will also need to add code to target the Storage service API version 2019-02-02. For an example on how to target a specific Storage API version, see [this sample on GitHub](https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/eventhubs/azure-messaging-eventhubs-checkpointstore-blob/src/samples/java/com/azure/messaging/eventhubs/checkpointstore/blob/EventProcessorWithCustomStorageVersion.java). - ### Create an Azure Storage and a blob container |
event-hubs | Event Hubs Node Get Started Send | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-node-get-started-send.md | Congratulations! You have now sent events to an event hub. ## Receive events In this section, you receive events from an event hub by using an Azure Blob storage checkpoint store in a JavaScript application. It performs metadata checkpoints on received messages at regular intervals in an Azure Storage blob. This approach makes it easy to continue receiving messages later from where you left off. -> [!WARNING] -> If you run this code on Azure Stack Hub, you will experience runtime errors unless you target a specific Storage API version. That's because the Event Hubs SDK uses the latest available Azure Storage API available in Azure that may not be available on your Azure Stack Hub platform. Azure Stack Hub may support a different version of Storage Blob SDK than those typically available on Azure. If you are using Azure Blog Storage as a checkpoint store, check the [supported Azure Storage API version for your Azure Stack Hub build](/azure-stack/user/azure-stack-acs-differences?#api-version) and target that version in your code. -> -> For example, If you are running on Azure Stack Hub version 2005, the highest available version for the Storage service is version 2019-02-02. By default, the Event Hubs SDK client library uses the highest available version on Azure (2019-07-07 at the time of the release of the SDK). In this case, besides following steps in this section, you will also need to add code to target the Storage service API version 2019-02-02. For an example on how to target a specific Storage API version, see [JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/eventhub/eventhubs-checkpointstore-blob/samples/v1/javascript/receiveEventsWithApiSpecificStorage.js) and [TypeScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/eventhub/eventhubs-checkpointstore-blob/samples/v1/typescript/src/receiveEventsWithApiSpecificStorage.ts) samples on GitHub. ### Create an Azure storage account and a blob container |
event-hubs | Event Hubs Python Get Started Send | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-python-get-started-send.md | In this section, create a Python script to send events to the event hub that you This quickstart uses Azure Blob storage as a checkpoint store. The checkpoint store is used to persist checkpoints (that is, the last read positions). -> [!WARNING] -> If you run this code on Azure Stack Hub, you will experience runtime errors unless you target a specific Storage API version. That's because the Event Hubs SDK uses the latest available Azure Storage API available in Azure that may not be available on your Azure Stack Hub platform. Azure Stack Hub may support a different version of Storage Blob SDK than those typically available on Azure. If you are using Azure Blog Storage as a checkpoint store, check the [supported Azure Storage API version for your Azure Stack Hub build](/azure-stack/user/azure-stack-acs-differences?#api-version) and target that version in your code. -> -> For example, If you are running on Azure Stack Hub version 2005, the highest available version for the Storage service is version 2019-02-02. By default, the Event Hubs SDK client library uses the highest available version on Azure (2019-07-07 at the time of the release of the SDK). In this case, besides following steps in this section, you will also need to add code to target the Storage service API version 2019-02-02. For an example on how to target a specific Storage API version, see the [synchronous](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventhub/azure-eventhub-checkpointstoreblob/samples/receive_events_using_checkpoint_store_storage_api_version.py) and [asynchronous](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventhub/azure-eventhub-checkpointstoreblob-aio/samples/receive_events_using_checkpoint_store_storage_api_version_async.py) samples on GitHub. ### Create an Azure storage account and a blob container Create an Azure storage account and a blob container in it by doing the following steps: |
event-hubs | Event Processor Balance Partition Load | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-processor-balance-partition-load.md | If an event processor disconnects from a partition, another instance can resume When the checkpoint is performed to mark an event as processed, an entry in checkpoint store is added or updated with the event's offset and sequence number. Users should decide the frequency of updating the checkpoint. Updating after each successfully processed event can have performance and cost implications as it triggers a write operation to the underlying checkpoint store. Also, checkpointing every single event is indicative of a queued messaging pattern for which a Service Bus queue might be a better option than an event hub. The idea behind Event Hubs is that you get "at least once" delivery at great scale. By making your downstream systems idempotent, it's easy to recover from failures or restarts that result in the same events being received multiple times. -> [!NOTE] -> If you are using Azure Blob Storage as the checkpoint store in an environment that supports a different version of Storage Blob SDK than those typically available on Azure, you'll need to use code to change the Storage service API version to the specific version supported by that environment. For example, if you are running [Event Hubs on an Azure Stack Hub version 2002](/azure-stack/user/event-hubs-overview), the highest available version for the Storage service is version 2017-11-09. In this case, you need to use code to target the Storage service API version to 2017-11-09. For an example on how to target a specific Storage API version, see these samples on GitHub: -> - [.NET](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs.Processor/samples/). -> - [Java](https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/eventhubs/azure-messaging-eventhubs-checkpointstore-blob/src/samples/java/com/azure/messaging/eventhubs/checkpointstore/blob/) -> - [JavaScript](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/eventhub/eventhubs-checkpointstore-blob/samples/v1/javascript) or [TypeScript](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/eventhub/eventhubs-checkpointstore-blob/samples/v1/typescript) -> - [Python](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventhub/azure-eventhub-checkpointstoreblob-aio/samples/) ++ ## Thread safety and processor instances |
event-hubs | Troubleshoot Checkpoint Store Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/troubleshoot-checkpoint-store-issues.md | + + Title: Troubleshoot storage checkpoint store issues in Azure Event Hubs +description: This article describes how to troubleshoot checkpoint store issues when using Azure Blob Storage as the checkpoint store in Azure Event Hubs. + Last updated : 06/16/2023+++# Troubleshoot checkpoint store issues +This article discusses issues with using Blob Storage as a checkpoint store. ++## Issues with using Blob Storage as a checkpoint store +You may see issues when using a blob storage account as a checkpoint store that are related to delays in processing, or failures to create checkpoints when using the SDK, etc. +++## Using Blob Storage checkpoint store on Azure Stack Hub +If you're using Azure Blob Storage as the checkpoint store in an environment that supports a different version of Storage Blob SDK than the ones that are typically available on Azure, you need to use code to change the Storage service API version to the specific version supported by that environment. For example, if you're running [Event Hubs on an Azure Stack Hub version 2002](/azure-stack/user/event-hubs-overview), the highest available version for the Storage service is version 2017-11-09. In this case, you need to use code to target the Storage service API version to 2017-11-09. For an example of how to target a specific Storage API version, see these samples on GitHub: ++- [.NET](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs.Processor/samples/) +- [Java](https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/eventhubs/azure-messaging-eventhubs-checkpointstore-blob/src/samples/java/com/azure/messaging/eventhubs/checkpointstore/blob/EventProcessorWithCustomStorageVersion.java). +- [JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/eventhub/eventhubs-checkpointstore-blob/samples/v1/javascript/receiveEventsWithApiSpecificStorage.js) or [TypeScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/eventhub/eventhubs-checkpointstore-blob/samples/v1/typescript/src/receiveEventsWithApiSpecificStorage.ts) +- Python - [Synchronous](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventhub/azure-eventhub-checkpointstoreblob/samples/receive_events_using_checkpoint_store_storage_api_version.py), [Asynchronous](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventhub/azure-eventhub-checkpointstoreblob-aio/samples/receive_events_using_checkpoint_store_storage_api_version_async.py) ++If you run Event Hubs receiver that uses Blob Storage as the checkpoint store without targeting the version that Azure Stack Hub supports, you receive the following error message: ++``` +The value for one of the HTTP headers is not in the correct format +``` +++### Sample error message in Python +For Python, an error of `azure.core.exceptions.HttpResponseError` is passed to the error handler `on_error(partition_context, error)` of `EventHubConsumerClient.receive()`. But, the method `receive()` doesn't raise an exception. `print(error)` prints the following exception information: ++```bash +The value for one of the HTTP headers is not in the correct format. 
++RequestId:f048aee8-a90c-08ba-4ce1-e69dba759297 +Time:2020-03-17T22:04:13.3559296Z +ErrorCode:InvalidHeaderValue +Error:None +HeaderName:x-ms-version +HeaderValue:2019-07-07 +``` ++The logger logs two warnings like the following ones: ++```bash +WARNING:azure.eventhub.extensions.checkpointstoreblobaio._blobstoragecsaio: +An exception occurred during list_ownership for namespace '<namespace-name>.eventhub.<region>.azurestack.corp.microsoft.com' eventhub 'python-eh-test' consumer group '$Default'. ++Exception is HttpResponseError('The value for one of the HTTP headers is not in the correct format.\nRequestId:f048aee8-a90c-08ba-4ce1-e69dba759297\nTime:2020-03-17T22:04:13.3559296Z\nErrorCode:InvalidHeaderValue\nError:None\nHeaderName:x-ms-version\nHeaderValue:2019-07-07') ++WARNING:azure.eventhub.aio._eventprocessor.event_processor:EventProcessor instance '26d84102-45b2-48a9-b7f4-da8916f68214' of eventhub 'python-eh-test' consumer group '$Default'. An error occurred while load-balancing and claiming ownership. ++The exception is HttpResponseError('The value for one of the HTTP headers is not in the correct format.\nRequestId:f048aee8-a90c-08ba-4ce1-e69dba759297\nTime:2020-03-17T22:04:13.3559296Z\nErrorCode:InvalidHeaderValue\nError:None\nHeaderName:x-ms-version\nHeaderValue:2019-07-07'). Retrying after 71.45254944090853 seconds +``` +++## Next steps ++See the following article learn about partitioning and checkpointing: [Balance partition load across multiple instances of your application](event-processor-balance-partition-load.md) |
expressroute | Designing For Disaster Recovery With Expressroute Privatepeering | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/designing-for-disaster-recovery-with-expressroute-privatepeering.md | -However, taking Murphy's popular adage--*if anything can go wrong, it will*--into consideration, in this article let us focus on solutions that go beyond failures that can be addressed using a single ExpressRoute circuit. We'll be looking into network architecture considerations for building robust backend network connectivity for disaster recovery using geo-redundant ExpressRoute circuits. +However, taking Murphy's popular adage--*if anything can go wrong, it will*--into consideration, in this article let us focus on solutions that go beyond failures that can be addressed using a single ExpressRoute circuit. We'll look into network architecture considerations for building robust backend network connectivity for disaster recovery using geo-redundant ExpressRoute circuits. ->[!NOTE] ->The concepts described in this article equally applies when an ExpressRoute circuit is created under Virtual WAN or outside of it. +> [!NOTE] +> The concepts described in this article apply equally when an ExpressRoute circuit is created under Virtual WAN or outside of it. > ## Need for redundant connectivity solution -There are possibilities and instances where an ExpressRoute peering locations or an entire regional service (be it that of Microsoft, network service providers, customer, or other cloud service providers) gets degraded. The root cause for such regional wide service impact include natural calamity. That's why, for business continuity and mission critical applications it's important to plan for disaster recovery. +There are possibilities and instances where an ExpressRoute peering location or an entire regional service gets degraded. The root cause of such a region-wide service outage can be a natural calamity. Therefore, it's important to plan for disaster recovery for business continuity and mission critical applications. No matter what, whether you run your mission critical applications in an Azure region or on-premises or anywhere else, you can use another Azure region as your failover site. The following articles addresses disaster recovery from applications and frontend access perspectives: - [Enterprise-scale disaster recovery][Enterprise DR] - [SMB disaster recovery with Azure Site Recovery][SMB DR] -If you rely on ExpressRoute connectivity between your on-premises network and Microsoft for mission critical operations, you need to consider the following to plan for disaster recovery over ExpressRoute +If you rely on ExpressRoute connectivity between your on-premises network and Microsoft, you need to consider the following to plan for disaster recovery over ExpressRoute: - using geo-redundant ExpressRoute circuits - using diverse service provider network(s) for different ExpressRoute circuit If you rely on ExpressRoute connectivity between your on-premises network and Mi ## Challenges of using multiple ExpressRoute circuits -When you interconnect the same set of networks using more than one connection, you introduce parallel paths between the networks. Parallel paths, when not properly architected, could lead to asymmetrical routing. If you have stateful entities (for example, NAT, firewall) in the path, asymmetrical routing could block traffic flow. 
Typically, over the ExpressRoute private peering path you won't come across stateful entities such as NAT or Firewalls. That's why, asymmetrical routing over ExpressRoute private peering doesn't necessarily block traffic flow. +When you interconnect the same set of networks using more than one connection, you introduce parallel paths between the networks. Parallel paths, when not properly architected, could lead to asymmetrical routing. If you have stateful entities, for example, a NAT or firewall in the path, asymmetrical routing could block traffic flow. Typically, over the ExpressRoute private peering path you don't come across stateful entities such as NAT or Firewalls. Therefore, asymmetrical routing over ExpressRoute private peering doesn't necessarily block traffic flow. However, if you load balance traffic across geo-redundant parallel paths, regardless of whether you have stateful entities or not, you would experience inconsistent network performance. These geo-redundant parallel paths can be through the same metro or different metro found on the [providers by location](expressroute-locations-providers.md#partners) page. ### Redundancy with ExpressRoute circuits in same metro -[Many metros](expressroute-locations-providers.md#global-commercial-azure) have two ExpressRoute locations. An example would be *Amsterdam* and *Amsterdam2*. When designing redundancy, you could build two parallel paths to Azure with both locations in the same metro. You could do this with the same provider or choose to work with a different service provider to improve resiliency. Another advantage of this design is when application failover happens, end-to-end latency between your on-premises applications and Microsoft stays approximately the same. However, if there is a natural disaster such as an earthquake, connectivity for both paths may no longer be available. +[Many metros](expressroute-locations-providers.md#global-commercial-azure) have two ExpressRoute locations. An example would be *Amsterdam* and *Amsterdam2*. When designing redundancy, you could build two parallel paths to Azure with both locations in the same metro. You accomplish this task with the same provider or choose to work with a different service provider to improve resiliency. Another advantage of this design is when application failover happens, end-to-end latency between your on-premises applications and Microsoft stays approximately the same. However, if there's a natural disaster such as an earthquake, connectivity for both paths may no longer be available. ### Redundancy with ExpressRoute circuits in different metros -When using different metros for redundancy, you should select the secondary location in the same [geo-political region](expressroute-locations-providers.md#locations). To choose a location outside of the geo-political region, you'll need to use Premium SKU for both circuits in the parallel paths. The advantage of this configuration is the chances of a natural disaster causing an outage to both links are much lower but at the cost of increased latency end-to-end. +When using different metros for redundancy, you should select the secondary location in the same [geo-political region](expressroute-locations-providers.md#locations). To choose a location outside of the geo-political region, you need to use Premium SKU for both circuits in the parallel paths. The advantage of this configuration is the chances of a natural disaster causing an outage to both links are lower but at the cost of increased latency end-to-end. 
->[!NOTE] ->Enabling BFD on the ExpressRoute circuits will help with faster link failure detection between Microsoft Enterprise Edge (MSEE) devices and the Customer/Partner Edge routers. However, the overall failover and convergence to redundant site may take up to 180 seconds under some failure conditions and you may experience increased latency or performance degradation during this time. +> [!NOTE] +> Enabling BFD on the ExpressRoute circuits will help with faster link failure detection between Microsoft Enterprise Edge (MSEE) devices and the Customer/Partner Edge routers. However, the overall failover and convergence to redundant site may take up to 180 seconds under some failure conditions and you may experience increased latency or performance degradation during this time. In this article, let's discuss how to address challenges you may face when configuring geo-redundant paths. Let's consider the example network illustrated in the following diagram. In the :::image type="content" source="./media/designing-for-disaster-recovery-with-expressroute-pvt/one-region.png" alt-text="Diagram of small to medium size on-premises network considerations."::: -By default, if you advertise routes identically over all the ExpressRoute paths, Azure will load-balance on-premises bound traffic across all the ExpressRoute paths using Equal-cost multi-path (ECMP) routing. +By default, if you advertise routes identically over all the ExpressRoute paths, Azure load-balances on-premises bound traffic across all the ExpressRoute paths using Equal-cost multi-path (ECMP) routing. However, with the geo-redundant ExpressRoute circuits we need to take into consideration different network performances with different network paths (particularly for network latency). To get more consistent network performance during normal operation, you may want to prefer the ExpressRoute circuit that offers the minimal latency. The following screenshot illustrates configuring the weight of an ExpressRoute c :::image type="content" source="./media/designing-for-disaster-recovery-with-expressroute-pvt/configure-weight.png" alt-text="Screenshot of configuring connection weight via Azure portal."::: -The following diagram illustrates influencing ExpressRoute path selection using connection weight. The default connection weight is 0. In the example below, the weight of the connection for ExpressRoute 1 is configured as 100. When a VNet receives a route prefix advertised via more than one ExpressRoute circuit, the VNet will prefer the connection with the highest weight. +The following diagram illustrates influencing ExpressRoute path selection using connection weight. The default connection weight is 0. In the following example, the weight of the connection for ExpressRoute 1 is configured as 100. When a VNet receives a route prefix advertised via more than one ExpressRoute circuit, the VNet prefers the connection with the highest weight. :::image type="content" source="./media/designing-for-disaster-recovery-with-expressroute-pvt/connection-weight.png" alt-text="Diagram of influencing path selection using connection weight."::: Let's consider the example illustrated in the following diagram. 
In the example, :::image type="content" source="./media/designing-for-disaster-recovery-with-expressroute-pvt/multi-region.png" alt-text="Diagram of large distributed on-premises network considerations."::: -How we architect the disaster recovery has an impact on how cross-regional to cross location (region1/region2 to location2/location1) traffic is routed. Let's consider two different disaster architectures that routes cross region-location traffic differently. +How we architect the disaster recovery has an effect on how cross-regional to cross location (region1/region2 to location2/location1) traffic is routed. Let's consider two different disaster architectures that route cross region-location traffic differently. ### Scenario 1 You can architect the scenario using connection weight to influence VNets to pre ### Scenario 2 -The Scenario 2 is illustrated in the following diagram. In the diagram, green lines indicate paths for traffic flow between VNet1 and on-premises networks. The blue lines indicate paths for traffic flow between VNet2 and on-premises networks. In the steady-state (solid lines in the diagram), all the traffic between VNets and on-premises locations flow via Microsoft backbone for the most part, and flows through the interconnection between on-premises locations only in the failure state (dotted lines in the diagram) of an ExpressRoute. +Scenario 2 is illustrated in the following diagram. In the diagram, green lines indicate paths for traffic flow between VNet1 and on-premises networks. The blue lines indicate paths for traffic flow between VNet2 and on-premises networks. In the steady state (solid lines in the diagram), all the traffic between VNets and on-premises locations flows over the Microsoft backbone, and flows through the interconnection between on-premises locations only in the failure state (dotted lines in the diagram) of an ExpressRoute circuit. :::image type="content" source="./media/designing-for-disaster-recovery-with-expressroute-pvt/multi-region-arch2.png" alt-text="Diagram of traffic flow for second scenario."::: In this article, we discussed how to design for disaster recovery of an ExpressR [HA]: ./designing-for-high-availability-with-expressroute.md [Enterprise DR]: https://azure.microsoft.com/solutions/architecture/disaster-recovery-enterprise-scale-dr/ [SMB DR]: https://azure.microsoft.com/solutions/architecture/disaster-recovery-smb-azure-site-recovery/-[con wgt]: ./expressroute-optimize-routing.md#solution-assign-a-high-weight-to-local-connection -[AS Path Pre]: ./expressroute-optimize-routing.md#solution-use-as-path-prepending |
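
The connection weight shown in the portal screenshot referenced above can also be set from the command line. As a hedged sketch (placeholder names; the connection here is assumed to be the one linking the ExpressRoute virtual network gateway to the circuit, and the CLI is assumed to expose the weight as `--routing-weight`):

```bash
# Placeholder names; <connection-name> is the connection between the ExpressRoute gateway and the circuit.
az network vpn-connection update \
  --name <connection-name> \
  --resource-group <resource-group> \
  --routing-weight 100
```

A higher routing weight makes the VNet prefer that connection when the same prefix is received over more than one ExpressRoute circuit, matching the portal behavior described above.
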
expressroute | Designing For High Availability With Expressroute | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/designing-for-high-availability-with-expressroute.md | Title: 'Azure ExpressRoute: Designing for high availability' description: This page provides architectural recommendations for high availability while using Azure ExpressRoute. - Previously updated : 06/28/2019 Last updated : 06/15/2023 - # Designing for high availability with ExpressRoute -ExpressRoute is designed for high availability to provide carrier grade private network connectivity to Microsoft resources. In other words, there is no single point of failure in the ExpressRoute path within Microsoft network. To maximize the availability, the customer and the service provider segment of your ExpressRoute circuit should also be architected for high availability. In this article, first let's look into network architecture considerations for building robust network connectivity using an ExpressRoute, then let's look into the fine-tuning features that help you to improve the high availability of your ExpressRoute circuit. +ExpressRoute is designed for high availability to provide carrier grade private network connectivity to Microsoft resources. In other words, there's no single point of failure in the ExpressRoute path within Microsoft network. To maximize the availability, the customer and the service provider segment of your ExpressRoute circuit should also be architected for high availability. In this article, first let's look into network architecture considerations for building robust network connectivity using an ExpressRoute, then let's look into the fine-tuning features that help you to improve the high availability of your ExpressRoute circuit. ->[!NOTE] ->The concepts described in this article equally applies when an ExpressRoute circuit is created under Virtual WAN or outside of it. +> [!NOTE] +> The concepts described in this article equally applies when an ExpressRoute circuit is created under Virtual WAN or outside of it. > ## Architecture considerations The following figure illustrates the recommended way to connect using an Express [![1]][1] -For high availability, it's essential to maintain the redundancy of the ExpressRoute circuit throughout the end-to-end network. In other words, you need to maintain redundancy within your on-premises network, and shouldn't compromise redundancy within your service provider network. Maintaining redundancy at the minimum implies avoiding single point of network failures. Having redundant power and cooling for the network devices will further improve the high availability. +For high availability, it's essential to maintain the redundancy of the ExpressRoute circuit throughout the end-to-end network. In other words, you need to maintain redundancy within your on-premises network, and shouldn't compromise redundancy within your service provider network. Maintaining redundancy at the minimum implies avoiding single point of network failures. Having redundant power and cooling for the network devices further improves the high availability. ### First mile physical layer design considerations - If you terminate both the primary and secondary connections of an ExpressRoute circuits on the same Customer Premises Equipment (CPE), you're compromising the high availability within your on-premises network. 
Additionally, if you configure both the primary and secondary connections via the same port of a CPE (either by terminating the two connections under different subinterfaces or by merging the two connections within the partner network), you're forcing the partner to compromise high availability on their network segment as well. This compromise is illustrated in the following figure. + If you terminate both the primary and secondary connections of an ExpressRoute circuit on the same Customer Premises Equipment (CPE), you're compromising the high availability within your on-premises network. Additionally, if you configure both the primary and secondary connections using the same port of a CPE, you're forcing the partner to compromise high availability on their network segment as well. This event can happen by either terminating the two connections under different subinterfaces or by merging the two connections within the partner network. This compromise is illustrated in the following figure. [![2]][2] For geo-redundant design considerations, see [Designing for disaster recovery wi ### Active-active connections -Microsoft network is configured to operate the primary and secondary connections of ExpressRoute circuits in active-active mode. However, through your route advertisements, you can force the redundant connections of an ExpressRoute circuit to operate in active-passive mode. Advertising more specific routes and BGP AS path prepending are the common techniques used to make one path preferred over the other. +Microsoft network is configured to operate the primary and secondary connections of ExpressRoute circuits in active-active mode. However, through your route advertisements, you can force the redundant connections of an ExpressRoute circuit to operate in active-passive mode. Advertising more specific routes and BGP AS path prepending are the common techniques used to make one path preferred over the other. -To improve high availability, it's recommended to operate both the connections of an ExpressRoute circuit in active-active mode. If you let the connections operate in active-active mode, Microsoft network will load balance the traffic across the connections on per-flow basis. +To improve high availability, it's recommended to operate both the connections of an ExpressRoute circuit in active-active mode. If you let the connections operate in active-active mode, Microsoft network load balances the traffic across the connections on a per-flow basis. Running the primary and secondary connections of an ExpressRoute circuit in active-passive mode face the risk of both the connections failing following a failure in the active path. The common causes for failure on switching over are lack of active management of the passive connection, and passive connection advertising stale routes. -Alternatively, running the primary and secondary connections of an ExpressRoute circuit in active-active mode, results in only about half the flows failing and getting rerouted, following an ExpressRoute connection failure. Thus, active-active mode will significantly help improve the Mean Time To Recover (MTTR). +Alternatively, running the primary and secondary connections of an ExpressRoute circuit in active-active mode results in only about half the flows failing and getting rerouted. Therefore, an active-active connection significantly helps improve the Mean Time To Recover (MTTR). 
> [!NOTE]-> During a maintenance activity or in case of unplanned events impacting one of the connection, Microsoft will prefer to use AS path prepending to drain traffic over to the healthy connection. You will need to ensure the traffic is able to route over the healthy path when path prepend is configured from Microsoft and required route advertisements are configured appropriately to avoid any service disruption. +> During a maintenance activity or in case of unplanned events impacting one of the connections, Microsoft will prefer to use AS path prepending to drain traffic over to the healthy connection. You will need to ensure the traffic is able to route over the healthy path when path prepend is configured from Microsoft and required route advertisements are configured appropriately to avoid any service disruption. > ### NAT for Microsoft peering -Microsoft peering is designed for communication between public end-points. So commonly, on-premises private endpoints are Network Address Translated (NATed) with public IP on the customer or partner network before they communicate over Microsoft peering. Assuming you use both the primary and secondary connections in active-active mode, where and how you NAT has an impact on how quickly you recover following a failure in one of the ExpressRoute connections. Two different NAT options are illustrated in the following figure: +Microsoft peering is designed for communication between public end-points. So commonly, on-premises private endpoints are Network Address Translated (NATed) with public IP on the customer or partner network before they communicate over Microsoft peering. Assuming you use both the primary and secondary connections in an active-active setup, where and how you apply NAT affects how quickly you recover following a failure in one of the ExpressRoute connections. Two different NAT options are illustrated in the following figure: [![3]][3] #### Option 1: -NAT gets applied after splitting the traffic between the primary and secondary connections of the ExpressRoute circuit. To meet the stateful requirements of NAT, independent NAT pools are used for the primary and the secondary devices. The return traffic will arrive on the same edge device through which the flow egressed. +NAT gets applied after splitting the traffic between the primary and secondary connections of the ExpressRoute circuit. To meet the stateful requirements of NAT, independent NAT pools are used for the primary and the secondary devices. The return traffic arrives on the same edge device through which the flow egressed. -If the ExpressRoute connection fails, the ability to reach the corresponding NAT pool is then broken. That's why all broken network flows have to be re-established either by TCP or by the application layer following the corresponding window timeout. During the failure, Azure can't reach the on-premises servers using the corresponding NAT until connectivity has been restored for either the primary or secondary connections of the ExpressRoute circuit. +If the ExpressRoute connection fails, the ability to reach the corresponding NAT pool is then broken. Therefore, all broken network flows have to be re-established either by TCP or by the application layer following the corresponding window timeout. During the failure, Azure can't reach the on-premises servers using the corresponding NAT until connectivity has been restored for either the primary or secondary connections of the ExpressRoute circuit. 
#### Option 2: -A common NAT pool is used before splitting the traffic between the primary and secondary connections of the ExpressRoute circuit. It's important to make the distinction that the common NAT pool before splitting the traffic doesn't mean it will introduce a single-point of failure as such compromising high-availability. +A common NAT pool is used before splitting the traffic between the primary and secondary connections of the ExpressRoute circuit. It's important to make the distinction that using a common NAT pool before splitting the traffic doesn't introduce a single point of failure, and so doesn't compromise high availability. -The NAT pool is reachable even after the primary or secondary connection fail. That's why the network layer itself can reroute the packets and help recover faster following a failure. +The NAT pool is reachable even after the primary or secondary connection fails. So the network layer itself can reroute the packets and help recover faster following a failure. > [!NOTE] > * If you use NAT option 1 (independent NAT pools for primary and secondary ExpressRoute connections) and map a port of an IP address from one of the NAT pool to an on-premises server, the server will not be reachable via the ExpressRoute circuit when the corresponding connection fails. ExpressRoute supports BFD over private peering. BFD reduces detection time of fa ## Next steps -In this article, we discussed how to design for high availability of an ExpressRoute circuit connectivity. An ExpressRoute circuit peering point is pinned to a geographical location and therefore could be impacted by catastrophic failure that impacts the entire location. +In this article, we discussed how to design for high availability of ExpressRoute circuit connectivity. An ExpressRoute circuit peering point is pinned to a geographical location and therefore can be affected by a catastrophic failure that affects the entire location. -For design considerations to build geo-redundant network connectivity to Microsoft backbone that can withstand catastrophic failures, which impact an entire region, see [Designing for disaster recovery with ExpressRoute private peering][DR]. +For design considerations to build geo-redundant network connectivity to Microsoft backbone that can withstand catastrophic failures, which affect an entire region, see [Designing for disaster recovery with ExpressRoute private peering][DR]. <!--Image References--> [1]: ./media/designing-for-high-availability-with-expressroute/exr-reco.png "Recommended way to connect using ExpressRoute" |
expressroute | Expressroute Howto Coexist Resource Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-coexist-resource-manager.md | Title: 'Configure ExpressRoute and S2S VPN coexisting connections: Azure PowerShell' -description: Configure ExpressRoute and a Site-to-Site VPN connection that can coexist for the Resource Manager model using PowerShell. + Title: Configure ExpressRoute and S2S VPN coexisting connections with Azure PowerShell +description: Configure ExpressRoute and a site-to-site VPN connection that can coexist for the Resource Manager model using Azure PowerShell. - Previously updated : 09/16/2021 Last updated : 06/15/2023 - -# Configure ExpressRoute and Site-to-Site coexisting connections using PowerShell +# Configure ExpressRoute and site-to-site coexisting connections using PowerShell > [!div class="op_single_selector"] > * [PowerShell - Resource Manager](expressroute-howto-coexist-resource-manager.md) > * [PowerShell - Classic](expressroute-howto-coexist-classic.md) > -> -This article helps you configure ExpressRoute and Site-to-Site VPN connections that coexist. Having the ability to configure Site-to-Site VPN and ExpressRoute has several advantages. You can configure Site-to-Site VPN as a secure failover path for ExpressRoute, or use Site-to-Site VPNs to connect to sites that are not connected through ExpressRoute. We will cover the steps to configure both scenarios in this article. This article applies to the Resource Manager deployment model. +This article helps you configure ExpressRoute and site-to-site VPN connections that coexist. Having the ability to configure site-to-site VPN and ExpressRoute has several advantages. You can configure site-to-site VPN as a secure failover path for ExpressRoute, or use site-to-site VPNs to connect to sites that aren't connected through ExpressRoute. We cover the steps to configure both scenarios in this article. This article applies to the Resource Manager deployment model. -Configuring Site-to-Site VPN and ExpressRoute coexisting connections has several advantages: +Configuring site-to-site VPN and ExpressRoute coexisting connections has several advantages: -* You can configure a Site-to-Site VPN as a secure failover path for ExpressRoute. -* Alternatively, you can use Site-to-Site VPNs to connect to sites that are not connected through ExpressRoute. +* You can configure a site-to-site VPN as a secure failover path for ExpressRoute. +* Alternatively, you can use site-to-site VPNs to connect to sites that aren't connected through ExpressRoute. -The steps to configure both scenarios are covered in this article. This article applies to the Resource Manager deployment model and uses PowerShell. You can also configure these scenarios using the Azure portal, although documentation is not yet available. You can configure either gateway first. Typically, you will incur no downtime when adding a new gateway or gateway connection. +The steps to configure both scenarios are covered in this article. This article applies to the Resource Manager deployment model and uses PowerShell. You can also configure these scenarios using the Azure portal, although documentation isn't yet available. You can configure either gateway first. Typically, you don't experience any downtime when adding a new gateway or gateway connection. ->[!NOTE] ->If you want to create a Site-to-Site VPN over an ExpressRoute circuit, please see [this article](site-to-site-vpn-over-microsoft-peering.md). 
+> [!NOTE] +> If you want to create a site-to-site VPN over an ExpressRoute circuit, see [**site-to-site VPN over Microsoft peering**](site-to-site-vpn-over-microsoft-peering.md). > ## Limits and limitations+ * **Only route-based VPN gateway is supported.** You must use a route-based [VPN gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md). You also can use a route-based VPN gateway with a VPN connection configured for 'policy-based traffic selectors' as described in [Connect to multiple policy-based VPN devices](../vpn-gateway/vpn-gateway-connect-multiple-policybased-rm-ps.md).-* **ExpressRoute-VPN Gateway coexist configurations are not supported on the Basic SKU**. -* **If you want to use transit routing between ExpressRoute and VPN, the ASN of Azure VPN Gateway must be set to 65515 and Azure Route Server should be used.** Azure VPN Gateway supports the BGP routing protocol. For ExpressRoute and Azure VPN to work together, you must keep the Autonomous System Number of your Azure VPN gateway at its default value, 65515. If you previously selected an ASN other than 65515 and you change the setting to 65515, you must reset the VPN gateway for the setting to take effect. -* **The gateway subnet must be /27 or a shorter prefix**, (such as /26, /25), or you will receive an error message when you add the ExpressRoute virtual network gateway. -* **Coexistence in a dual-stack vnet is not supported.** If you are using ExpressRoute IPv6 support and a dual-stack ExpressRoute gateway, coexistence with VPN Gateway will not be possible. +* ExpressRoute-VPN Gateway coexist configurations are **not supported on the Basic SKU**. +* If you want to use transit routing between ExpressRoute and VPN, **the ASN of Azure VPN Gateway must be set to 65515, and Azure Route Server should be used.** Azure VPN Gateway supports the BGP routing protocol. For ExpressRoute and Azure VPN to work together, you must keep the Autonomous System Number of your Azure VPN gateway at its default value, 65515. If you previously selected an ASN other than 65515 and you change the setting to 65515, you must reset the VPN gateway for the setting to take effect. +* **The gateway subnet must be /27 or a shorter prefix**, such as /26, /25, or you receive an error message when you add the ExpressRoute virtual network gateway. +* **Coexistence in a dual-stack virtual network is not supported.** If you're using ExpressRoute IPv6 support and a dual-stack ExpressRoute gateway, coexistence with VPN Gateway isn't possible. ## Configuration designs-### Configure a Site-to-Site VPN as a failover path for ExpressRoute -You can configure a Site-to-Site VPN connection as a backup for ExpressRoute. This connection applies only to virtual networks linked to the Azure private peering path. There is no VPN-based failover solution for services accessible through Azure Microsoft peering. The ExpressRoute circuit is always the primary link. Data flows through the Site-to-Site VPN path only if the ExpressRoute circuit fails. To avoid asymmetrical routing, your local network configuration should also prefer the ExpressRoute circuit over the Site-to-Site VPN. You can prefer the ExpressRoute path by setting higher local preference for the routes received the ExpressRoute. ->[!NOTE] -> If you have ExpressRoute Microsoft Peering enabled, you can receive the public IP address of your Azure VPN gateway on the ExpressRoute connection. 
To set up your site-to-site VPN connection as a backup, you must configure your on-premises network so that the VPN connection is routed to the Internet. -> +### Configure a site-to-site VPN as a failover path for ExpressRoute ++You can configure a site-to-site VPN connection as a backup for your ExpressRoute connection. This connection applies only to virtual networks linked to the Azure private peering path. There's no VPN-based failover solution for services accessible through Azure Microsoft peering. The ExpressRoute circuit is always the primary link. Data flows through the site-to-site VPN path only if the ExpressRoute circuit fails. To avoid asymmetrical routing, your local network configuration should also prefer the ExpressRoute circuit over the site-to-site VPN. You can prefer the ExpressRoute path by setting a higher local preference for the routes received from ExpressRoute. > [!NOTE]-> * If you have ExpressRoute Microsoft peering enabled, you can receive the public IP address of your Azure VPN gateway on the ExpressRoute connection. To set up your site-to-site VPN connection as a backup, you must configure your on-premises network so that the VPN connection is routed to the Internet. +> +> * While the ExpressRoute circuit path is preferred over the site-to-site VPN when both routes are the same, Azure uses the longest prefix match to choose the route towards the packet's destination. > - --### Configure a Site-to-Site VPN to connect to sites not connected through ExpressRoute -You can configure your network where some sites connect directly to Azure over Site-to-Site VPN, and some sites connect through ExpressRoute. + - +### Configure a site-to-site VPN to connect to sites not connected through ExpressRoute +You can configure your network where some sites connect directly to Azure over site-to-site VPN, and some sites connect through ExpressRoute. + ## Selecting the steps to use+ There are two different sets of procedures to choose from. The configuration procedure that you select depends on whether you have an existing virtual network that you want to connect to, or you want to create a new virtual network. * I don't have a VNet and need to create one. - If you don't already have a virtual network, this procedure walks you through creating a new virtual network using Resource Manager deployment model and creating new ExpressRoute and Site-to-Site VPN connections. To configure a virtual network, follow the steps in [To create a new virtual network and coexisting connections](#new). + If you don't already have a virtual network, this procedure walks you through creating a new virtual network using Resource Manager deployment model and creating new ExpressRoute and site-to-site VPN connections. + * I already have a Resource Manager deployment model VNet. - You may already have a virtual network in place with an existing Site-to-Site VPN connection or ExpressRoute connection. In this scenario if the gateway subnet prefix is /28 or longer (/29, /30, etc.), you have to delete the existing gateway. The [To configure coexisting connections for an already existing VNet](#add) section walks you through deleting the gateway, and then creating new ExpressRoute and Site-to-Site VPN connections.
+ You may already have a virtual network in place with an existing site-to-site VPN connection or ExpressRoute connection. In this scenario if the gateway subnet prefix is /28 or longer (/29, /30, etc.), you have to delete the existing gateway. The steps to configure coexisting connections for an already existing VNet section walks you through deleting the gateway, and then creating new ExpressRoute and site-to-site VPN connections. - If you delete and recreate your gateway, you will have downtime for your cross-premises connections. However, your VMs and services will still be able to communicate out through the load balancer while you configure your gateway if they are configured to do so. + If you delete and recreate your gateway, you experience downtime for your cross-premises connections. However, your VMs and services can connect through the internet while you configure your gateway if they're configured to do so. ## Before you begin There are two different sets of procedures to choose from. The configuration pro [!INCLUDE [working with cloud shell](../../includes/expressroute-cloudshell-powershell-about.md)] -## <a name="new"></a>To create a new virtual network and coexisting connections -This procedure walks you through creating a VNet and Site-to-Site and ExpressRoute connections that will coexist. The cmdlets that you use for this configuration may be slightly different than what you might be familiar with. Be sure to use the cmdlets specified in these instructions. +#### [New virtual network and coexisting connections](#tab/new-virtual-network) ++This procedure walks you through creating a VNet and site-to-site and ExpressRoute connections that coexist. The cmdlets that you use for this configuration may be slightly different than what you might be familiar with. Be sure to use the cmdlets specified in these instructions. 1. Sign in and select your subscription. [!INCLUDE [sign in](../../includes/expressroute-cloud-shell-connect.md)]-2. Set variables. ++2. Define variables and create resource group. ```azurepowershell-interactive $location = "Central US" $resgrp = New-AzResourceGroup -Name "ErVpnCoex" -Location $location $VNetASN = 65515 ```-3. Create a virtual network including Gateway Subnet. For more information about creating a virtual network, see [Create a virtual network](../virtual-network/manage-virtual-network.md#create-a-virtual-network). For more information about creating subnets, see [Create a subnet](../virtual-network/virtual-network-manage-subnet.md#add-a-subnet) +3. Create a virtual network including the `GatewaySubnet`. For more information about creating a virtual network, see [Create a virtual network](../virtual-network/manage-virtual-network.md#create-a-virtual-network). For more information about creating subnets, see [Create a subnet](../virtual-network/virtual-network-manage-subnet.md#add-a-subnet) > [!IMPORTANT]- > The Gateway Subnet must be /27 or a shorter prefix (such as /26 or /25). - > + > The **GatewaySubnet** must be a /27 or a shorter prefix, such as /26 or /25. > - Create a new VNet. + Create a new virtual network. ```azurepowershell-interactive $vnet = New-AzVirtualNetwork -Name "CoexVnet" -ResourceGroupName $resgrp.ResourceGroupName -Location $location -AddressPrefix "10.200.0.0/16" ``` - Add subnets. + Add two subnets named **App** and **GatewaySubnet**. 
```azurepowershell-interactive Add-AzVirtualNetworkSubnetConfig -Name "App" -VirtualNetwork $vnet -AddressPrefix "10.200.1.0/24" Add-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet -AddressPrefix "10.200.255.0/24" ``` - Save the VNet configuration. + Save the virtual network configuration. ```azurepowershell-interactive $vnet = Set-AzVirtualNetwork -VirtualNetwork $vnet ```-4. <a name="vpngw"></a>Next, create your Site-to-Site VPN gateway. For more information about the VPN gateway configuration, see [Configure a VNet with a Site-to-Site connection](../vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md). The GatewaySku is only supported for *VpnGw1*, *VpnGw2*, *VpnGw3*, *Standard*, and *HighPerformance* VPN gateways. ExpressRoute-VPN Gateway coexist configurations are not supported on the Basic SKU. The VpnType must be *RouteBased*. +4. <a name="vpngw"></a>Next, create your site-to-site VPN gateway. For more information about the VPN gateway configuration, see [Configure a VNet with a site-to-site connection](../vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md). The GatewaySku is only supported for *VpnGw1*, *VpnGw2*, *VpnGw3*, *Standard*, and *HighPerformance* VPN gateways. ExpressRoute-VPN Gateway coexist configurations aren't supported on the Basic SKU. The VpnType must be **RouteBased**. ```azurepowershell-interactive $gwSubnet = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet This procedure walks you through creating a VNet and Site-to-Site and ExpressRou New-AzVirtualNetworkGateway -Name "VPNGateway" -ResourceGroupName $resgrp.ResourceGroupName -Location $location -IpConfigurations $gwConfig -GatewayType "Vpn" -VpnType "RouteBased" -GatewaySku "VpnGw1" ``` - Azure VPN gateway supports BGP routing protocol. You can specify ASN (AS Number) for that Virtual Network by adding the -Asn switch in the following command. Not specifying that parameter will default to AS number 65515. + The Azure VPN gateway supports BGP routing protocol. You can specify ASN (AS Number) for the virtual network by adding the `-Asn` flag in the following command. Not specifying the `Asn` parameter defaults to the AS number to **65515**. ```azurepowershell-interactive- $azureVpn = New-AzVirtualNetworkGateway -Name "VPNGateway" -ResourceGroupName $resgrp.ResourceGroupName -Location $location -IpConfigurations $gwConfig -GatewayType "Vpn" -VpnType "RouteBased" -GatewaySku "VpnGw1" -Asn $VNetASN + $azureVpn = New-AzVirtualNetworkGateway -Name "VPNGateway" -ResourceGroupName $resgrp.ResourceGroupName -Location $location -IpConfigurations $gwConfig -GatewayType "Vpn" -VpnType "RouteBased" -GatewaySku "VpnGw1" ``` > [!NOTE]- > For coexisting gateways, you must use the default ASN of 65515. See [limits and limitations](#limits-and-limitations). + > For coexisting gateways, you must use the default ASN of 65515. For more information, see [limits and limitations](#limits-and-limitations). > - You can find the BGP peering IP and the AS number that Azure uses for the VPN gateway in $azureVpn.BgpSettings.BgpPeeringAddress and $azureVpn.BgpSettings.Asn. For more information, see [Configure BGP](../vpn-gateway/vpn-gateway-bgp-resource-manager-ps.md) for Azure VPN gateway. + You can find the BGP peering IP and the AS number that Azure uses for the VPN gateway by running `$azureVpn.BgpSettings.BgpPeeringAddress` and `$azureVpn.BgpSettings.Asn`. 
For more information, see [Configure BGP](../vpn-gateway/vpn-gateway-bgp-resource-manager-ps.md) for Azure VPN gateway. + 5. Create a local site VPN gateway entity. This command doesn't configure your on-premises VPN gateway. Rather, it allows you to provide the local gateway settings, such as the public IP and the on-premises address space, so that the Azure VPN gateway can connect to it. If your local VPN device only supports static routing, you can configure the static routes in the following way: This procedure walks you through creating a VNet and Site-to-Site and ExpressRou $localVpn = New-AzLocalNetworkGateway -Name "LocalVPNGateway" -ResourceGroupName $resgrp.ResourceGroupName -Location $location -GatewayIpAddress *<Public IP>* -AddressPrefix $MyLocalNetworkAddress ``` - If your local VPN device supports the BGP and you want to enable dynamic routing, you need to know the BGP peering IP and the AS number that your local VPN device uses. + If your local VPN device supports BGP and you want to enable dynamic routing, you need to know the BGP peering IP and the AS number of your local VPN device. ```azurepowershell-interactive $localVPNPublicIP = "<Public IP>" This procedure walks you through creating a VNet and Site-to-Site and ExpressRou ``` 6. Configure your local VPN device to connect to the new Azure VPN gateway. For more information about VPN device configuration, see [VPN Device Configuration](../vpn-gateway/vpn-gateway-about-vpn-devices.md). -7. Link the Site-to-Site VPN gateway on Azure to the local gateway. +7. Link the site-to-site VPN gateway on Azure to the local gateway. ```azurepowershell-interactive $azureVpn = Get-AzVirtualNetworkGateway -Name "VPNGateway" -ResourceGroupName $resgrp.ResourceGroupName This procedure walks you through creating a VNet and Site-to-Site and ExpressRou ``` -8. If you are connecting to an existing ExpressRoute circuit, skip steps 8 & 9 and, jump to step 10. Configure ExpressRoute circuits. For more information about configuring ExpressRoute circuit, see [create an ExpressRoute circuit](expressroute-howto-circuit-arm.md). +8. If you're connecting to an existing ExpressRoute circuit, skip steps 8 & 9, and jump to step 10. Configure ExpressRoute circuits. For more information about configuring an ExpressRoute circuit, see [create an ExpressRoute circuit](expressroute-howto-circuit-arm.md). 9. Configure Azure private peering over the ExpressRoute circuit. For more information about configuring Azure private peering over the ExpressRoute circuit, see [configure peering](expressroute-howto-routing-arm.md) This procedure walks you through creating a VNet and Site-to-Site and ExpressRou New-AzVirtualNetworkGatewayConnection -Name "ERConnection" -ResourceGroupName $resgrp.ResourceGroupName -Location $location -VirtualNetworkGateway1 $gw -PeerId $ckt.Id -ConnectionType ExpressRoute ``` -## <a name="add"></a>To configure coexisting connections for an already existing VNet -If you have a virtual network that has only one virtual network gateway (let's say, Site-to-Site VPN gateway) and you want to add another gateway of a different type (let's say, ExpressRoute gateway), check the gateway subnet size. If the gateway subnet is /27 or larger, you can skip the steps below and follow the steps in the previous section to add either a Site-to-Site VPN gateway or an ExpressRoute gateway. If the gateway subnet is /28 or /29, you have to first delete the virtual network gateway and increase the gateway subnet size.
The steps in this section show you how to do that. +#### [Existing virtual network with a gateway](#tab/existing-virtual-network) -The cmdlets that you use for this configuration may be slightly different than what you might be familiar with. Be sure to use the cmdlets specified in these instructions. +If you have a virtual network that has only one virtual network gateway and you want to add another gateway of a different type, first check the gateway subnet size. If the gateway subnet is /27 or larger, you can skip the steps in this section and follow the steps in the previous section to add either a site-to-site VPN gateway or an ExpressRoute gateway. If the gateway subnet is /28 or /29, you have to first delete the virtual network gateway and increase the gateway subnet size. The steps in this section show you how to do that. -1. Delete the existing ExpressRoute or Site-to-Site VPN gateway. +1. Delete the existing ExpressRoute or site-to-site VPN gateway. ```azurepowershell-interactive Remove-AzVirtualNetworkGateway -Name <yourgatewayname> -ResourceGroupName <yourresourcegroup> The cmdlets that you use for this configuration may be slightly different than w > [!NOTE] > If you don't have enough IP addresses left in your virtual network to increase the gateway subnet size, you need to add more IP address space. > - > ```azurepowershell-interactive $vnet = Get-AzVirtualNetwork -Name <yourvnetname> -ResourceGroupName <yourresourcegroup> The cmdlets that you use for this configuration may be slightly different than w New-AzVirtualNetworkGatewayConnection -Name "ERConnection" -ResourceGroupName $resgrp.ResourceGroupName -Location $location -VirtualNetworkGateway1 $gw -PeerId $ckt.Id -ConnectionType ExpressRoute ``` ++ ## To add point-to-site configuration to the VPN gateway -You can follow the steps below to add Point-to-Site configuration to your VPN gateway in a coexistence setup. To upload the VPN root certificate, you must either install PowerShell locally to your computer, or use the Azure portal. +You can follow these steps to add a point-to-site configuration to your VPN gateway in a coexistence setup. To upload the VPN root certificate, you must either install PowerShell locally to your computer, or use the Azure portal. 1. Add VPN Client address pool. You can follow the steps below to add Point-to-Site configuration to your VPN ga $azureVpn = Get-AzVirtualNetworkGateway -Name "VPNGateway" -ResourceGroupName $resgrp.ResourceGroupName Set-AzVirtualNetworkGateway -VirtualNetworkGateway $azureVpn -VpnClientAddressPool "10.251.251.0/24" ```-2. Upload the VPN [root certificate](../vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md#Certificates) to Azure for your VPN gateway. In this example, it's assumed that the root certificate is stored in the local machine where the following PowerShell cmdlets are run and that you are running PowerShell locally. You can also upload the certificate using the Azure portal. +2. Upload the VPN [root certificate](../vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md#Certificates) to Azure for your VPN gateway. In this example, we assume the root certificate gets stored in the local machine where the following PowerShell cmdlets run and that you're running PowerShell locally. You can also upload the certificate using the Azure portal. 
```powershell $p2sCertFullName = "RootErVpnCoexP2S.cer" You can follow the steps below to add Point-to-Site configuration to your VPN ga For more information on Point-to-Site VPN, see [Configure a Point-to-Site connection](../vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md). ## To enable transit routing between ExpressRoute and Azure VPN-If you want to enable connectivity between one of your local network that is connected to ExpressRoute and another of your local network that is connected to a site-to-site VPN connection, you'll need to set up [Azure Route Server](../route-server/expressroute-vpn-support.md). +If you want to enable connectivity between one of your local networks that is connected to ExpressRoute and another of your local networks that is connected to a site-to-site VPN connection, you need to set up [Azure Route Server](../route-server/expressroute-vpn-support.md). ## Next steps+ For more information about ExpressRoute, see the [ExpressRoute FAQ](expressroute-faqs.md). |
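As a quick sanity check after the coexistence steps above, the following minimal Azure PowerShell sketch lists the gateways and connections in the resource group; the resource group name is a placeholder and the output properties assume the standard Az.Network objects.

```azurepowershell-interactive
# Minimal sketch: confirm that a VPN gateway and an ExpressRoute gateway coexist in the
# same resource group and that their connections are up. The resource group is a placeholder.
$rgName = "ErVpnCoex"

# Expect one gateway with GatewayType 'Vpn' and one with GatewayType 'ExpressRoute'.
Get-AzVirtualNetworkGateway -ResourceGroupName $rgName |
    Select-Object Name, GatewayType, VpnType, ProvisioningState

# Lists the site-to-site (IPsec) and ExpressRoute connections attached to the gateways.
Get-AzVirtualNetworkGatewayConnection -ResourceGroupName $rgName |
    Select-Object Name, ConnectionType, ConnectionStatus
```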
expressroute | Expressroute Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-prerequisites.md | Title: 'Azure ExpressRoute: Prerequisites' description: This page provides a list of requirements to be met before you can order an Azure ExpressRoute circuit. It includes a checklist. - Previously updated : 09/18/2019 Last updated : 06/15/2023 -- + # ExpressRoute prerequisites & checklist+ To connect to Microsoft cloud services using ExpressRoute, you need to verify that the following requirements listed in the following sections have been met. [!INCLUDE [expressroute-office365-include](../../includes/expressroute-office365-include.md)] ## Azure account+ * A valid and active Microsoft Azure account. This account is required to set up the ExpressRoute circuit. ExpressRoute circuits are resources within Azure subscriptions. An Azure subscription is a requirement even if connectivity is limited to non-Azure Microsoft cloud services, such as Microsoft 365. * An active Microsoft 365 subscription (if using Microsoft 365 services). For more information, see the Microsoft 365 specific requirements section of this article. ## Connectivity provider * You can work with an [ExpressRoute connectivity partner](expressroute-locations.md#partners) to connect to the Microsoft cloud. You can set up a connection between your on-premises network and Microsoft in [three ways](expressroute-introduction.md).-* If your provider is not an ExpressRoute connectivity partner, you can still connect to the Microsoft cloud through a [cloud exchange provider](expressroute-locations.md#connectivity-through-exchange-providers). +* If your provider isn't an ExpressRoute connectivity partner, you can still connect to the Microsoft cloud through a [cloud exchange provider](expressroute-locations.md#connectivity-through-exchange-providers). ## Network requirements+ * **Redundancy at each peering location**: Microsoft requires redundant BGP sessions to be set up between Microsoft's routers and the peering routers on each ExpressRoute circuit (even when you have just [one physical connection to a cloud exchange](expressroute-faqs.md#onep2plink)). * **Redundancy for Disaster Recovery**: Microsoft strongly recommends you set up at least two ExpressRoute circuits in different peering locations to avoid a single point of failure. * **Routing**: depending on how you connect to the Microsoft Cloud, you or your provider needs to set up and manage the BGP sessions for [routing domains](expressroute-circuit-peerings.md). Some Ethernet connectivity providers or cloud exchange providers may offer BGP management as a value-add service.-* **NAT**: Microsoft only accepts public IP addresses through Microsoft peering. If you are using private IP addresses in your on-premises network, you or your provider needs to translate the private IP addresses to the public IP addresses [using the NAT](expressroute-nat.md). +* **NAT**: Microsoft only accepts public IP addresses through Microsoft peering. If you're using private IP addresses in your on-premises network, you or your provider needs to translate the private IP addresses to the public IP addresses [using the NAT](expressroute-nat.md). * **QoS**: Skype for Business has various services (for example; voice, video, text) that require differentiated QoS treatment. You and your provider should follow the [QoS requirements](expressroute-qos.md). 
* **Network Security**: consider [network security](/azure/cloud-adoption-framework/reference/networking-vdc) when connecting to the Microsoft Cloud via ExpressRoute. ## Microsoft 365+ If you plan to enable Microsoft 365 on ExpressRoute, review the following documents for more information about Microsoft 365 requirements. * [Azure ExpressRoute for Microsoft 365](/microsoft-365/enterprise/azure-expressroute) If you plan to enable Microsoft 365 on ExpressRoute, review the following docume * ExpressRoute on Office 365 advanced training videos ## Next steps+ * For more information about ExpressRoute, see the [ExpressRoute FAQ](expressroute-faqs.md). * Find an ExpressRoute connectivity provider. See [ExpressRoute partners and peering locations](expressroute-locations.md).+* Review [Azure Well-architected Framework for ExpressRoute](/azure/well-architected/services/networking/azure-expressroute) to learn about best practices for designing and implementing ExpressRoute. * Refer to requirements for [Routing](expressroute-routing.md), [NAT](expressroute-nat.md), and [QoS](expressroute-qos.md). * Configure your ExpressRoute connection. * [Create an ExpressRoute circuit](expressroute-howto-circuit-arm.md) |
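Because the prerequisites above hinge on an active Azure subscription, a small Azure PowerShell sketch like the following can confirm the subscription state and register the resource provider that ExpressRoute circuits are created under; this is an illustrative assumption, not a step from the article.

```azurepowershell-interactive
# Minimal sketch: confirm the subscription is active and register the Microsoft.Network
# resource provider, which ExpressRoute circuits are created under.
Get-AzSubscription | Select-Object Name, Id, State

Register-AzResourceProvider -ProviderNamespace Microsoft.Network
```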
expressroute | Expressroute Troubleshooting Expressroute Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-troubleshooting-expressroute-overview.md | Title: 'Verify Azure ExpressRoute connectivity - troubleshooting guide' description: This article provides instructions on troubleshooting and validating end-to-end connectivity of an ExpressRoute circuit. - Previously updated : 01/07/2022 Last updated : 06/15/2023 - + # Verify ExpressRoute connectivity -This article helps you verify and troubleshoot Azure ExpressRoute connectivity. ExpressRoute extends an on-premises network into the Microsoft Cloud over a private connection that's commonly facilitated by a connectivity provider. ExpressRoute connectivity traditionally involves three distinct network zones: +This article helps you verify and troubleshoot Azure ExpressRoute connectivity. ExpressRoute extends an on-premises network into the Microsoft Cloud over a private connection commonly facilitated by a connectivity provider. ExpressRoute connectivity traditionally involves three distinct network zones: - Customer network - Provider network - Microsoft datacenter > [!NOTE]-> In the ExpressRoute direct connectivity model (offered at a bandwidth of 10/100 Gbps), customers can directly connect to the port for Microsoft Enterprise Edge (MSEE) routers. The direct connectivity model includes only customer and Microsoft network zones. +> In the ExpressRoute Direct connectivity model, you can directly connect to the port for Microsoft Enterprise Edge (MSEE) routers. The direct connectivity model includes only yours and Microsoft network zones. This article helps you identify if and where a connectivity issue exists. You can then seek support from the appropriate team to resolve the issue. In the preceding diagram, the numbers indicate key network points: At times, this article references these network points by their associated number. -Depending on the ExpressRoute connectivity model, network points 3 and 4 might be switches (layer 2 devices) or routers (layer 3 devices). The ExpressRoute connectivity models are cloud exchange co-location, point-to-point Ethernet connection, or any-to-any (IPVPN). +Depending on the ExpressRoute connectivity model, network points 3 and 4 might be switches (layer 2 devices) or routers (layer 3 devices). The ExpressRoute connectivity models are cloud exchange colocation, point-to-point Ethernet connection, or any-to-any (IPVPN). In the direct connectivity model, there are no network points 3 and 4. Instead, CEs (2) are directly connected to MSEEs via dark fiber. -If the cloud exchange co-location, point-to-point Ethernet, or direct connectivity model is used, CEs (2) establish Border Gateway Protocol (BGP) peering with MSEEs (5). +If the cloud exchange colocation, point-to-point Ethernet, or direct connectivity model is used, CEs (2) establish Border Gateway Protocol (BGP) peering with MSEEs (5). If the any-to-any (IPVPN) connectivity model is used, PE-MSEEs (4) establish BGP peering with MSEEs (5). PE-MSEEs propagate the routes received from Microsoft back to the customer network via the IPVPN service provider network. To list all the ExpressRoute circuits in a resource group, use the following com Get-AzExpressRouteCircuit -ResourceGroupName "Test-ER-RG" ``` ->[!TIP] ->If you're looking for the name of a resource group, you can get it by using the `Get-AzResourceGroup` command to list all the resource groups in your subscription. 
+> [!TIP] +> If you're looking for the name of a resource group, you can get it by using the `Get-AzResourceGroup` command to list all the resource groups in your subscription. To select a particular ExpressRoute circuit in a resource group, use the following command: ServiceProviderProvisioningState : Provisioned ``` > [!NOTE]-> After you configure an ExpressRoute circuit, if **Circuit status** is stuck in a **Not enabled** status, contact [Microsoft Support][Support]. If **Provider status** is stuck in **Not provisioned** status, contact your service provider. +> After you configure an ExpressRoute circuit, if the **Circuit status** is stuck in a **Not enabled** status, contact [Microsoft Support][Support]. If the **Provider status** is stuck in a **Not provisioned** status, contact your service provider. ## Validate peering configuration -After the service provider has completed provisioning the ExpressRoute circuit, multiple routing configurations based on external BGP (eBGP) can be created over the ExpressRoute circuit between CEs/MSEE-PEs (2/4) and MSEEs (5). Each ExpressRoute circuit can have one or both of the following: +After the service provider has completed provisioning the ExpressRoute circuit, multiple routing configurations based on external BGP (eBGP) can be created over the ExpressRoute circuit between CEs/MSEE-PEs (2/4) and MSEEs (5). Each ExpressRoute circuit can have one or both of the following peering configurations: - Azure private peering: traffic to private virtual networks in Azure - Microsoft peering: traffic to public endpoints of platform as a service (PaaS) and software as a service (SaaS) $ckt = Get-AzExpressRouteCircuit -ResourceGroupName "Test-ER-RG" -Name "Test-ER- Get-AzExpressRouteCircuitPeeringConfig -Name "MicrosoftPeering" -ExpressRouteCircuit $ckt ``` -If a peering isn't configured, you'll get an error message. Here's an example response when the stated peering (Azure public peering in this case) isn't configured within the circuit: +If a peering isn't configured, you get an error message. Here's an example response when the stated peering (Azure public peering in this case) isn't configured within the circuit: ```azurepowershell Get-AzExpressRouteCircuitPeeringConfig : Sequence contains no matching element StatusCode: 400 ## Test private peering connectivity -Test your private peering connectivity by counting packets arriving at and leaving the Microsoft edge of your ExpressRoute circuit on the MSEE devices. This diagnostic tool works by applying an ACL to the MSEE to count the number of packets that hit specific ACL rules. Using this tool will allow you to confirm connectivity by answering questions such as: +Test your private peering connectivity by counting packets arriving at and leaving the Microsoft edge of your ExpressRoute circuit on the MSEE devices. This diagnostic tool works by applying an ACL to the MSEE to count the number of packets that hit specific ACL rules. Using this tool allows you to confirm connectivity by answering questions such as: * Are my packets getting to Azure? * Are they getting back to on-premises? Test your private peering connectivity by counting packets arriving at and leavi ### Interpret results -When your results are ready, you'll have two sets of them for the primary and secondary MSEE devices. Review the number of matches in and out, and use the following scenarios to interpret the results: +When your results are ready, you have two sets of them for the primary and secondary MSEE devices. 
Review the number of matches in and out, and use the following scenarios to interpret the results: * **You see packet matches sent and received on both MSEEs**: This result indicates healthy traffic inbound to and outbound from the MSEEs on your circuit. If loss is occurring either on-premises or in Azure, it's happening downstream from the MSEEs. * **If you're testing PsPing from on-premises to Azure, received results show matches, but sent results show no matches**: This result indicates that traffic is coming in to Azure but isn't returning to on-premises. Check for return-path routing issues. For example, are you advertising the appropriate prefixes to Azure? Is a user-defined route (UDR) overriding prefixes? * **If you're testing PsPing from Azure to on-premises, sent results show matches, but received results show no matches**: This result indicates that traffic is coming in to on-premises but isn't returning to Azure. Work with your provider to find out why traffic isn't being routed to Azure via your ExpressRoute circuit. * **One MSEE shows no matches, but the other shows good matches**: This result indicates that one MSEE isn't receiving or passing any traffic. It might be offline (for example, BGP/ARP is down). -Your test results for each MSEE device will look like the following example: +Your test results for each MSEE device look like the following example: ``` src 10.0.0.0 dst 20.0.0.0 dstport 3389 (received): 120 matches This test result has the following properties: ## Verify availability of the virtual network gateway -The ExpressRoute virtual network gateway facilitates the management and control plane connectivity to private link services and private IPs deployed to an Azure virtual network. The virtual network gateway infrastructure is managed by Microsoft and sometimes undergoes maintenance. +The ExpressRoute virtual network gateway facilitates the management and control plane connectivity to private link services and private IPs deployed to an Azure virtual network. Microsoft manages the virtual network gateway infrastructure, which sometimes undergoes maintenance. -During a maintenance period, performance of the virtual network gateway might be reduced. To troubleshoot connectivity issues to the virtual network and reactively detect if recent maintenance events reduced capacity for the virtual network gateway: +During a maintenance period, performance of the virtual network gateway might be reduced. To troubleshoot connectivity issues to the virtual network and see if a recent maintenance event reduced capacity on the virtual network gateway, follow these steps: 1. Select **Diagnose and solve problems** from your ExpressRoute circuit in the Azure portal. During a maintenance period, performance of the virtual network gateway might be :::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/gateway-result.png" alt-text="Screenshot of the diagnostic results."::: - If maintenance on your virtual network gateway occurred during a period when you experienced packet loss or latency, it's possible that the reduced capacity of the gateway contributed to connectivity issues you're experiencing with the target virtual network. Follow the recommended steps. To support a higher network throughput and avoid connectivity issues during future maintenance events, consider upgrading the [virtual network gateway SKU](expressroute-about-virtual-network-gateways.md#gwsku).
+ If maintenance was done on your virtual network gateway during a period when you experienced packet loss or latency, it's possible that the reduced capacity of the gateway contributed to the connectivity issues you're experiencing for the targeted virtual network. Follow the recommended steps. To support a higher network throughput and avoid connectivity issues during future maintenance events, consider upgrading the [virtual network gateway SKU](expressroute-about-virtual-network-gateways.md#gwsku). ## Next steps For more information or help, check out the following links: <!--Image References--> [1]: ./media/expressroute-troubleshooting-expressroute-overview/expressroute-logical-diagram.png "Diagram that shows logical ExpressRoute connectivity and connections between a customer network, a provider network, and a Microsoft datacenter."-[2]: ./media/expressroute-troubleshooting-expressroute-overview/portal-all-resources.png "All resources icon" [3]: ./media/expressroute-troubleshooting-expressroute-overview/portal-overview.png "Overview icon" [4]: ./media/expressroute-troubleshooting-expressroute-overview/portal-circuit-status.png "Screenshot that shows an example of ExpressRoute essentials listed in the Azure portal."-[5]: ./media/expressroute-troubleshooting-expressroute-overview/portal-private-peering.png "Screenshot that shows an example ExpressRoute peerings listed in the Azure portal." +[5]: ./media/expressroute-troubleshooting-expressroute-overview/portal-private-peering.png "Screenshot that shows an example ExpressRoute peering listed in the Azure portal." <!--Link References--> [Support]: https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade For more information or help, check out the following links: [CreatePeering]: ./expressroute-howto-routing-portal-resource-manager.md [ARP]: ./expressroute-troubleshooting-arp-resource-manager.md [HA]: ./designing-for-high-availability-with-expressroute.md-[DR-Pvt]: ./designing-for-disaster-recovery-with-expressroute-privatepeering.md |
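To go one layer deeper than the portal checks described above, a minimal Azure PowerShell sketch such as the following pulls the ARP table and learned routes for Azure private peering on both circuit devices; the resource group and circuit names are placeholders in the style of the article's examples.

```azurepowershell-interactive
# Minimal sketch: dump the ARP table (layer 2) and the learned routes (layer 3) for
# Azure private peering on both devices. Resource group and circuit names are placeholders.
$rg  = "Test-ER-RG"
$ckt = "Test-ER-Circuit"

foreach ($path in @("Primary", "Secondary")) {
    Get-AzExpressRouteCircuitARPTable -ResourceGroupName $rg -ExpressRouteCircuitName $ckt `
        -PeeringType AzurePrivatePeering -DevicePath $path

    Get-AzExpressRouteCircuitRouteTable -ResourceGroupName $rg -ExpressRouteCircuitName $ckt `
        -PeeringType AzurePrivatePeering -DevicePath $path
}
```

An empty ARP table usually points at a layer-2 issue with the provider, while missing routes point at BGP configuration on either side.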
frontdoor | Front Door Ddos | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-ddos.md | Front Door is a large scaled, globally distributed service. We have many custome * You can create [custom WAF rules](../web-application-firewall/afds/waf-front-door-custom-rules.md) to automatically block and rate limit HTTP or HTTPS attacks that have known signatures. * Using the bot protection managed rule set provides protection against known bad bots. For more information, see [Configuring bot protection](../web-application-firewall/afds/waf-front-door-policy-configure-bot-protection.md). +Refer to [Application DDoS protection](../web-application-firewall/shared/application-ddos-protection.md) for guidance on how to use Azure WAF to protect against DDoS attacks. + ## Protect VNet origins Enable [Azure DDoS Protection](../ddos-protection/ddos-protection-overview.md) on the origin VNet to protect your public IPs against DDoS attacks. DDoS Protection customers receive extra benefits including cost protection, SLA guarantee, and access to experts from the DDoS Rapid Response Team for immediate help during an attack. |
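The rate-limiting guidance above can be expressed with the Az.FrontDoor PowerShell module; the following is a minimal sketch under that assumption, with the resource group, policy name, and threshold chosen purely for illustration.

```azurepowershell-interactive
# Minimal sketch (assumes the Az.FrontDoor module): a WAF policy with one custom
# rate-limit rule that blocks clients exceeding 1,000 requests per minute.
# Resource group, policy name, and threshold are placeholder values.
$match = New-AzFrontDoorWafMatchConditionObject -MatchVariable RequestUri `
    -OperatorProperty Contains -MatchValue "/"

$rateLimitRule = New-AzFrontDoorWafCustomRuleObject -Name "RateLimitAll" `
    -RuleType RateLimitRule -MatchCondition $match -Action Block -Priority 100 `
    -RateLimitDurationInMinutes 1 -RateLimitThreshold 1000

New-AzFrontDoorWafPolicy -ResourceGroupName "myResourceGroup" -Name "ratelimitpolicy" `
    -Customrule $rateLimitRule -Mode Prevention -EnabledState Enabled
```

Attach the resulting policy to your Front Door profile as described in the linked WAF articles.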
frontdoor | Front Door Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-overview.md | Modernize your internet first applications on Azure with Cloud Native experience * Secure applications with built-in layer 3-4 DDoS protection, seamlessly attached [Web Application Firewall (WAF)](../web-application-firewall/afds/afds-overview.md), and [Azure DNS to protect your domains](how-to-configure-endpoints.md). -* Protect your apps from malicious actors with Bot manager rules based on Microsoft's own Threat Intelligence. +* Protect your applications against layer 7 DDoS attacks using WAF. For more information, see [Application DDoS protection](../web-application-firewall/shared/application-ddos-protection.md). ++* Protect your applications from malicious actors with Bot manager rules based on Microsoft's own Threat Intelligence. * Privately connect to your backend behind Azure Front Door with [Private Link](private-link.md) and embrace a zero-trust access model. |
governance | Effects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/effects.md | Title: Understand how effects work description: Azure Policy definitions have various effects that determine how compliance is managed and reported. Previously updated : 02/22/2023 Last updated : 06/15/2023 + # Understand Azure Policy effects Each policy definition in Azure Policy has a single effect. That effect determines what happens when These effects are currently supported in a policy definition: ## Interchanging effects -Sometimes multiple effects can be valid for a given policy definition. Parameters are often used to specify allowed effect values so that a single definition can be more versatile. However, it's important to note that not all effects are interchangeable. Resource properties and logic in the policy rule can determine whether a certain effect is considered valid to the policy definition. For example, policy definitions with effect **AuditIfNotExists** require additional details in the policy rule that aren't required for policies with effect **Audit**. The effects also behave differently. **Audit** policies will assess a resource's compliance based on its own properties, while **AuditIfNotExists** policies will assess a resource's compliance based on a child or extension resource's properties. +Sometimes multiple effects can be valid for a given policy definition. Parameters are often used to specify allowed effect values so that a single definition can be more versatile. However, it's important to note that not all effects are interchangeable. Resource properties and logic in the policy rule can determine whether a certain effect is considered valid to the policy definition. For example, policy definitions with effect **AuditIfNotExists** require other details in the policy rule that aren't required for policies with effect **Audit**. The effects also behave differently. **Audit** policies will assess a resource's compliance based on its own properties, while **AuditIfNotExists** policies will assess a resource's compliance based on a child or extension resource's properties. -Below is some general guidance around interchangeable effects: +The following list is some general guidance around interchangeable effects: - **Audit**, **Deny**, and either **Modify** or **Append** are often interchangeable. - **AuditIfNotExists** and **DeployIfNotExists** are often interchangeable. - **Manual** isn't interchangeable. manages the evaluation and outcome and reports the results back to Azure Policy. - **denyAction** is evaluated last. After the Resource Provider returns a success code on a Resource Manager mode request,-**AuditIfNotExists** and **DeployIfNotExists** evaluate to determine whether additional compliance +**AuditIfNotExists** and **DeployIfNotExists** evaluate to determine whether more compliance logging or action is required. -Additionally, `PATCH` requests that only modify `tags` related fields restricts policy evaluation to +`PATCH` requests that only modify `tags` related fields restricts policy evaluation to policies containing conditions that inspect `tags` related fields. ## Append -Append is used to add additional fields to the requested resource during creation or update. A +Append is used to add more fields to the requested resource during creation or update. A common example is specifying allowed IPs for a storage resource. 
> [!IMPORTANT] Append evaluates before the request gets processed by a Resource Provider during updating of a resource. Append adds fields to the resource when the **if** condition of the policy rule is met. If the append effect would override a value in the original request with a different value, then it acts as a deny effect and rejects the request. To append a new value to an existing-array, use the **\[\*\]** version of the alias. +array, use the `[*]` version of the alias. When a policy definition using the append effect is run as part of an evaluation cycle, it doesn't make changes to resources that already exist. Instead, it marks any resource that meets the **if** take either a single **field/value** pair or multiples. Refer to ### Append examples -Example 1: Single **field/value** pair using a non-**\[\*\]** +Example 1: Single **field/value** pair using a non-`[*]` [alias](definition-structure.md#aliases) with an array **value** to set IP rules on a storage-account. When the non-**\[\*\]** alias is an array, the effect appends the **value** as the entire +account. When the non-`[*]` alias is an array, the effect appends the **value** as the entire array. If the array already exists, a deny event occurs from the conflict. ```json array. If the array already exists, a deny event occurs from the conflict. } ``` -Example 2: Single **field/value** pair using an **\[\*\]** [alias](definition-structure.md#aliases) -with an array **value** to set IP rules on a storage account. By using the **\[\*\]** alias, the +Example 2: Single **field/value** pair using an `[*]` [alias](definition-structure.md#aliases) +with an array **value** to set IP rules on a storage account. When you use the `[*]` alias, the effect appends the **value** to a potentially pre-existing array. If the array doesn't exist yet, it's created. resource is updated. ### Audit properties -For a Resource Manager mode, the audit effect doesn't have any additional properties for use in the +For a Resource Manager mode, the audit effect doesn't have any other properties for use in the **then** condition of the policy definition. For a Resource Provider mode of `Microsoft.Kubernetes.Data`, the audit effect has the following-additional subproperties of **details**. Use of `templateInfo` is required for new or updated policy +subproperties of **details**. Use of `templateInfo` is required for new or updated policy definitions as `constraintTemplate` is deprecated. - **templateInfo** (required) definitions as `constraintTemplate` is deprecated. - The CRD implementation of the Constraint template. Uses parameters passed via **values** as `{{ .Values.<valuename> }}`. In example 2 below, these values are `{{ .Values.excludedNamespaces }}` and `{{ .Values.allowedContainerImagesRegex }}`.+- **constraintTemplate** (deprecated) + - Can't be used with `templateInfo`. + - Must be replaced with `templateInfo` when creating or updating a policy definition. + - The Constraint template CustomResourceDefinition (CRD) that defines new Constraints. The + template defines the Rego logic, the Constraint schema, and the Constraint parameters that are + passed via **values** from Azure Policy. For more information, go to [Gatekeeper constraints](https://open-policy-agent.github.io/gatekeeper/website/docs/howto/#constraints). +- **constraintInfo** (optional) + - Can't be used with `constraint`, `constraintTemplate`, `apiGroups`, or `kinds`. + - If `constraintInfo` isn't provided, the constraint can be generated from `templateInfo` and policy. 
+ - **sourceType** (required) + - Defines the type of source for the constraint. Allowed values: _PublicURL_ or _Base64Encoded_. + - If _PublicURL_, paired with property `url` to provide location of the constraint. The location must be publicly accessible. ++ > [!WARNING] + > Don't use SAS URIs or tokens in `url` or anything else that could expose a secret. - **namespaces** (optional) - An _array_ of [Kubernetes namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) definitions as `constraintTemplate` is deprecated. - **values** (optional) - Defines any parameters and values to pass to the Constraint. Each value must exist in the Constraint template CRD.-- **constraintTemplate** (deprecated)- - Can't be used with `templateInfo`. - - Must be replaced with `templateInfo` when creating or updating a policy definition. - - The Constraint template CustomResourceDefinition (CRD) that defines new Constraints. The - template defines the Rego logic, the Constraint schema, and the Constraint parameters that are - passed via **values** from Azure Policy. ### Audit example non-compliant. ### Deny properties -For a Resource Manager mode, the deny effect doesn't have any additional properties for use in the +For a Resource Manager mode, the deny effect doesn't have any more properties for use in the **then** condition of the policy definition. For a Resource Provider mode of `Microsoft.Kubernetes.Data`, the deny effect has the following-additional subproperties of **details**. Use of `templateInfo` is required for new or updated policy +subproperties of **details**. Use of `templateInfo` is required for new or updated policy definitions as `constraintTemplate` is deprecated. - **templateInfo** (required) definitions as `constraintTemplate` is deprecated. - The CRD implementation of the Constraint template. Uses parameters passed via **values** as `{{ .Values.<valuename> }}`. In example 2 below, these values are `{{ .Values.excludedNamespaces }}` and `{{ .Values.allowedContainerImagesRegex }}`.+- **constraintTemplate** (deprecated) + - Can't be used with `templateInfo`. + - Must be replaced with `templateInfo` when creating or updating a policy definition. + - The Constraint template CustomResourceDefinition (CRD) that defines new Constraints. The + template defines the Rego logic, the Constraint schema, and the Constraint parameters that are + passed via **values** from Azure Policy. For more information, go to [Gatekeeper constraints](https://open-policy-agent.github.io/gatekeeper/website/docs/howto/#constraints). +- **constraintInfo** (optional) + - Can't be used with `constraint`, `constraintTemplate`, `apiGroups`, or `kinds`. + - If `constraintInfo` isn't provided, the constraint can be generated from `templateInfo` and policy. + - **sourceType** (required) + - Defines the type of source for the constraint. Allowed values: _PublicURL_ or _Base64Encoded_. + - If _PublicURL_, paired with property `url` to provide location of the constraint. The location must be publicly accessible. ++ > [!WARNING] + > Don't use SAS URIs or tokens in `url` or anything else that could expose a secret. - **namespaces** (optional) - An _array_ of [Kubernetes namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) definitions as `constraintTemplate` is deprecated. - **values** (optional) - Defines any parameters and values to pass to the Constraint. 
Each value must exist in the Constraint template CRD.-- **constraintTemplate** (deprecated)- - Can't be used with `templateInfo`. - - Must be replaced with `templateInfo` when creating or updating a policy definition. - - The Constraint template CustomResourceDefinition (CRD) that defines new Constraints. The - template defines the Rego logic, the Constraint schema, and the Constraint parameters that are - passed via **values** from Azure Policy. It's recommended to use the newer `templateInfo` to - replace `constraintTemplate`. ### Deny example location of the Constraint template to use in Kubernetes to limit the allowed co } } ```+ ## DenyAction (preview) -`DenyAction` is used to block requests on intended action to resources. The only supported action today is `DELETE`. This effect will help prevent any accidental deletion of critical resources. +`DenyAction` is used to block requests on intended action to resources. The only supported action today is `DELETE`. This effect helps prevent any accidental deletion of critical resources. ### DenyAction evaluation assignment. > Under preview, assignments with `denyAction` effect will show a `Not Started` compliance state. #### Subscription deletion+ Policy won't block removal of resources that happens during a subscription deletion. #### Resource group deletion+ Policy will evaluate resources that support location and tags against `DenyAction` policies during a resource group deletion. Only policies that have the `cascadeBehaviors` set to `deny` in the policy rule will block a resource group deletion. Policy won't block removal of resources that don't support location and tags nor any policy with `mode:all`. #### Cascade deletion+ Cascade deletion occurs when deleting of a parent resource is implicitly deletes all its child resources. Policy won't block removal of child resources when a delete action targets the parent resources. For example, `Microsoft.Insights/diagnosticSettings` is a child resource of `Microsoft.Storage/storageaccounts`. If a `denyAction` policy targets `Microsoft.Insights/diagnosticSettings`, a delete call to the diagnostic setting (child) will fail, but a delete to the storage account (parent) will implicitly delete the diagnostic setting (child). [!INCLUDE [policy-denyAction](../../../../includes/azure-policy-deny-action.md)] The **details** property of the DenyAction effect has all the subproperties that - Default value is `deny`. ### DenyAction example-Example: Deny any delete calls targeting database accounts that have a tag environment that equals prod. Since cascade behavior is set to deny, block any DELETE call that targets a resource group with an applicable database account. ++Example: Deny any delete calls targeting database accounts that have a tag environment that equals prod. Since cascade behavior is set to deny, block any `DELETE` call that targets a resource group with an applicable database account. ```json { related resources to match and the template deployment to execute. ### DeployIfNotExists example -Example: Evaluates SQL Server databases to determine whether transparentDataEncryption is enabled. +Example: Evaluates SQL Server databases to determine whether `transparentDataEncryption` is enabled. If not, then a deployment to enable is executed. ```json The following operations are supported by Modify: > [!IMPORTANT] > If you're managing tags, it's recommended to use Modify instead of Append as Modify provides-> additional operation types and the ability to remediate existing resources. 
However, Append is +> more operation types and the ability to remediate existing resources. However, Append is > recommended if you aren't able to create a managed identity or Modify doesn't yet support the > alias for the resource property. properties. Operation determines what the remediation task does to the tags, fie tag is altered, and value defines the new setting for that tag. The following example makes the following tag changes: -- Sets the `environment` tag to "Test", even if it already exists with a different value.+- Sets the `environment` tag to "Test" even if it already exists with a different value. - Removes the tag `TempResource`. - Sets the `Dept` tag to the policy parameter _DeptName_ configured on the policy assignment. with a parameterized value: ``` Example 3: Ensure that a storage account doesn't allow blob public access, the Modify operation-is applied only when evaluating requests with API version greater or equals to '2019-04-01': +is applied only when evaluating requests with API version greater or equals to `2019-04-01`: ```json "then": { different scopes. Each of these assignments is also likely to have a different e condition and effect for each policy is independently evaluated. For example: - Policy 1- - Restricts resource location to 'westus' + - Restricts resource location to `westus` - Assigned to subscription A - Deny effect - Policy 2- - Restricts resource location to 'eastus' + - Restricts resource location to `eastus` - Assigned to resource group B in subscription A - Audit effect This setup would result in the following outcome: -- Any resource already in resource group B in 'eastus' is compliant to policy 2 and non-compliant to+- Any resource already in resource group B in `eastus` is compliant to policy 2 and non-compliant to policy 1-- Any resource already in resource group B not in 'eastus' is non-compliant to policy 2 and- non-compliant to policy 1 if not in 'westus' -- Any new resource in subscription A not in 'westus' is denied by policy 1-- Any new resource in subscription A and resource group B in 'westus' is created and non-compliant+- Any resource already in resource group B not in `eastus` is non-compliant to policy 2 and + non-compliant to policy 1 if not in `westus` +- Any new resource in subscription A not in `westus` is denied by policy 1 +- Any new resource in subscription A and resource group B in `westus` is created and non-compliant on policy 2 If both policy 1 and policy 2 had effect of deny, the situation changes to: -- Any resource already in resource group B not in 'eastus' is non-compliant to policy 2-- Any resource already in resource group B not in 'westus' is non-compliant to policy 1-- Any new resource in subscription A not in 'westus' is denied by policy 1+- Any resource already in resource group B not in `eastus` is non-compliant to policy 2 +- Any resource already in resource group B not in `westus` is non-compliant to policy 1 +- Any new resource in subscription A not in `westus` is denied by policy 1 - Any new resource in resource group B of subscription A is denied Each assignment is individually evaluated. As such, there isn't an opportunity for a resource to |
governance | Scope | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/scope.md | Title: Understand scope in Azure Policy description: Describes the concept of scope in Azure Resource Manager and how it applies to Azure Policy to control which resources Azure Policy evaluates. Previously updated : 08/17/2021 Last updated : 06/15/2023 + # Understand scope in Azure Policy There are many settings that determine which resources are capable of being evaluated and which resources are evaluated by Azure Policy. The primary concept for these controls is _scope_. Scope in Azure Policy is based on how scope works in Azure Resource Manager. For a high-level overview, see [Scope in Azure Resource Manager](../../../azure-resource-manager/management/overview.md#understand-scope).+ This article explains the importance of _scope_ in Azure Policy and its related objects and properties. properties. The first instance scope used by Azure Policy is when a policy definition is created. The definition may be saved in either a management group or a subscription. The location determines the scope to which the initiative or policy can be assigned. Resources must be within the resource hierarchy of-the definition location to target for assignment. +the definition location to target for assignment. The [resources covered by Azure Policy](../overview.md#resources-covered-by-azure-policy) describes how policies are evaluated. If the definition location is a: The following table is a comparison of the scope options: |**Resource Manager object** | - | - | ✔ | |**Requires modifying policy assignment object** | ✔ | ✔ | - | -So how do you choose whether to use an exclusion or exemption? Typically exclusions are recommended to permanently bypass evaluation for a broad scope like a test environment which doesn't require the same level of governance. Exemptions are recommended for time-bound or more specific scenarios where a resource or resource hierarchy should still be tracked and would otherwise be evaluated, but there is a specific reason it should not be assessed for compliance. +So how do you choose whether to use an exclusion or exemption? Typically exclusions are recommended to permanently bypass evaluation for a broad scope like a test environment that doesn't require the same level of governance. Exemptions are recommended for time-bound or more specific scenarios where a resource or resource hierarchy should still be tracked and would otherwise be evaluated, but there's a specific reason it shouldn't be assessed for compliance. ## Next steps |
governance | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/overview.md | Title: Overview of Azure Policy description: Azure Policy is a service in Azure, that you use to create, assign and, manage policy definitions in your Azure environment. Previously updated : 12/02/2022 Last updated : 06/15/2023 + # What is Azure Policy? Azure Policy helps to enforce organizational standards and to assess compliance at-scale. Through in their environment. Azure RBAC focuses on managing user [actions](../../role-based-access-control/resource-provider-operations.md) at different scopes. If-control of an action is required based on user information, then Azure RBAC is the correct tool to use. Even if an individual has access to perform an action, if the result is a non-compliant resource, Azure Policy still -blocks the create or update. +control of an action is required based on user information, then Azure RBAC is the correct tool to use. Even if an individual has access to perform an action, if the result is a non-compliant resource, Azure Policy still blocks the create or update. The combination of Azure RBAC and Azure Policy provides full scope control in Azure. permissions. If none of the built-in roles have the permissions required, create a [custom role](../../role-based-access-control/custom-roles.md). -Azure Policy operations can have a significant impact on your Azure environment. Only the minimum set of -permissions necessary to perform a task should be assigned and these permissions should not be granted -to users who do not need them. +Azure Policy operations can have a significant effect on your Azure environment. Only the minimum set of permissions necessary to perform a task should be assigned and these permissions shouldn't be granted to users who don't need them. > [!NOTE] > The managed identity of a **deployIfNotExists** or **modify** policy assignment needs enough to users who do not need them. To create, edit, or delete Azure Virtual Network Manager dynamic group policies, you need: - Read and write Azure RBAC permissions to the underlying policy-- Azure RBAC permissions to join the network group (Note: Classic Admin authorization is not supported)+- Azure RBAC permissions to join the network group (Classic Admin authorization isn't supported). Specifically, the required resource provider permission is `Microsoft.Network/networkManagers/networkGroups/join/action`. Specifically, the required resource provider permission is `Microsoft.Network/ne ### Resources covered by Azure Policy -Azure Policy evaluates all Azure resources at or below subscription-level, including Arc enabled -resources. For certain resource providers such as -[Machine configuration](../machine-configuration/overview.md), -[Azure Kubernetes Service](../../aks/intro-kubernetes.md), and -[Azure Key Vault](../../key-vault/general/overview.md), there's a deeper integration for managing -settings and objects. To find out more, see -[Resource Provider modes](./concepts/definition-structure.md). +Although a policy can be assigned at the management group level, _only_ resources at the subscription or resource group level are evaluated. ++For certain resource providers such as [Machine configuration](../machine-configuration/overview.md), [Azure Kubernetes Service](../../aks/intro-kubernetes.md), and [Azure Key Vault](../../key-vault/general/overview.md), there's a deeper integration for managing settings and objects. 
To find out more, go to [Resource Provider modes](./concepts/definition-structure.md#resource-provider-modes). ### Recommendations for managing policies In Azure Policy, we offer several built-in policies that are available by defaul specified by the deploy request. - **Not allowed resource types** (Deny): Prevents a list of resource types from being deployed. -To implement these policy definitions (both built-in and custom definitions), you'll need to assign +To implement these policy definitions (both built-in and custom definitions), you need to assign them. You can assign any of these policies through the Azure portal, PowerShell, or Azure CLI. Policy evaluation happens with several different actions, such as policy assignment or policy on the child management group or subscription level. If any assignment results i denied, then the only way to allow the resource is to modify the denying assignment. Policy assignments always use the latest state of their assigned definition or initiative when-evaluating resources. If a policy definition that is already assigned is changed all existing +evaluating resources. If a policy definition that's already assigned is changed, all existing assignments of that definition will use the updated logic when evaluating. For more information on setting assignments through the portal, see [Create a policy assignment to |
healthcare-apis | Dicom Change Feed Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-change-feed-overview.md | -Client applications can read these logs at any time, either in streaming, or in batch mode. The Change Feed enables you to build efficient and scalable solutions that process change events that occur in your DICOM service. +Client applications can read these logs at any time in batches of any size. The Change Feed enables you to build efficient and scalable solutions that process change events that occur in your DICOM service. You can process these change events asynchronously, incrementally or in-full. Any number of client applications can independently read the Change Feed, in parallel, and at their own pace. +As of v2 of the API, the Change Feed can be queried for a particular time window. + Make sure to specify the version as part of the URL when making requests. More information can be found in the [API Versioning for DICOM service Documentation](api-versioning-dicom-service.md). ## API Design -The API exposes two `GET` endpoints for interacting with the Change Feed. A typical flow for consuming the Change Feed is [provided below](#example-usage-flow). +The API exposes two `GET` endpoints for interacting with the Change Feed. A typical flow for consuming the Change Feed is [provided below](#usage). Verb | Route | Returns | Description : | :-- | :- | :-GET | /changefeed | JSON Array | [Read the Change Feed](#read-change-feed) -GET | /changefeed/latest | JSON Object | [Read the latest entry in the Change Feed](#get-latest-change-feed-item) +GET | /changefeed | JSON Array | [Read the Change Feed](#change-feed) +GET | /changefeed/latest | JSON Object | [Read the latest entry in the Change Feed](#latest-change-feed) ### Object model Field | Type | Description : | :-- | :-Sequence | int | The sequence ID that can be used for paging (via offset) or anchoring +Sequence | long | The unique ID per change event StudyInstanceUid | string | The study instance UID SeriesInstanceUid | string | The series instance UID SopInstanceUid | string | The sop instance UID current | This instance is the current version. replaced | This instance has been replaced by a new version. deleted | This instance has been deleted and is no longer available in the service. -### Read Change Feed +## Change Feed ++The Change Feed resource is a collection of events that have occurred within the DICOM server. ++### Version 2 ++#### Request +```http +GET /changefeed?startTime={datetime}&endtime={datetime}&offset={int}&limit={int}&includemetadata={bool} HTTP/1.1 +Accept: application/json +Content-Type: application/json +``` -**Route**: /changefeed?offset={int}&limit={int}&includemetadata={**true**|false} +#### Response ```json [ { deleted | This instance has been deleted and is no longer available in the serv "Timestamp": "2020-03-04T01:03:08.4834Z", "State": "current|replaced|deleted", "Metadata": {- "actual": "metadata" + // DICOM JSON } }, { deleted | This instance has been deleted and is no longer available in the serv "Timestamp": "2020-03-05T07:13:16.4834Z", "State": "current|replaced|deleted", "Metadata": {- "actual": "metadata" + // DICOM JSON }- } - ... + }, + //... 
] ``` #### Parameters -Name | Type | Description -:-- | : | : -offset | int | The number of records to skip before the values to return -limit | int | The number of records to return (default: 10, min: 1, max: 100) -includemetadata | bool | Whether or not to include the metadata (default: true) +Name | Type | Description | Default | Min | Max | +:-- | :- | :- | : | :-- | :-- | +offset | long | The number of events to skip from the beginning of the result set | `0` | `0` | | +limit | int | The maximum number of events to return | `100` | `1` | `200` | +startTime | DateTime | The inclusive start time for change events | `"0001-01-01T00:00:00Z"` | `"0001-01-01T00:00:00Z"` | `"9999-12-31T23:59:59.9999998Z"`| +endTime | DateTime | The exclusive end time for change events | `"9999-12-31T23:59:59.9999999Z"` | `"0001-01-01T00:00:00.0000001"` | `"9999-12-31T23:59:59.9999999Z"` | +includeMetadata | bool | Indicates whether or not to include the DICOM metadata | `true` | | | ++### Version 1 ++#### Request +```http +GET /changefeed?offset={int}&limit={int}&includemetadata={bool} HTTP/1.1 +Accept: application/json +Content-Type: application/json +``` -### Get latest Change Feed item +#### Response +```json +[ + { + "Sequence": 1, + "StudyInstanceUid": "{uid}", + "SeriesInstanceUid": "{uid}", + "SopInstanceUid": "{uid}", + "Action": "create|delete", + "Timestamp": "2020-03-04T01:03:08.4834Z", + "State": "current|replaced|deleted", + "Metadata": { + // DICOM JSON + } + }, + { + "Sequence": 2, + "StudyInstanceUid": "{uid}", + "SeriesInstanceUid": "{uid}", + "SopInstanceUid": "{uid}", + "Action": "create|delete", + "Timestamp": "2020-03-05T07:13:16.4834Z", + "State": "current|replaced|deleted", + "Metadata": { + // DICOM JSON + } + }, + // ... +] +``` -**Route**: /changefeed/latest?includemetadata={**true**|false} +#### Parameters +Name | Type | Description | Default | Min | Max | +:-- | :- | :- | : | :-- | :-- | +offset | long | The exclusive starting sequence number for events | `0` | `0` | | +limit | int | The maximum value of the sequence number relative to the offset. For example, if the offset is 10 and the limit is 5, then the maximum sequence number returned will be 15. | `10` | `1` | `100` | +includeMetadata | bool | Indicates whether or not to include the DICOM metadata | `true` | | | ++## Latest Change Feed +The latest Change Feed resource represents the latest event that has occurred within the DICOM Server. ++### Request +```http +GET /changefeed/latest?includemetadata={bool} HTTP/1.1 +Accept: application/json +Content-Type: application/json +``` +### Response ```json { "Sequence": 2, includemetadata | bool | Whether or not to include the metadata (default: true) "Timestamp": "2020-03-05T07:13:16.4834Z", "State": "current|replaced|deleted", "Metadata": {- "actual": "metadata" + //DICOM JSON } } ``` -#### Parameters +### Parameters -Name | Type | Description -:-- | : | : -includemetadata | bool | Whether or not to include the metadata (default: true) +Name | Type | Description | Default | +:-- | : | :- | : | +includeMetadata | bool | Indicates whether or not to include the metadata | `true` | ## Usage -### Example usage flow --Below is the usage flow for an example application that does other processing on the instances within DICOM service. --1. Application that wants to monitor the Change Feed starts. -2. It determines if there's a current state that it should start with: - * If it has a state, it uses the offset (sequence) stored. 
- * If it has never started and wants to start from beginning, it uses `offset=0`. - * If it only wants to process from now, it queries `/changefeed/latest` to obtain the last sequence. -3. It queries the Change Feed with the given offset `/changefeed?offset={offset}` -4. If there are entries: - * It performs extra processing. - * It updates its current state. - * It starts again above at step 2. -5. If there are no entries, it sleeps for a configured amount of time and starts back at step 2. +### User application ++#### Version 2 ++1. An application regularly queries the Change Feed on some time interval + * For example, if querying every hour, a query for the Change Feed may look like `/changefeed?startTime=2023-05-10T16:00:00Z&endTime=2023-05-10T17:00:00Z` + * If starting from the beginning, the Change Feed query may omit the `startTime` to read all of the changes up to, but excluding, the `endTime` + * For example, `/changefeed?endTime=2023-05-10T17:00:00Z` +2. Based on the `limit` (if provided), an application continues to query for additional pages of change events if the number of returned events is equal to the `limit` (or default) by updating the offset on each subsequent query + * For example, if the `limit` is `100`, and 100 events are returned, then the subsequent query would include `offset=100` to fetch the next "page" of results. The below queries demonstrate the pattern: + * `/changefeed?offset=0&limit=100&startTime=2023-05-10T16:00:00Z&endTime=2023-05-10T17:00:00Z` + * `/changefeed?offset=100&limit=100&startTime=2023-05-10T16:00:00Z&endTime=2023-05-10T17:00:00Z` + * `/changefeed?offset=200&limit=100&startTime=2023-05-10T16:00:00Z&endTime=2023-05-10T17:00:00Z` + * If fewer events than the `limit` are returned, then the application can assume that there are no more results within the time range ++#### Version 1 ++1. An application determines from which sequence number it wishes to start reading change events: + * To start from the first event, the application should use `offset=0` + * To start from the latest event, the application should specify the `offset` parameter with the value of `Sequence` from the latest change event using the `/changefeed/latest` resource +2. On some regular polling interval, the application performs the following actions: + * Fetches the latest sequence number from the `/changefeed/latest` endpoint + * Fetches the next set of changes for processing by querying the change feed with the current offset + * For example, if the application has currently processed up to sequence number 15 and it only wants to process at most 5 events at once, then it should use the URL `/changefeed?offset=15&limit=5` + * Processes any entries returned by the `/changefeed` resource + * Updates its current sequence number to either: + 1. The maximum sequence number returned by the `/changefeed` resource + 2. The `offset` + `limit` if no change events were returned from the `/changefeed` resource, but the latest sequence number returned by `/changefeed/latest` is greater than the current sequence number used for `offset` ### Other potential usage patterns |
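The v2 time-window polling flow described above can be sketched in a few lines of Python. This is a minimal illustration using the `requests` package: `SERVICE_URL`, `get_token()`, and the example time window are placeholders (assumptions, not values from the DICOM service documentation), and retry and error handling are omitted.

```python
# Minimal sketch of the v2 polling pattern: page through a time window with `offset`.
import requests

SERVICE_URL = "https://<workspace>-<dicom-service>.dicom.azurehealthcareapis.com"  # placeholder

def get_token() -> str:
    # Placeholder: acquire an Azure AD access token for the DICOM service here.
    raise NotImplementedError

def read_change_feed(start_time: str, end_time: str, limit: int = 100):
    """Yield every change event in [start_time, end_time) by paging with `offset`."""
    headers = {"Authorization": f"Bearer {get_token()}", "Accept": "application/json"}
    offset = 0
    while True:
        response = requests.get(
            f"{SERVICE_URL}/v2/changefeed",
            params={
                "startTime": start_time,
                "endTime": end_time,
                "offset": offset,
                "limit": limit,
                "includeMetadata": "true",
            },
            headers=headers,
        )
        response.raise_for_status()
        events = response.json()
        yield from events
        if len(events) < limit:  # fewer events than `limit` means there are no more pages
            break
        offset += limit  # fetch the next "page" of results

# Example: process one hour of changes.
for event in read_change_feed("2023-05-10T16:00:00Z", "2023-05-10T17:00:00Z"):
    print(event["Sequence"], event["Action"], event["SopInstanceUid"])
```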
healthcare-apis | Events Faqs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-faqs.md | Title: Frequently asked questions about Events - Azure Health Data Services -description: This article provides answers to the frequently asked questions about Events. +description: Learn about the frequently asked questions about Events. Previously updated : 04/04/2022 Last updated : 06/16/2022 -### Can I use Events with a different FHIR/DICOM service other than the Azure Health Data Services FHIR/DICOM service? +## Can I use Events with a different FHIR/DICOM service other than the Azure Health Data Services FHIR/DICOM service? No. The Azure Health Data Services Events feature only currently supports the Azure Health Data Services FHIR and DICOM services. -### What FHIR resource events does Events support? +## What FHIR resource events does Events support? Events are generated from the following FHIR service types: Events are generated from the following FHIR service types: For more information about the FHIR service delete types, see [FHIR REST API capabilities for Azure Health Data Services FHIR service](../../healthcare-apis/fhir/fhir-rest-api-capabilities.md). -### Does Events support FHIR bundles? +## Does Events support FHIR bundles? Yes. The Events feature is designed to emit notifications of data changes at the FHIR resource level. Events support these [FHIR bundle types](http://hl7.org/fhir/R4/valueset-bundle- > [!NOTE] > Events are not sent in the sequence of the data operations in the FHIR bundle. -### What DICOM image events does Events support? +## What DICOM image events does Events support? Events are generated from the following DICOM service types: Events are generated from the following DICOM service types: - **DicomImageUpdated** - The event emitted after a DICOM image gets updated successfully. -### What is the payload of an Events message? +## What is the payload of an Events message? For a detailed description of the Events message structure and both required and nonrequired elements, see [Events troubleshooting guide](events-troubleshooting-guide.md). -### What is the throughput for the Events messages? +## What is the throughput for the Events messages? The throughput of the FHIR or DICOM service and the Event Grid govern the throughput of FHIR and DICOM events. When a request made to the FHIR service is successful, it returns a 2xx HTTP status code. It also generates a FHIR resource or DICOM image changing event. The current limitation is 5,000 events/second per a workspace for all FHIR or DICOM service instances in it. -### How am I charged for using Events? +## How am I charged for using Events? There are no extra charges for using [Azure Health Data Services Events](https://azure.microsoft.com/pricing/details/health-data-services/). However, applicable charges for the [Event Grid](https://azure.microsoft.com/pricing/details/event-grid/) are assessed against your Azure subscription. -### How do I subscribe to multiple FHIR and/or DICOM services in the same workspace separately? +## How do I subscribe to multiple FHIR and/or DICOM services in the same workspace separately? You can use the Event Grid filtering feature. There are unique identifiers in the event message payload to differentiate different accounts and workspaces. You can find a global unique identifier for workspace in the `source` field, which is the Azure Resource ID. 
You can locate the unique FHIR account name in that workspace in the `data.resourceFhirAccount` field. You can locate the unique DICOM account name in that workspace in the `data.serviceHostName` field. When you create a subscription, you can use the filtering operators to select the events you want to get in that subscription. :::image type="content" source="media\event-grid\event-grid-filters.png" alt-text="Screenshot of the Event Grid filters tab." lightbox="media\event-grid\event-grid-filters.png"::: -### Can I use the same subscriber for multiple workspaces, FHIR accounts, or DICOM accounts? +## Can I use the same subscriber for multiple workspaces, FHIR accounts, or DICOM accounts? Yes. We recommend that you use different subscribers for each individual FHIR or DICOM account to process in isolated scopes. -### Is Event Grid compatible with HIPAA and HITRUST compliance obligations? +## Is Event Grid compatible with HIPAA and HITRUST compliance obligations? Yes. Event Grid supports customer's Health Insurance Portability and Accountability Act (HIPAA) and Health Information Trust Alliance (HITRUST) obligations. For more information, see [Microsoft Azure Compliance Offerings](https://azure.microsoft.com/resources/microsoft-azure-compliance-offerings/). -### What is the expected time to receive an Events message? +## What is the expected time to receive an Events message? On average, you should receive your event message within one second after a successful HTTP request. 99.99% of the event messages should be delivered within five seconds unless the limitation of either the FHIR service, DICOM service, or [Event Grid](../../event-grid/quotas-limits.md) has been met. -### Is it possible to receive duplicate Events messages? +## Is it possible to receive duplicate Events messages? Yes. The Event Grid guarantees at least one Events message delivery with its push mode. There may be chances that the event delivery request returns with a transient failure status code for random reasons. In this situation, the Event Grid considers that as a delivery failure and resends the Events message. For more information, see [Azure Event Grid delivery and retry](../../event-grid/delivery-and-retry.md). Generally, we recommend that developers ensure idempotency for the event subscriber. The event ID or the combination of all fields in the `data` property of the message content are unique per each event. The developer can rely on them to deduplicate. -### More frequently asked questions +## More frequently asked questions [FAQs about the Azure Health Data Services](../healthcare-apis-faqs.md) |
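As a minimal illustration of the deduplication and filtering guidance in this FAQ, the following Python sketch processes each event at most once, keyed on its event ID. The exact payload shape depends on the event delivery schema you configure; the field names here (`id`, `source`, `eventType`, `data.resourceFhirAccount`) follow the fields called out above, and the in-memory set is an assumption for illustration only; a production subscriber would persist processed IDs durably.

```python
# Sketch of an idempotent subscriber: skip duplicates, then route on workspace/account fields.
processed_ids: set[str] = set()

def handle_event(event: dict) -> None:
    """Process a single event at most once, keyed on its unique event ID."""
    event_id = event["id"]
    if event_id in processed_ids:
        # Event Grid delivers at least once, so duplicate deliveries are expected and skipped.
        return
    processed_ids.add(event_id)

    # Route on the workspace and account identifiers described in this FAQ.
    workspace = event.get("source", "")  # Azure resource ID of the workspace
    fhir_account = event.get("data", {}).get("resourceFhirAccount")
    print(f"Handling {event.get('eventType')} from {workspace} (FHIR account: {fhir_account})")
```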
healthcare-apis | Concepts Machine Learning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/concepts-machine-learning.md | In this article, we explore using the MedTech service and the Azure Machine Lear ## The MedTech service and Azure Machine Learning Service reference architecture -The MedTech service enables IoT devices to seamless integration with FHIR services. This reference architecture is designed to accelerate adoption of Internet of Things (IoT) projects. This solution uses Azure Databricks for the Machine Learning (ML) compute. However, Azure Machine Learning Services with Kubernetes or a partner ML solution could fit into the Machine Learning Scoring Environment. +The MedTech service enables IoT devices to seamlessly integrate with FHIR services. This reference architecture is designed to accelerate adoption of Internet of Things (IoT) projects. This solution uses Azure Databricks for the Machine Learning (ML) compute. However, Azure Machine Learning Services with Kubernetes or a partner ML solution could fit into the Machine Learning Scoring Environment. The four line colors show the different parts of the data journey. The four line colors show the different parts of the data journey. :::image type="content" source="media/concepts-machine-learning/iot-connector-machine-learning.png" alt-text="Screenshot of the MedTech service and Machine Learning Service reference architecture." lightbox="media/concepts-machine-learning/iot-connector-machine-learning.png"::: -**Data ingest – Steps 1 through 5** +## Data ingest: Steps 1 - 5 1. Data from IoT device or via device gateway sent to Azure IoT Hub/Azure IoT Edge. 2. Data from Azure IoT Edge sent to Azure IoT Hub. 3. Copy of raw IoT device data sent to a secure storage environment for device administration.-4. PHI IoT payload moves from Azure IoT Hub to the MedTech service. The MedTech service icon represents multiple Azure services. -5. Three parts to number 5: - a. The MedTech service requests Patient resource from the FHIR service. - b. The FHIR service sends Patient resource back to the MedTech service. - c. IoT Patient Observation is record in the FHIR service. +4. IoT payload moves from Azure IoT Hub to the MedTech service. The MedTech service icon represents multiple Azure services. +5. Three parts to number five: + 1. The MedTech service requests Patient resource from the FHIR service. + 2. The FHIR service sends Patient resource back to the MedTech service. + 3. IoT Patient Observation is recorded in the FHIR service. -**Machine Learning and AI Data Route – Steps 6 through 11** +## Machine Learning and AI Data Route: Steps 6 - 11 6. Normalized ungrouped data stream sent to an Azure Function (ML Input). 7. Azure Function (ML Input) requests Patient resource to merge with IoT payload.-8. IoT payload with PHI is sent to an event hub for distribution to Machine Learning compute and storage. -9. PHI IoT payload is sent to Azure Data Lake Storage Gen 2 for scoring observation over longer time windows. -10. PHI IoT payload is sent to Azure Databricks for windowing, data fitting, and data scoring. -11. The Azure Databricks requests more patient data from data lake as needed. a. Azure Databricks also sends a copy of the scored data to the data lake. +8. IoT payload is sent to an event hub for distribution to Machine Learning compute and storage. +9. IoT payload is sent to Azure Data Lake Storage Gen 2 for scoring observation over longer time windows. +10. 
IoT payload is sent to Azure Databricks for windowing, data fitting, and data scoring. +11. The Azure Databricks requests more patient data from data lake as needed. + 1. Azure Databricks also sends a copy of the scored data to the data lake. -**Notification and Care Coordination – Steps 12 - 18** +## Notification and Care Coordination: Steps 12 - 18 **Hot path** 12. Azure Databricks sends a payload to an Azure Function (ML Output).-13. RiskAssessment and/or Flag resource submitted to FHIR service. a. For each observation window, a RiskAssessment resource is submitted to the FHIR service. b. For observation windows where the risk assessment is outside the acceptable range a Flag resource should also be submitted to the FHIR service. +13. RiskAssessment and/or Flag resource submitted to FHIR service. + 1. For each observation window, a RiskAssessment resource is submitted to the FHIR service. + 2. For observation windows where the risk assessment is outside the acceptable range, a Flag resource should also be submitted to the FHIR service. 14. Scored data sent to data repository for routing to appropriate care team. Azure SQL Server is the data repository used in this design because of its native interaction with Power BI. 15. Power BI Dashboard is updated with Risk Assessment output in under 15 minutes. For an overview of the MedTech service, see > [!div class="nextstepaction"] > [What is the MedTech service?](overview.md) +To learn about the MedTech service device message data transformation, see ++> [!div class="nextstepaction"] +> [Understand the MedTech service device data processing stages](overview-of-device-data-processing-stages.md) ++To learn about methods for deploying the MedTech service, see ++> [!div class="nextstepaction"] +> [Choose a deployment method for the MedTech service](deploy-new-choose.md) + FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | Concepts Power Bi | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/concepts-power-bi.md | -In this article, we explore using the MedTech service and Microsoft Power Business Intelligence (BI). +In this article, we explore using the MedTech service and Microsoft Power Business Intelligence (Power BI). ## The MedTech service and Power BI reference architecture This reference architecture shows the basic components of using the Microsoft cloud services to enable Power BI on top of Internet of Things (IoT) and FHIR data. -You can even embed Power BI dashboards inside the Microsoft Teams client to further enhance care team coordination. For more information on embedding Power BI in Teams, visit [here](/power-bi/collaborate-share/service-embed-report-microsoft-teams). +You can even embed Power BI dashboards inside the Microsoft Teams client to further enhance care team coordination. For more information on embedding Power BI in Teams, see [Embed Power BI content in Microsoft Teams](/power-bi/collaborate-share/service-embed-report-microsoft-teams). :::image type="content" source="media/concepts-power-bi/iot-connector-power-bi.png" alt-text="Screenshot of the MedTech service and Power BI." lightbox="media/concepts-power-bi/iot-connector-power-bi.png"::: For an overview of the MedTech service, see > [!div class="nextstepaction"] > [What is the MedTech service?](overview.md) +To learn about the MedTech service device message data transformation, see ++> [!div class="nextstepaction"] +> [Understand the MedTech service device data processing stages](overview-of-device-data-processing-stages.md) ++To learn about methods for deploying the MedTech service, see ++> [!div class="nextstepaction"] +> [Choose a deployment method for the MedTech service](deploy-new-choose.md) + FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | Concepts Teams | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/concepts-teams.md | When combining the MedTech service, the FHIR service, and Teams, you can enable The diagram is a MedTech service to Teams notifications conceptual architecture for enabling the MedTech service, the FHIR service, and the Teams Patient App. -You can even embed Power BI Dashboards inside the Microsoft Teams client. For more information on embedding Power BI in Microsoft Team visit [here](/power-bi/collaborate-share/service-embed-report-microsoft-teams). +You can even embed Power BI Dashboards inside the Microsoft Teams client. For more information on embedding Power BI in Microsoft Teams, see [Embed Power BI content in Microsoft Teams](/power-bi/collaborate-share/service-embed-report-microsoft-teams). :::image type="content" source="media/concepts-teams/iot-connector-teams.png" alt-text="Screenshot of the MedTech service and Teams." lightbox="media/concepts-teams/iot-connector-teams.png"::: For an overview of the MedTech service, see > [!div class="nextstepaction"] > [What is the MedTech service?](overview.md) +To learn about the MedTech service device message data transformation, see ++> [!div class="nextstepaction"] +> [Understand the MedTech service device data processing stages](overview-of-device-data-processing-stages.md) ++To learn about methods for deploying the MedTech service, see ++> [!div class="nextstepaction"] +> [Choose a deployment method for the MedTech service](deploy-new-choose.md) + FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | Frequently Asked Questions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/frequently-asked-questions.md | -### Where is the MedTech service available? +## Where is the MedTech service available? The MedTech service is available in these Azure regions: [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=health-data-services). -### Can I use the MedTech service with a different FHIR service other than the Azure Health Data Services FHIR service? +## Can I use the MedTech service with a different FHIR service other than the Azure Health Data Services FHIR service? No. The MedTech service currently only supports the Azure Health Data Services FHIR service for the persistence of transformed device data. The open-source version of the MedTech service supports the use of different FHIR services. To learn about the MedTech service open-source projects, see [Open-source projects](git-projects.md). -### What versions of FHIR does the MedTech service support? +## What versions of FHIR does the MedTech service support? The MedTech service supports the [HL7 FHIR® R4](https://www.hl7.org/implement/standards/product_brief.cfm?product_id=491) standard. -### Why do I have to provide device and FHIR destination mappings to the MedTech service? +## Why do I have to provide device and FHIR destination mappings to the MedTech service? The MedTech service requires device and FHIR destination mappings to perform normalization and transformation processes on device data. To learn how the MedTech service transforms device data into [FHIR Observations](https://www.hl7.org/fhir/observation.html), see [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md). -### Is JsonPathContent still supported by the MedTech service device mapping? +## Is JsonPathContent still supported by the MedTech service device mapping? Yes. JsonPathContent can be used as a template type within [CollectionContent](overview-of-device-mapping.md#collectioncontent). It's recommended that [CalculatedContent](how-to-use-calculatedcontent-templates.md) is used as it supports all of the features of JsonPathContent with extra support for more advanced features. -### How long does it take for device data to show up in the FHIR service? +## How long does it take for device data to show up in the FHIR service? The MedTech service buffers [FHIR Observations](https://www.hl7.org/fhir/observation.html) created during the transformation stage and provides near real-time processing. However, this buffer can potentially delay the persistence of FHIR Observations to the FHIR service up to ~five minutes. To learn how the MedTech service transforms device data into FHIR Observations, see [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md). -### Why are the device messages added to the event hub not showing up as FHIR Observations in the FHIR service? +## Why are the device messages added to the event hub not showing up as FHIR Observations in the FHIR service? > [!TIP] > Having access to MedTech service logs is essential for troubleshooting and assessing the overall health and performance of your MedTech service. 
The MedTech service buffers [FHIR Observations](https://www.hl7.org/fhir/observa \* Reference [Deploy the MedTech service using the Azure portal](deploy-manual-portal.md#configure-the-destination-tab) for a functional description of the MedTech service resolution types (**Create** or **Lookup**). -### Does the MedTech service perform backups of device messages? +## Does the MedTech service perform backups of device messages? No. The MedTech service doesn't back up the device messages that are sent to the event hub. The event hub owner controls the device message retention period within their event hub, which can be from one to 90 days. Event hubs can be deployed in [three different service tiers](../../event-hubs/event-hubs-quotas.md?source=recommendations#basic-vs-standard-vs-premium-vs-dedicated-tiers). Message retention limits are tier-dependent: Basic one day, Standard 1-7 days, Premium 90 days. If the MedTech service successfully processes the device data, it's persisted in the FHIR service, and the FHIR service backup policy applies. To learn more about event hub message retention, see [What is the maximum retention period for events?](/azure/event-hubs/event-hubs-faq#what-is-the-maximum-retention-period-for-events-) -### What are the subscription quota limits for the MedTech service? +## What are the subscription quota limits for the MedTech service? * (25) MedTech services per Azure subscription (not adjustable). * (10) MedTech services per Azure Health Data Services workspace (not adjustable). To learn more about event hub message retention, see [What is the maximum retent \* FHIR destination is a child resource of the MedTech service. -### Can I use the MedTech service with device messages from Apple®, Google®, or Fitbit® devices? +## Can I use the MedTech service with device messages from Apple®, Google®, or Fitbit® devices? Yes. The MedTech service supports device messages from all these vendors through the open-source version of the MedTech service. |
import-export | Storage Import Export Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-requirements.md | To prepare the hard drives using the WAImportExport tool, the following **64-bit ## Supported storage accounts +> [!Note] +> Classic storage accounts will not be supported starting **August 1, 2023**. + Azure Import/Export service supports the following types of storage accounts: - Standard General Purpose v2 storage accounts (recommended for most scenarios) |
internet-peering | Overview Peering Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/overview-peering-service.md | In the figure above each branch office of a global enterprise connects to the ne * Route analytics and statistics - Events for Border Gateway Protocol ([BGP](https://en.wikipedia.org/wiki/Border_Gateway_Protocol)) route anomalies (leak/hijack detection), and suboptimal routing. ## Peering Service partnership requirements+ * Connectivity to Microsoft Cloud at a location nearest to customer. A partner Service Provider will route user traffic to Microsoft edge closest to user. Similarly, on traffic towards the user, Microsoft will route traffic (using BGP tag) to the edge location closest to the user and Service Provider will deliver the traffic to the user. * Partner will maintain highly available, high throughput, and geo-redundant connectivity with Microsoft Global Network. * Partner can utilize their existing peering to support Peering Service if it meets the requirement. ## FAQ-For frequently asked questions, see [Peering Service - FAQ](service-faqs.yml). ++For frequently asked questions, see [Peering Service FAQ](service-faqs.yml). ## Next steps |
internet-peering | Walkthrough Communications Services Partner | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/walkthrough-communications-services-partner.md | Title: Azure Internet peering for Communications Services walkthrough -description: Azure Internet peering for Communications Services walkthrough. + Title: Internet peering for Communications Services walkthrough +description: Learn about Internet peering for Communications Services, its requirements, the steps to establish direct interconnect, and how to register and activate a prefix. Previously updated : 10/10/2022 Last updated : 06/15/2023 -# Azure Internet peering for Communications Services walkthrough +# Internet peering for Communications Services walkthrough -This section explains the steps a Communications Services Provider needs to follow to establish a Direct interconnect with Microsoft. +In this article, you learn the steps to establish a Direct interconnect between a Communications Services Provider and Microsoft. -**Communications Services Providers:** Communications Services Providers are the organizations which offer communication services (Communications, messaging, conferencing etc.) and are looking to integrate their communications services infrastructure (SBC/SIP Gateway etc.) with Azure Communication Services and Microsoft Teams. +**Communications Services Providers** are the organizations that offer communication services (messaging, conferencing, and other communications services) and want to integrate their communications services infrastructure (SBC, SIP gateways, and other infrastructure devices) with Azure Communication Services and Microsoft Teams. -Azure Internet peering support Communications Services Providers to establish direct interconnect with Microsoft at any of its edge sites (pop locations). The list of all the public edges sites is available in [PeeringDB](https://www.peeringdb.com/net/694). +Internet peering supports Communications Services Providers to establish direct interconnect with Microsoft at any of its edge sites (POP locations). The list of all the public edge sites is available in [PeeringDB](https://www.peeringdb.com/net/694). -The Azure Internet peering provides highly reliable and QoS (Quality of Service) enabled interconnect for Communications services to ensure high quality and performance centric services. +Internet peering provides highly reliable and QoS (Quality of Service) enabled interconnect for Communications Services to ensure high quality and performance centric services. ## Technical Requirements-The technical requirements to establish direct interconnect for Communication Services are as following: -- The Peer MUST provide own Autonomous System Number (ASN), which MUST be public.++To establish direct interconnect for Communication Services, follow these requirements: ++- The Peer MUST provide its own Autonomous System Number (ASN), which MUST be public. - The peer MUST have redundant Interconnect (PNI) at each interconnect location to ensure local redundancy.-- The Peer MUST have geo redundancy in place to ensure failover in event of site failures in region/ metro.-- The Peer MUST has the BGP sessions as Active- Active to ensure high availability and faster convergence and should not be provisioned as Primary and backup.+- The Peer MUST have geo redundancy in place to ensure failover in the event of site failures in region/metro. 
+- The Peer MUST have the BGP sessions as Active-Active to ensure high availability and faster convergence and shouldn't be provisioned as Primary and Backup. - The Peer MUST maintain a 1:1 ratio for Peer peering routers to peering circuits and no rate limiting is applied.-- The Peer MUST supply and advertise their own publicly routable IPv4 address space used by Peer’s communications service endpoints (e.g. SBC). +- The Peer MUST supply and advertise their own publicly routable IPv4 address space used by Peer's communications service endpoints (for example, SBC). - The Peer MUST supply detail of what class of traffic and endpoints are housed in each advertised subnet. -- The Peer MUST run BGP over Bi-directional Forwarding Detection (BFD) to facilitate sub second route convergence.+- The Peer MUST run BGP over Bidirectional Forwarding Detection (BFD) to facilitate sub second route convergence. - All communications infrastructure prefixes are registered in Azure portal and advertised with community string 8075:8007. - The Peer MUST NOT terminate peering on a device running a stateful firewall. -- Microsoft will configure all the interconnect links as LAG (link bundles) by default, so, peer MUST support LACP (Link Aggregation Control Protocol) on the interconnect links.--## Establishing Direct Interconnect with Microsoft for Communications Services. --To establish a direct interconnect using Azure Internet peering please follow the below steps: --**1. Associate Peer public ASN to the Azure Subscription:** --In case Peer already associated public ASN to Azure subscription, please ignore this step. --[Associate peer ASN to Azure subscription using the portal - Azure | Microsoft Docs](./howto-subscription-association-portal.md) +- Microsoft configures all the interconnect links as LAG (link bundles) by default, so, peer MUST support LACP (Link Aggregation Control Protocol) on the interconnect links. -The next step is to create a Direct peering connection for Peering Service. +## Establish Direct Interconnect with Microsoft for Communications Services -> [!NOTE] -> Once ASN association is approved, email us at peeringservice@microsoft.com with your ASN and subscription ID to associate your subscription with Communications services. +To establish a direct interconnect with Microsoft using Internet peering, follow these steps: -**2. Create Direct peering connection for Peering Service:** +1. **Associate Peer public ASN to the Azure Subscription:** [Associate peer ASN to Azure subscription using the Azure portal](./howto-subscription-association-portal.md). If the Peer has already associated a public ASN to Azure subscription, go to the next step. -Follow the instructions to [Create or modify a Direct peering using the portal](./howto-direct-portal.md) +2. **Create Direct peering connection for Peering Service:** [Create a Direct peering using the portal](./howto-direct-portal.md), and make sure you meet the high-availability requirement. In the **Configuration** tab of **Create a Peering**, select the following options: -Ensure it meets high-availability requirement. + | Setting | Value | + | | | + | Peering type | Select **Direct**. | + | Microsoft network | Select **8075 (with Voice)**. | + | SKU | Select **Premium Free**. | -Please ensure you are selecting following options on “Create a Peering” Page: + In **Direct Peering Connection**, select the following options: -Peering Type: **Direct** + | Setting | Value | + | | | + | Session Address provider | Select **Microsoft**. 
| 
 + | Use for Peering Services | Select **Enabled**. | -Microsoft Network: **8075 (with Voice)** + > [!NOTE] + > When activating Peering Service, ignore the following message: *Do not enable unless you have contacted peering@microsoft.com about becoming a MAPS provider.* -SKU: **Premium Free** +1. **Register your prefixes for Optimized Routing:** For optimized routing for your Communication services infrastructure prefixes, register all your prefixes with your peering interconnects. + Ensure that the registered prefixes are announced over the direct interconnects established in that location. If the same prefix is announced in multiple peering locations, it's sufficient to register them with just one of the peerings in order to retrieve the unique prefix keys after validation. -Under “Direct Peering Connection Page” select following options: + > [!NOTE] + > The Connection State of your peering connections must be **Active** before registering any prefixes. -Session Address provider: **Microsoft** +## Register the prefix -Use for Peering +1. If you're an Operator Connect Partner, you would be able to see the “Register Prefix” tab on the left panel of your peering resource page. -> [!NOTE] -> Ignore the following message while selecting for activating for Peering Services. -> *Do not enable unless you have contacted peering@microsoft.com about becoming a MAPS provider.* + :::image type="content" source="./media/walkthrough-communications-services-partner/registered-prefixes-under-direct-peering.png" alt-text="Screenshot of registered prefixes tab under a peering enabled for Peering Service." ::: -**3. Register your prefixes for Optimized Routing** --For optimized routing for your Communication services infrastructure prefixes, you should register all your prefixes with your peering interconnects. --Please ensure that the prefixes registered are being announced over the direct interconnects established in that location. -If the same prefix is announced in multiple peering locations, it is sufficient to register them with just one of the peerings in order to retrieve the unique prefix keys after validation. --> [!NOTE] -> The Connection State of your peering connections must be Active before registering any prefixes. --**Prefix Registration** +2. Register prefixes to access the activation keys. -1. If you are an Operator Connect Partner, you would be able to see the “Register Prefix” tab on the left panel of your peering resource page. + :::image type="content" source="./media/walkthrough-communications-services-partner/registered-prefixes-blade.png" alt-text="Screenshot of registered prefixes blade with a list of prefixes and keys." ::: - :::image type="content" source="media/registered-prefixes-under-direct-peering.png" alt-text="Screenshot of registered prefixes tab under a peering enabled for Peering Service." ::: + :::image type="content" source="./media/walkthrough-communications-services-partner/registered-prefix-example.png" alt-text="Screenshot showing a sample prefix being registered." ::: -2. Register prefixes to access the activation keys. + :::image type="content" source="./media/walkthrough-communications-services-partner/prefix-after-registration.png" alt-text="Screenshot of registered prefixes blade showing a new prefix added." ::: - :::image type="content" source="media/registered-prefixes-blade.png" alt-text="Screenshot of registered prefixes blade with a list of prefixes and keys." 
::: +## Activate the prefix - :::image type="content" source="media/registered-prefix-example.png" alt-text="Screenshot showing a sample prefix being registered." ::: +In the previous section, you registered the prefix and generated the prefix key. The prefix registration DOES NOT activate the prefix for optimized routing (and doesn't accept <\/24 prefixes). Prefix activation, alignment to the right OC partner, and appropriate interconnect location are requirements for optimized routing (to ensure cold potato routing). - :::image type="content" source="media/prefix-after-registration.png" alt-text="Screenshot of registered prefixes blade showing a new prefix added." ::: +In this section, you activate the prefix: -**Prefix Activation** +1. In the search box at the top of the portal, enter *peering service*. Select **Peering Services** in the search results. -In the previous steps, you registered the prefix and generated the prefix key. The prefix registration DOES NOT activate the prefix for optimized routing (and will not even accept <\/24 prefixes) and it requires prefix activation and alignment to the right partner (In this case the OC partner) and the appropriate interconnect location (to ensure cold potato routing). + :::image type="content" source="./media/walkthrough-communications-services-partner/peering-service-portal-search.png" alt-text="Screenshot shows how to search for Peering Service in the Azure portal."::: -Below are the steps to activate the prefix. +1. Select **+ Create** to create a new Peering Service connection. -1. Look for “Peering Services” resource + :::image type="content" source="./media/walkthrough-communications-services-partner/peering-service-list.png" alt-text="Screenshot shows the list of existing Peering Service connections in the Azure portal."::: - :::image type="content" source="media/peering-service-search.png" alt-text="Screenshot on searching for Peering Service on Azure portal." ::: - - :::image type="content" source="media/peering-service-list.png" alt-text="Screenshot of a list of existing peering services." ::: +1. In the **Basics** tab, enter or select your subscription, resource group, and Peering Service connection name. -2. Create a new Peering Service resource + :::image type="content" source="./media/walkthrough-communications-services-partner/peering-service-basics.png" alt-text="Screenshot shows the Basics tab of creating a Peering Service connection in the Azure portal."::: - :::image type="content" source="media/create-peering-service.png" alt-text="Screenshot showing how to create a new peering service." ::: +1. In the **Configuration** tab, provide details on the location, provider and primary and backup interconnect locations. If the backup location is set to **None**, the traffic will fail over to the internet. -3. Provide details on the location, provider and primary and backup interconnect location. If backup location is set to “none”, the traffic will fail over the internet. + > [!NOTE] + > - If you're an Operator Connect partner, your organization is available as a **Provider**. + > - The prefix key should be the same as the one obtained in the [Register the prefix](#register-the-prefix) step. - If you are an Operator Connect partner, you would be able to see yourself as the provider. - The prefix key should be the same as the one obtained in the "Prefix Registration" step. 
+ :::image type="content" source="./media/walkthrough-communications-services-partner/peering-service-configuration.png" alt-text="Screenshot shows the Configuration tab of creating a Peering Service connection in the Azure portal."::: - :::image type="content" source="media/peering-service-properties.png" alt-text="Screenshot of the fields to be filled to create a peering service." ::: +1. Select **Review + create**. - :::image type="content" source="media/peering-service-deployment.png" alt-text="Screenshot showing the validation of peering service resource before deployment." ::: +1. Review the settings, and then select **Create**. -## FAQs: +## Frequently asked questions (FAQ): **Q.** When will my BGP peer come up? Below are the steps to activate the prefix. **Q.** I have smaller subnets (</24) for my Communications services. Can I get the smaller subnets also routed? -**A.** Yes, Microsoft Azure Peering service supports smaller prefix routing also. Please ensure that you are registering the smaller prefixes for routing and the same are announced over the interconnects. +**A.** Yes, Microsoft Azure Peering service supports smaller prefix routing also. Ensure that you're registering the smaller prefixes for routing and the same are announced over the interconnects. **Q.** What Microsoft routes will we receive over these interconnects? -**A.** Microsoft announces all of Microsoft's public service prefixes over these interconnects. This will ensure not only Communications but other cloud services are accessible from the same interconnect. +**A.** Microsoft announces all of Microsoft's public service prefixes over these interconnects. This ensures not only Communications but other cloud services are accessible from the same interconnect. **Q.** Are there any AS path constraints? -**A.** Yes, a private ASN cannot be in the AS path. For registered prefixes smaller than /24, the AS path must be less than four. +**A.** Yes, a private ASN can't be in the AS path. For registered prefixes smaller than /24, the AS path must be less than four. **Q.** I need to set the prefix limit, how many routes Microsoft would be announcing? Below are the steps to activate the prefix. **Q.** What is the minimum link speed for an interconnect? -**A.** 10Gbps. +**A.** 10 Gbps. **Q.** Is the Peer bound to an SLA? Below are the steps to activate the prefix. **Q.** What is the advantage of this service over current direct peering or express route? -**A.** Settlement free and entire path is optimized for voice traffic over Microsoft WAN and convergence is tuned for sub-second with BFD. +**A.** Settlement free and entire path is optimized for voice traffic over Microsoft WAN and convergence is tuned for subsecond with BFD. **Q.** How long does it take to complete the onboarding process? -**A.** Time will be variable depending on number and location of sites, and if Peer is migrating existing private peerings or establishing new cabling. Carrier should plan for 3+ weeks. +**A.** Time is variable depending on number and location of sites, and if Peer is migrating existing private peerings or establishing new cabling. Carrier should plan for 3+ weeks. **Q.** How is progress communicated outside of the portal status? Below are the steps to activate the prefix. **Q.** Can we use APIs for onboarding? -**A.** Currently there is no API support, and configuration must be performed via web portal. +**A.** Currently there's no API support, and configuration must be performed via web portal. |
internet-peering | Walkthrough Device Maintenance Notification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/walkthrough-device-maintenance-notification.md | + + Title: Device maintenance notification walkthrough ++description: Learn how to view current and past peering device maintenance events, and how to create alerts to receive notifications for the future events. ++++ Last updated : 06/15/2023+++++# Azure Peering maintenance notification walkthrough ++In this article, you learn how to see active maintenance events and how to create alerts for future ones. Internet Peering partners and Peering Service customers can create alerts to receive notifications by email, voice, SMS, or the Azure mobile app. ++## View maintenance events ++If you're a partner who has Internet Peering or Peering Service resources in Azure, you receive notifications through the Azure Service Health page. In this section, you learn how to view active maintenance events in the Service Health page. ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. In the search box at the top of the portal, enter *service health*. Select **Service Health** in the search results. ++ :::image type="content" source="./media/walkthrough-device-maintenance-notification/service-health-portal-search.png" alt-text="Screenshot shows how to search for Service Health in the Azure portal." lightbox="./media/walkthrough-device-maintenance-notification/service-health-portal-search.png"::: ++1. Select **Planned maintenance** to see active maintenance events. Select **Azure Service Peering** for **Service** filter to only list maintenance events for Azure Peering Service. ++ :::image type="content" source="./media/walkthrough-device-maintenance-notification/planned-maintenance.png" alt-text="Screenshot shows planned maintenance events for Azure Peering Service in the Service Health page in the Azure portal." lightbox="./media/walkthrough-device-maintenance-notification/service-health-portal-search.png"::: ++ The summary tab gives you information about the affected resource by a maintenance event, such as the Azure subscription, region, and peering location. ++ Once maintenance is completed, a status update is sent. You'll be able to view and review the maintenance event in the **Health history** page after it's completed. ++1. Select **Health history** to see past maintenance events. ++ :::image type="content" source="./media/walkthrough-device-maintenance-notification/health-history.png" alt-text="Screenshot shows how to view past maintenance events in the Azure portal." lightbox="./media/walkthrough-device-maintenance-notification/health-history.png"::: ++> [!NOTE] +> The end time listed for the maintenance is an estimate. Many maintenance events will complete before the end time that is shown in Service Health, but this is not guaranteed. Future developments to our maintenance notification service will allow for more accurate maintenance end times. ++## Create alerts ++Service Health supports forwarding rules, so you can set up your own alerts when maintenance events occur. ++1. To set up a forwarding rule, go to the **Planned maintenance** page, and then select **+ Add service health alert**. ++ :::image type="content" source="./media/walkthrough-device-maintenance-notification/add-service-health-alert.png" alt-text="Screenshot shows how to add an alert."::: ++1. In the **Scope** tab, select the Azure subscription your Internet Peering or Peering Service is associated with. 
When a maintenance event affects a resource, the alert in Service Health is associated with the Azure subscription ID of the resource. ++ :::image type="content" source="./media/walkthrough-device-maintenance-notification/create-alert-rule-scope.png" alt-text="Screenshot shows how to choose the Azure subscription of the resource."::: ++1. Select the **Condition** tab, or select the **Next: Condition** button at the bottom of the page. ++1. In the **Condition** tab, Select the following information: ++ | Setting | Value | + | | | + | Services | Select **Azure Peering Service**. | + | Regions | Select the Azure region(s) of the resources that you want to get notified whenever they have planned maintenance events. | + | Event types | Select **Planned maintenance**. | ++ :::image type="content" source="./media/walkthrough-device-maintenance-notification/create-alert-rule-condition.png" alt-text="Screenshot shows the Condition tab of creating an alert rule in the Azure portal."::: ++1. Select the **Actions** tab, or select the **Next: Actions** button. ++1. Select **Create action group** to create a new action group. If you previously created an action group, you can use it by selecting **Select action groups**. ++ :::image type="content" source="./media/walkthrough-device-maintenance-notification/create-alert-rule-actions.png" alt-text="Screenshot shows the Actions tab before creating a new action group."::: ++1. In the **Basics** tab of **Create action group**, enter or select the following information: ++ | Setting | Value | + | | | + | **Project Details** | | + | Subscription | Select the Azure subscription that you want to use for the action group. | + | Resource group | Select **Create new**. </br> Enter *myResourceGroup* in **Name**. </br> Select **OK**. </br> If you have an existing resource group that you want to use, select it instead of creating a new one. | + | Regions | Select **Global**. | + | **Instance details** | | + | Action group name | Enter a name for the action group. | + | Display name | Enter a short display name (up to 12 characters). | ++ :::image type="content" source="./media/walkthrough-device-maintenance-notification/create-action-group-basics.png" alt-text="Screenshot shows the Basics tab of creating an action group."::: ++1. Select the **Notifications** tab, or select the **Next: Notifications** button. Then, select **Email/SMS message/Push/Voice** for the **Notification type**, and enter a name for this notification. Enter the contact information for the type of notification that you want. ++ :::image type="content" source="./media/walkthrough-device-maintenance-notification/create-action-group-notifications-email-sms.png" alt-text="Screenshot shows how to add the required contact information for the notifications."::: ++1. Select **Review + create**. ++1. Review the settings, and then select **Create**. ++1. After creating the action group, you return to the **Actions** tab of **Create an alert rule**. Select **PeeringMaintenance** action group to edit it or send test notifications. ++ :::image type="content" source="./media/walkthrough-device-maintenance-notification/create-alert-rule-actions-group.png" alt-text="Screenshot shows the Actions tab after creating a new action group."::: ++1. Select **Test action group** to send test notification(s) to the contact information you previously entered in the action group (to change the contact information, select the pencil icon next to the notification). 
++ :::image type="content" source="./media/walkthrough-device-maintenance-notification/edit-action-group.png" alt-text="Screenshot shows how to edit an action group in the Azure portal."::: ++1. In **Test PeeringMaintenance**, select **Resource health alert** for **Select sample type**, and then select **Test**. Select **Done** after you successfully test the notifications. ++ :::image type="content" source="./media/walkthrough-device-maintenance-notification/test-notifications.png" alt-text="Screenshot shows how to send test notifications."::: ++1. Select the **Details** tab, or select the **Next: Details** button. Enter or select the following information: ++ | Setting | Value | + | | | + | **Project Details** | | + | Subscription | Select the Azure subscription that you want to use for the alert rule. | + | Resource group | Select **myResourceGroup**. | + | **Alert rule details** | | + | Alert rule name | Enter a name for the rule. | + | Alert rule description | Enter an optional description. | + | **Advanced options** | Select **Enable alert rule upon creation**. | ++ :::image type="content" source="./media/walkthrough-device-maintenance-notification/create-alert-rule-details.png" alt-text="Screenshot shows the Details tab of creating an alert rule."::: ++1. Select **Review + create**, and finish your alert rule. ++1. Review the settings, and then select **Create**. ++Azure Peering Service notifications are forwarded to you based on your alert rule whenever maintenance events start, and whenever they're resolved. ++For more information on the notification platform of Service Health, see [Create activity log alerts on service notifications using the Azure portal](../service-health/alerts-activity-log-service-notifications-portal.md). ++## Receive notifications for legacy peerings ++Peering partners who haven't onboarded their peerings as Azure resources can't receive notifications in Service Health as they don't have subscriptions associated with their peerings. Instead, these partners receive maintenance notifications via their NOC contact email. Partners with legacy peerings don't have to opt in to receive these email notifications, they're sent automatically. This is an example of a maintenance notification email: +++## Next steps ++- Learn about the [Prerequisites to set up peering with Microsoft](prerequisites.md). |
key-vault | Common Error Codes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/common-error-codes.md | -The error codes listed in the following table may be returned by an operation on Azure key vault +The error codes listed in the following table may be returned by an operation on Azure Key Vault. | Error code | User message | |--|--| |
key-vault | Private Link Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/private-link-service.md | You can create a new key vault with the [Azure portal](../general/quick-create-p After configuring the key vault basics, select the Networking tab and follow these steps: -1. Select the Private Endpoint radio button in the Networking tab. -1. Select the "+ Add" Button to add a private endpoint. +1. Disable public access by toggling off the radio button. +1. Select the "+ Create a private endpoint" Button to add a private endpoint. -  +  1. In the "Location" field of the Create Private Endpoint Blade, select the region in which your virtual network is located. 1. In the "Name" field, create a descriptive name that will allow you to identify this private endpoint. There are four provisioning states: 1. In the search bar, type in "key vaults" 1. Select the key vault that you want to manage. 1. Select the "Networking" tab.-1. If there are any connections that are pending, you will see a connection listed with "Pending" in the provisioning state. +1. If there are any connections that are pending, you'll see a connection listed with "Pending" in the provisioning state. 1. Select the private endpoint you wish to approve 1. Select the approve button. 1. If there are any private endpoint connections you want to reject, whether it's a pending request or existing connection, select the connection and select the "Reject" button. Open the command line and run the following command: nslookup <your-key-vault-name>.vault.azure.net ``` -If you run the ns lookup command to resolve the IP address of a key vault over a public endpoint, you will see a result that looks like this: +If you run the ns lookup command to resolve the IP address of a key vault over a public endpoint, you'll see a result that looks like this: ```console c:\ >nslookup <your-key-vault-name>.vault.azure.net Address: (public IP address) Aliases: <your-key-vault-name>.vault.azure.net ``` -If you run the ns lookup command to resolve the IP address of a key vault over a private endpoint, you will see a result that looks like this: +If you run the ns lookup command to resolve the IP address of a key vault over a private endpoint, you'll see a result that looks like this: ```console c:\ >nslookup your_vault_name.vault.azure.net Aliases: <your-key-vault-name>.vault.azure.net 1. You can check and fix this in Azure portal. Open the Key Vault resource, and select the Networking option. 2. Then select the Private endpoint connections tab. 3. Make sure connection state is Approved and provisioning state is Succeeded. - 4. You may also navigate to the private endpoint resource and review same properties there, and double-check that the virtual network matches the one you are using. + 4. You may also navigate to the private endpoint resource and review same properties there, and double-check that the virtual network matches the one you're using. * Check to make sure you have a Private DNS Zone resource. 1. You must have a Private DNS Zone resource with the exact name: privatelink.vaultcore.azure.net. 2. To learn how to set this up please see the following link. [Private DNS Zones](../../dns/private-dns-privatednszone.md) -* Check to make sure the Private DNS Zone is linked to the Virtual Network. This may be the issue if you are still getting the public IP address returned. - 1. 
If the Private Zone DNS is not linked to the virtual network, the DNS query originating from the virtual network will return the public IP address of the key vault. +* Check to make sure the Private DNS Zone is linked to the Virtual Network. This may be the issue if you're still getting the public IP address returned. + 1. If the Private Zone DNS isn't linked to the virtual network, the DNS query originating from the virtual network will return the public IP address of the key vault. 2. Navigate to the Private DNS Zone resource in the Azure portal and select the virtual network links option. 4. The virtual network that will perform calls to the key vault must be listed. 5. If it's not there, add it. 6. For detailed steps, see the following document [Link Virtual Network to Private DNS Zone](../../dns/private-dns-getstarted-portal.md#link-the-virtual-network) -* Check to make sure the Private DNS Zone is not missing an A record for the key vault. +* Check to make sure the Private DNS Zone isn't missing an A record for the key vault. 1. Navigate to the Private DNS Zone page. - 2. Select Overview and check if there is an A record with the simple name of your key vault (i.e. fabrikam). Do not specify any suffix. + 2. Select Overview and check if there's an A record with the simple name of your key vault (i.e. fabrikam). Don't specify any suffix. 3. Make sure you check the spelling, and either create or fix the A record. You can use a TTL of 600 (10 mins). 4. Make sure you specify the correct private IP address. Aliases: <your-key-vault-name>.vault.azure.net 4. The link will show the Overview of the NIC resource, which contains the property Private IP address. 5. Verify that this is the correct IP address that is specified in the A record. -* If you are connecting from an on-prem resource to a Key Vault, ensure you have all required conditional forwarders in the on-prem environment enabled. - 1. Review [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration) for the zones needed, and make sure you have conditional forwarders for both `vault.azure.net` and `vaultcore.azure.net` on your on-prem DNS. +* If you're connecting from an on-premises resource to a Key Vault, ensure you have all required conditional forwarders in the on-premises environment enabled. + 1. Review [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration) for the zones needed, and make sure you have conditional forwarders for both `vault.azure.net` and `vaultcore.azure.net` on your on-premises DNS. 2. Ensure that you have conditional forwarders for those zones that route to an [Azure Private DNS Resolver](../../dns/dns-private-resolver-overview.md) or some other DNS platform with access to Azure resolution. ## Limitations and Design Considerations |
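As a scripted complement to the `nslookup` checks in this troubleshooting guide, the same DNS verification can be done with a few lines of standard-library Python (no Azure SDK required). The vault host name below is a placeholder.

```python
import ipaddress
import socket

vault_host = "<your-key-vault-name>.vault.azure.net"  # placeholder

# Resolve the host the way a client would and classify each returned address.
addresses = {info[4][0] for info in socket.getaddrinfo(vault_host, 443, proto=socket.IPPROTO_TCP)}
for address in sorted(addresses):
    kind = "private" if ipaddress.ip_address(address).is_private else "public"
    print(f"{vault_host} -> {address} ({kind})")
```

Run from a VM inside the linked virtual network, a correctly configured private endpoint and Private DNS Zone should return a private IP from your subnet; run from outside the network, the script should return a public IP.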
key-vault | Rbac Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-guide.md | To add role assignments, you must have `Microsoft.Authorization/roleAssignments/ 1. Enable Azure RBAC permissions on new key vault: -  +  2. Enable Azure RBAC permissions on existing key vault: -  +  > [!IMPORTANT] > Setting Azure RBAC permission model invalidates all access policies permissions. It can cause outages when equivalent Azure roles aren't assigned. |
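Switching a vault's permission model doesn't have to happen in the portal. Below is a hedged sketch using the `azure-mgmt-keyvault` Python package to patch an existing vault; the subscription, resource group, and vault names are placeholders, and the model names (`VaultPatchParameters`, `VaultPatchProperties`, `enable_rbac_authorization`) are assumptions to verify against the package version you use.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.keyvault import KeyVaultManagementClient
from azure.mgmt.keyvault.models import VaultPatchParameters, VaultPatchProperties

client = KeyVaultManagementClient(DefaultAzureCredential(), "<subscription-id>")  # placeholder

# Switch an existing vault from access policies to the Azure RBAC permission model.
vault = client.vaults.update(
    resource_group_name="myResourceGroup",   # placeholder
    vault_name="<your-key-vault-name>",      # placeholder
    parameters=VaultPatchParameters(
        properties=VaultPatchProperties(enable_rbac_authorization=True)
    ),
)
print(vault.properties.enable_rbac_authorization)
```

As the note above says, setting the Azure RBAC permission model invalidates existing access policy permissions, so assign the equivalent Azure roles before (or immediately after) the change to avoid an outage.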
kubernetes-fleet | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/overview.md | Title: "Overview of Azure Kubernetes Fleet Manager (preview)" Previously updated : 08/29/2022 Last updated : 06/12/2023 Fleet supports the following scenarios: * Create Kubernetes resource objects on the Fleet resource's cluster and control their propagation to all or a subset of all member clusters. -* Export a service from one member cluster to the Fleet resource. Once successfully exported, the service and its endpoints are synced to the hub, which other member clusters (or any Fleet resource-scoped load balancer) can consume. +* Load balance incoming L4 traffic across service endpoints on multiple clusters. ++* Orchestrate Kubernetes version and node image upgrades across multiple clusters by using update runs, stages, and groups. [!INCLUDE [preview features note](./includes/preview/preview-callout.md)] ## Next steps -[Create an Azure Kubernetes Fleet Manager resource and group multiple AKS clusters as member clusters of the fleet](./quickstart-create-fleet-and-members.md). +[Create an Azure Kubernetes Fleet Manager resource and group multiple AKS clusters as member clusters of the fleet](./quickstart-create-fleet-and-members.md). |
load-balancer | Cross Region Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/cross-region-overview.md | Cross-region load balancer routes the traffic to the appropriate regional load b * NAT64 translation isn't supported at this time. The frontend and backend IPs must be of the same type (v4 or v6). -* UDP traffic isn't supported on Cross-region Load Balancer. +* UDP traffic isn't supported on Cross-region Load Balancer. +* Outbound rules aren't supported on Cross-region Load Balancer. For outbound connections, use [outbound rules](./outbound-rules.md) on the regional load balancer or [NAT gateway](https://learn.microsoft.com/azure/nat-gateway/nat-overview). |
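To illustrate the recommended pattern of keeping outbound connectivity on the regional load balancer, here's a hedged sketch using the `azure-mgmt-network` Python package. It appends an outbound rule that reuses an existing Standard regional load balancer's first frontend IP configuration and backend pool; resource names are placeholders, and the field names should be checked against the SDK version you install.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import OutboundRule, SubResource

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")  # placeholder

# Fetch the existing regional (Standard) load balancer and add an outbound rule to it.
lb = network_client.load_balancers.get("myResourceGroup", "myRegionalLoadBalancer")  # placeholders
lb.outbound_rules = (lb.outbound_rules or []) + [
    OutboundRule(
        name="myOutboundRule",
        protocol="All",
        allocated_outbound_ports=1024,
        idle_timeout_in_minutes=4,
        frontend_ip_configurations=[SubResource(id=lb.frontend_ip_configurations[0].id)],
        backend_address_pool=SubResource(id=lb.backend_address_pools[0].id),
    )
]
poller = network_client.load_balancers.begin_create_or_update(
    "myResourceGroup", "myRegionalLoadBalancer", lb
)
print(poller.result().name)
```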
logic-apps | Logic Apps Using Sap Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-using-sap-connector.md | The preview SAP built-in connector trigger named **Register SAP RFC server for t > When you use a Premium-level ISE, use the ISE-native SAP connector, not the SAP managed connector, > which doesn't natively run in an ISE. For more information, review the [ISE prerequisites](#ise-prerequisites). +* By default, the preview SAP built-in connector operations are stateless. To run these operations in stateful mode, see [Enable stateful mode for stateless built-in connectors](../connectors/enable-stateful-affinity-built-in-connectors.md). + * To use either the SAP managed connector trigger named **When a message is received from SAP** or the SAP built-in trigger named **Register SAP RFC server for trigger**, complete the following tasks: * Set up your SAP gateway security permissions or Access Control List (ACL). In the **Gateway Monitor** (T-Code SMGW) dialog box, which shows the **secinfo** and **reginfo** files, open the **Goto** menu, and select **Expert Functions** > **External Security** > **Maintenance of ACL Files**. For a Standard workflow in single-tenant Azure Logic Apps, use the preview SAP * - **sapnco.dll** - **sapnco_utils.dll** -1. To SNC from SAP, you need to download the following files and have them ready to upload to your logic app resource. For more information, see [SNC prerequisites](#snc-prerequisites-standard): +1. For SNC from SAP, you need to download the following files and have them ready to upload to your logic app resource. For more information, see [SNC prerequisites](#snc-prerequisites-standard): - **sapcrypto.dll** - **sapgenpse.exe** |
machine-learning | Concept Manage Ml Pitfalls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-manage-ml-pitfalls.md | Title: Avoid overfitting & imbalanced data with AutoML + Title: Avoid overfitting & imbalanced data with Automated machine learning -description: Identify and manage common pitfalls of ML models with Azure Machine Learning's automated machine learning solutions. +description: Identify and manage common pitfalls of ML models with Azure Machine Learning's Automated ML solutions. -# Prevent overfitting and imbalanced data with automated machine learning +# Prevent overfitting and imbalanced data with Automated ML -Overfitting and imbalanced data are common pitfalls when you build machine learning models. By default, Azure Machine Learning's automated machine learning provides charts and metrics to help you identify these risks, and implements best practices to help mitigate them. +Overfitting and imbalanced data are common pitfalls when you build machine learning models. By default, Azure Machine Learning's Automated ML provides charts and metrics to help you identify these risks, and implements best practices to help mitigate them. ## Identify overfitting -Overfitting in machine learning occurs when a model fits the training data too well, and as a result can't accurately predict on unseen test data. In other words, the model has simply memorized specific patterns and noise in the training data, but is not flexible enough to make predictions on real data. +Overfitting in machine learning occurs when a model fits the training data too well, and as a result can't accurately predict on unseen test data. In other words, the model has memorized specific patterns and noise in the training data, but is not flexible enough to make predictions on real data. Consider the following trained models and their corresponding train and test accuracies. Consider the following trained models and their corresponding train and test acc | B | 87% | 87% | | C | 99.9% | 45% | -Considering model **A**, there is a common misconception that if test accuracy on unseen data is lower than training accuracy, the model is overfitted. However, test accuracy should always be less than training accuracy, and the distinction for overfit vs. appropriately fit comes down to *how much* less accurate. +Consider model **A**. There's a common misconception that if test accuracy on unseen data is lower than training accuracy, the model is overfitted. However, test accuracy should always be less than training accuracy, and the distinction for overfit vs. appropriately fit comes down to *how much* less accurate. -When comparing models **A** and **B**, model **A** is a better model because it has higher test accuracy, and although the test accuracy is slightly lower at 95%, it is not a significant difference that suggests overfitting is present. You wouldn't choose model **B** simply because the train and test accuracies are closer together. +Compare models **A** and **B**: model **A** is the better model because it has higher test accuracy, and although the test accuracy is slightly lower at 95%, it is not a significant difference that suggests overfitting is present. You wouldn't choose model **B** just because the train and test accuracies are closer together. -Model **C** represents a clear case of overfitting; the training accuracy is very high but the test accuracy isn't anywhere near as high. 
This distinction is subjective, but comes from knowledge of your problem and data, and what magnitudes of error are acceptable. +Model **C** represents a clear case of overfitting; the training accuracy is high but the test accuracy isn't anywhere near as high. This distinction is subjective, but comes from knowledge of your problem and data, and what magnitudes of error are acceptable. ## Prevent overfitting -In the most egregious cases, an overfitted model assumes that the feature value combinations seen during training will always result in the exact same output for the target. +In the most egregious cases, an overfitted model assumes that the feature value combinations seen during training always result in the exact same output for the target. -The best way to prevent overfitting is to follow ML best-practices including: +The best way to prevent overfitting is to follow ML best practices including: * Using more training data, and eliminating statistical bias * Preventing target leakage The best way to prevent overfitting is to follow ML best-practices including: * **Model complexity limitations** * **Cross-validation** -In the context of automated ML, the first three items above are **best-practices you implement**. The last three bolded items are **best-practices automated ML implements** by default to protect against overfitting. In settings other than automated ML, all six best-practices are worth following to avoid overfitting models. +In the context of Automated ML, the first three items are best practices that you implement. The last three bolded items are **best practices Automated ML implements** by default to protect against overfitting. In settings other than Automated ML, all six best practices are worth following to avoid overfitting models. ## Best practices you implement ### Use more data -Using **more data** is the simplest and best possible way to prevent overfitting, and as an added bonus typically increases accuracy. When you use more data, it becomes harder for the model to memorize exact patterns, and it is forced to reach solutions that are more flexible to accommodate more conditions. It's also important to recognize **statistical bias**, to ensure your training data doesn't include isolated patterns that won't exist in live-prediction data. This scenario can be difficult to solve, because there may not be overfitting between your train and test sets, but there may be overfitting present when compared to live test data. +Using more data is the simplest and best possible way to prevent overfitting, and as an added bonus typically increases accuracy. When you use more data, it becomes harder for the model to memorize exact patterns, and it is forced to reach solutions that are more flexible to accommodate more conditions. It's also important to recognize statistical bias, to ensure your training data doesn't include isolated patterns that don't exist in live-prediction data. This scenario can be difficult to solve, because there could be overfitting present when compared to live test data. ### Prevent target leakage -**Target leakage** is a similar issue, where you may not see overfitting between train/test sets, but rather it appears at prediction-time. Target leakage occurs when your model "cheats" during training by having access to data that it shouldn't normally have at prediction-time. 
For example, if your problem is to predict on Monday what a commodity price will be on Friday, but one of your features accidentally included data from Thursdays, that would be data the model won't have at prediction-time since it cannot see into the future. Target leakage is an easy mistake to miss, but is often characterized by abnormally high accuracy for your problem. If you are attempting to predict stock price and trained a model at 95% accuracy, there is likely target leakage somewhere in your features. +Target leakage is a similar issue, where you may not see overfitting between train/test sets, but rather it appears at prediction-time. Target leakage occurs when your model "cheats" during training by having access to data that it shouldn't normally have at prediction-time. For example, suppose you want to predict on Monday what a commodity price will be on Friday. If your features accidentally include data from Thursdays, that's data the model won't have at prediction-time since it can't see into the future. Target leakage is an easy mistake to miss, but is often characterized by abnormally high accuracy for your problem. If you're attempting to predict stock price and trained a model at 95% accuracy, there's likely target leakage somewhere in your features. ### Use fewer features -**Removing features** can also help with overfitting by preventing the model from having too many fields to use to memorize specific patterns, thus causing it to be more flexible. It can be difficult to measure quantitatively, but if you can remove features and retain the same accuracy, you have likely made the model more flexible and have reduced the risk of overfitting. +Removing features can also help with overfitting by preventing the model from having too many fields to use to memorize specific patterns, thus causing it to be more flexible. It can be difficult to measure quantitatively, but if you can remove features and retain the same accuracy, you have likely made the model more flexible and have reduced the risk of overfitting. -## Best practices automated ML implements +## Best practices Automated ML implements ### Regularization and hyperparameter tuning -**Regularization** is the process of minimizing a cost function to penalize complex and overfitted models. There are different types of regularization functions, but in general they all penalize model coefficient size, variance, and complexity. Automated ML uses L1 (Lasso), L2 (Ridge), and ElasticNet (L1 and L2 simultaneously) in different combinations with different model hyperparameter settings that control overfitting. In simple terms, automated ML will vary how much a model is regulated and choose the best result. +**Regularization** is the process of minimizing a cost function to penalize complex and overfitted models. There are different types of regularization functions, but in general they all penalize model coefficient size, variance, and complexity. Automated ML uses L1 (Lasso), L2 (Ridge), and ElasticNet (L1 and L2 simultaneously) in different combinations with different model hyperparameter settings that control overfitting. Automated ML varies how much a model is regularized and chooses the best result. ### Model complexity limitations -Automated ML also implements explicit **model complexity limitations** to prevent overfitting. In most cases this implementation is specifically for decision tree or forest algorithms, where individual tree max-depth is limited, and the total number of trees used in forest or ensemble techniques are limited. 
+Automated ML also implements explicit model complexity limitations to prevent overfitting. In most cases, this implementation is specifically for decision tree or forest algorithms, where individual tree max-depth is limited, and the total number of trees used in forest or ensemble techniques is limited. ### Cross-validation -**Cross-validation (CV)** is the process of taking many subsets of your full training data and training a model on each subset. The idea is that a model could get "lucky" and have great accuracy with one subset, but by using many subsets the model won't achieve this high accuracy every time. When doing CV, you provide a validation holdout dataset, specify your CV folds (number of subsets) and automated ML will train your model and tune hyperparameters to minimize error on your validation set. One CV fold could be overfitted, but by using many of them it reduces the probability that your final model is overfitted. The tradeoff is that CV does result in longer training times and thus greater cost, because instead of training a model once, you train it once for each *n* CV subsets. +Cross-validation (CV) is the process of taking many subsets of your full training data and training a model on each subset. The idea is that a model could get "lucky" and have great accuracy with one subset, but by using many subsets the model won't achieve this high accuracy every time. When doing CV, you provide a validation holdout dataset, specify your CV folds (number of subsets) and Automated ML trains your model and tunes hyperparameters to minimize error on your validation set. One CV fold could be overfitted, but by using many of them it reduces the probability that your final model is overfitted. The tradeoff is that CV results in longer training times and greater cost, because you train a model once for each of the *n* CV subsets. > [!NOTE]-> Cross-validation is not enabled by default; it must be configured in automated ML settings. However, after cross-validation is configured and a validation data set has been provided, the process is automated for you. Learn more about [cross validation configuration in Auto ML (SDK v1)](./v1/how-to-configure-cross-validation-data-splits.md?view=azureml-api-1&preserve-view=true) +> Cross-validation isn't enabled by default; it must be configured in Automated machine learning settings. However, after cross-validation is configured and a validation data set has been provided, the process is automated for you. <a name="imbalance"></a> Automated ML also implements explicit **model complexity limitations** to preven Imbalanced data is commonly found in data for machine learning classification scenarios, and refers to data that contains a disproportionate ratio of observations in each class. This imbalance can lead to a falsely perceived positive effect of a model's accuracy, because the input data has bias towards one class, which results in the trained model to mimic that bias. -In addition, automated ML jobs generate the following charts automatically, which can help you understand the correctness of the classifications of your model, and identify models potentially impacted by imbalanced data. +In addition, Automated ML jobs generate the following charts automatically. These charts help you understand the correctness of the classifications of your model, and identify models potentially impacted by imbalanced data. 
Chart| Description | Chart| Description ## Handle imbalanced data -As part of its goal of simplifying the machine learning workflow, **automated ML has built in capabilities** to help deal with imbalanced data such as, +As part of its goal of simplifying the machine learning workflow, Automated ML has built in capabilities to help deal with imbalanced data such as, -- A **weight column**: automated ML will create a column of weights as input to cause rows in the data to be weighted up or down, which can be used to make a class more or less "important".+- A weight column: Automated ML creates a column of weights as input to cause rows in the data to be weighted up or down, which can be used to make a class more or less "important." -- The algorithms used by automated ML detect imbalance when the number of samples in the minority class is equal to or fewer than 20% of the number of samples in the majority class, where minority class refers to the one with fewest samples and majority class refers to the one with most samples. Subsequently, AutoML will run an experiment with sub-sampled data to check if using class weights would remedy this problem and improve performance. If it ascertains a better performance through this experiment, then this remedy is applied.+- The algorithms used by Automated ML detect imbalance when the number of samples in the minority class is equal to or fewer than 20% of the number of samples in the majority class, where minority class refers to the one with fewest samples and majority class refers to the one with most samples. Subsequently, automated machine learning will run an experiment with subsampled data to check if using class weights would remedy this problem and improve performance. If it ascertains a better performance through this experiment, then this remedy is applied. - Use a performance metric that deals better with imbalanced data. For example, the AUC_weighted is a primary metric that calculates the contribution of every class based on the relative number of samples representing that class, hence is more robust against imbalance. -The following techniques are additional options to handle imbalanced data **outside of automated ML**. +The following techniques are additional options to handle imbalanced data outside of Automated ML. - Resampling to even the class imbalance, either by up-sampling the smaller classes or down-sampling the larger classes. These methods require expertise to process and analyze. The following techniques are additional options to handle imbalanced data **outs ## Next steps -See examples and learn how to build models using automated machine learning: +See examples and learn how to build models using Automated ML: -+ Follow the [Tutorial: Train an object detection model with AutoML and Python](tutorial-auto-train-image-models.md). ++ Follow the [Tutorial: Train an object detection model with automated machine learning and Python](tutorial-auto-train-image-models.md). + Configure the settings for automatic training experiment: + In Azure Machine Learning studio, [use these steps](how-to-use-automated-ml-for-ml-models.md). |
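The weight-column idea described above can be illustrated outside of Automated ML with plain scikit-learn. This isn't the Automated ML implementation; it's just a sketch of how "balanced" per-sample weights up-weight a minority class, and the 90/10 label split is made up.

```python
import numpy as np
from sklearn.utils.class_weight import compute_sample_weight

# Toy labels with a 9:1 imbalance, similar to what triggers the 20% minority-class check.
y = np.array([0] * 90 + [1] * 10)

# "balanced" weights are n_samples / (n_classes * class_count): minority rows get larger weights.
sample_weight = compute_sample_weight(class_weight="balanced", y=y)
print(sorted(set(np.round(sample_weight, 3))))  # -> [0.556, 5.0]
```

A column like this is the kind of per-row weighting the article describes, and it's also what scikit-learn estimators accept through the `sample_weight` argument of `fit`.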
machine-learning | Concept Soft Delete | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-soft-delete.md | -monikerRange: 'azureml-api-2' +monikerRange: 'azureml-api-2 || azureml-api-1' #Customer intent: As an IT pro, understand how to enable data protection capabilities, to protect against accidental deletion. During the retention period, soft deleted workspaces can be recovered or permane The default deletion behavior when deleting a workspace is soft delete. Optionally, you may override the soft delete behavior by permanently deleting your workspace. Permanently deleting a workspace ensures workspace data is immediately deleted. Use this option to meet related compliance requirements, or whenever you require a workspace name to be reused immediately after deletion. This may be useful in dev/test scenarios where you want to create and later delete a workspace. -When deleting a workspace from the Azure Portal, check __Delete the workspace permanently__. You can permanently delete only one workspace at a time, and not using a batch operation. +When deleting a workspace from the Azure portal, check __Delete the workspace permanently__. You can permanently delete only one workspace at a time, not by using a batch operation. :::image type="content" source="./media/concept-soft-delete/soft-delete-permanently-delete.png" alt-text="Screenshot of the delete workspace form in the portal."::: -If you are using the [Azure Machine Learning SDK or CLI](https://learn.microsoft.com/python/api/azure-ai-ml/azure.ai.ml.operations.workspaceoperations#azure-ai-ml-operations-workspaceoperations-begin-delete), you can set the `permanently_delete` flag. +> [!TIP] +> The v1 SDK and CLI don't provide functionality to override the default soft-delete behavior. To override the default behavior from SDK or CLI, use the v2 versions. For more information, see the [CLI & SDK v2](concept-v2.md) article or the [v2 version of this article](concept-soft-delete.md?view=azureml-api-2&preserve-view=true#deleting-a-workspace). ++If you are using the [Azure Machine Learning SDK or CLI](/python/api/azure-ai-ml/azure.ai.ml.operations.workspaceoperations#azure-ai-ml-operations-workspaceoperations-begin-delete), you can set the `permanently_delete` flag. ```python from azure.ai.ml import MLClient result = ml_client.workspaces.begin_delete( print(result) ```+ Once permanently deleted, workspace data can no longer be recovered. Permanent deletion of workspace data is also triggered when the soft delete retention period expires. ## Manage soft deleted workspaces |
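For context on the `permanently_delete` flag referenced above, here's a slightly fuller sketch of the v2 SDK call. The subscription, resource group, and workspace names are placeholders; confirm the `begin_delete` keyword arguments against the `azure-ai-ml` version you have installed.

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",      # placeholder
    resource_group_name="<resource-group>",   # placeholder
)

# permanently_delete=True skips the soft-delete retention period; the default (False) soft deletes.
poller = ml_client.workspaces.begin_delete(
    name="<workspace-name>",                  # placeholder
    delete_dependent_resources=False,         # keep the associated storage account, key vault, and ACR
    permanently_delete=True,
)
poller.wait()
```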
machine-learning | Designer Accessibility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/designer-accessibility.md | The following keyboard actions help you navigate a pipeline graph: - Tab: Move to first node > each port of the node > next node. - Up/down arrow keys: Move to next or previous node by its position in the graph. - Ctrl+G when focus is on a port: Go to the connected port. When there's more than one connection from one port, open a list view to select the target. Use the Esc key to go to the selected target.+- Ctrl+Shift+H: Move focus to the canvas. ## Edit the pipeline graph |
machine-learning | How To Auto Train Nlp Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-nlp-models.md | Last updated 03/15/2022 In this article, you learn how to train natural language processing (NLP) models with [automated ML](concept-automated-ml.md) in Azure Machine Learning. You can create NLP models with automated ML via the Azure Machine Learning Python SDK v2 or the Azure Machine Learning CLI v2. -Automated ML supports NLP which allows ML professionals and data scientists to bring their own text data and build custom models for tasks such as, multi-class text classification, multi-label text classification, and named entity recognition (NER). +Automated ML supports NLP which allows ML professionals and data scientists to bring their own text data and build custom models for NLP tasks. NLP tasks include multi-class text classification, multi-label text classification, and named entity recognition (NER). -You can seamlessly integrate with the [Azure Machine Learning data labeling](how-to-create-text-labeling-projects.md) capability to label your text data or bring your existing labeled data. Automated ML provides the option to use distributed training on multi-GPU compute clusters for faster model training. The resulting model can be operationalized at scale by leveraging Azure Machine Learning's MLOps capabilities. +You can seamlessly integrate with the [Azure Machine Learning data labeling](how-to-create-text-labeling-projects.md) capability to label your text data or bring your existing labeled data. Automated ML provides the option to use distributed training on multi-GPU compute clusters for faster model training. The resulting model can be operationalized at scale using Azure Machine Learning's MLOps capabilities. ## Prerequisites You can seamlessly integrate with the [Azure Machine Learning data labeling](how * Azure subscription. If you don't have an Azure subscription, sign up to try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today. -* An Azure Machine Learning workspace with a GPU training compute. To create the workspace, see [Create workspace resources](quickstart-create-resources.md). See [GPU optimized virtual machine sizes](../virtual-machines/sizes-gpu.md) for more details of GPU instances provided by Azure. +* An Azure Machine Learning workspace with a GPU training compute. To create the workspace, see [Create workspace resources](quickstart-create-resources.md). For more information, see [GPU optimized virtual machine sizes](../virtual-machines/sizes-gpu.md) for more details of GPU instances provided by Azure. > [!WARNING] > Support for multilingual models and the use of models with longer max sequence length is necessary for several NLP use cases, such as non-english datasets and longer range documents. As a result, these scenarios may require higher GPU memory for model training to succeed, such as the NC_v3 series or the ND series. You can seamlessly integrate with the [Azure Machine Learning data labeling](how * Azure subscription. If you don't have an Azure subscription, sign up to try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today. -* An Azure Machine Learning workspace with a GPU training compute. To create the workspace, see [Create workspace resources](quickstart-create-resources.md). 
See [GPU optimized virtual machine sizes](../virtual-machines/sizes-gpu.md) for more details of GPU instances provided by Azure. +* An Azure Machine Learning workspace with a GPU training compute. To create the workspace, see [Create workspace resources](quickstart-create-resources.md). For more information, see [GPU optimized virtual machine sizes](../virtual-machines/sizes-gpu.md) for more details of GPU instances provided by Azure. > [!WARNING] > Support for multilingual models and the use of models with longer max sequence length is necessary for several NLP use cases, such as non-english datasets and longer range documents. As a result, these scenarios may require higher GPU memory for model training to succeed, such as the NC_v3 series or the ND series. You can seamlessly integrate with the [Azure Machine Learning data labeling](how * The Azure Machine Learning Python SDK v2 installed. To install the SDK you can either, - * Create a compute instance, which automatically installs the SDK and is pre-configured for ML workflows. See [Create and manage an Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md) for more information. + * Create a compute instance, which automatically installs the SDK and is preconfigured for ML workflows. See [Create and manage an Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md) for more information. * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK. Determine what NLP task you want to accomplish. Currently, automated ML supports Task |AutoML job syntax| Description -|-|-Multi-class text classification | CLI v2: `text_classification` <br> SDK v2: `text_classification()`| There are multiple possible classes and each sample can be classified as exactly one class. The task is to predict the correct class for each sample. <br> <br> For example, classifying a movie script as "Comedy" or "Romantic". -Multi-label text classification | CLI v2: `text_classification_multilabel` <br> SDK v2: `text_classification_multilabel()`| There are multiple possible classes and each sample can be assigned any number of classes. The task is to predict all the classes for each sample<br> <br> For example, classifying a movie script as "Comedy", or "Romantic", or "Comedy and Romantic". +Multi-class text classification | CLI v2: `text_classification` <br> SDK v2: `text_classification()`| There are multiple possible classes and each sample can be classified as exactly one class. The task is to predict the correct class for each sample. <br> <br> For example, classifying a movie script as "Comedy," or "Romantic". +Multi-label text classification | CLI v2: `text_classification_multilabel` <br> SDK v2: `text_classification_multilabel()`| There are multiple possible classes and each sample can be assigned any number of classes. The task is to predict all the classes for each sample<br> <br> For example, classifying a movie script as "Comedy," or "Romantic," or "Comedy and Romantic". Named Entity Recognition (NER)| CLI v2:`text_ner` <br> SDK v2: `text_ner()`| There are multiple possible tags for tokens in sequences. The task is to predict the tags for all the tokens for each sequence. 
<br> <br> For example, extracting domain-specific entities from unstructured text, such as contracts or financial documents. ## Thresholding -Thresholding is the multi-label feature that allows users to pick the threshold above which the predicted probabilities will lead to a positive label. Lower values allow for more labels, which is better when users care more about recall, but this option could lead to more false positives. Higher values allow fewer labels and hence better for users who care about precision, but this option could lead to more false negatives. +Thresholding is the multi-label feature that allows users to pick the threshold above which the predicted probabilities will lead to a positive label. Lower values allow for more labels, which is better when users care more about recall, but this option could lead to more false positives. Higher values allow fewer labels and hence are better for users who care about precision, but this option could lead to more false negatives. ## Preparing data -For NLP experiments in automated ML, you can bring your data in `.csv` format for multi-class and multi-label classification tasks. For NER tasks, two-column `.txt` files that use a space as the separator and adhere to the CoNLL format are supported. The following sections provide additional detail for the data format accepted for each task. +For NLP experiments in automated ML, you can bring your data in `.csv` format for multi-class and multi-label classification tasks. For NER tasks, two-column `.txt` files that use a space as the separator and adhere to the CoNLL format are supported. The following sections provide details for the data format accepted for each task. ### Multi-class rings O ### Data validation -Before training, automated ML applies data validation checks on the input data to ensure that the data can be preprocessed correctly. If any of these checks fail, the run fails with the relevant error message. The following are the requirements to pass data validation checks for each task. +Before a model trains, automated ML applies data validation checks on the input data to ensure that the data can be preprocessed correctly. If any of these checks fail, the run fails with the relevant error message. The following are the requirements to pass data validation checks for each task. > [!Note] > Some data validation checks are applicable to both the training and the validation set, whereas others are applicable only to the training set. If the test dataset could not pass the data validation, that means that automated ML couldn't capture it and there is a possibility of model inference failure, or a decline in model performance. Task | Data validation check All tasks | At least 50 training samples are required Multi-class and Multi-label | The training data and validation data must have <br> - The same set of columns <br>- The same order of columns from left to right <br>- The same data type for columns with the same name <br>- At least two unique labels <br> - Unique column names within each dataset (For example, the training set can't have multiple columns named **Age**) Multi-class only | None-Multi-label only | - The label column format must be in [accepted format](#multi-label) <br> - At least one sample should have 0 or 2+ labels, otherwise it should be a `multiclass` task <br> - All labels should be in `str` or `int` format, with no overlapping. 
You should not have both label `1` and label `'1'` -NER only | - The file should not start with an empty line <br> - Each line must be an empty line, or follow format `{token} {label}`, where there is exactly one space between the token and the label and no white space after the label <br> - All labels must start with `I-`, `B-`, or be exactly `O`. Case sensitive <br> - Exactly one empty line between two samples <br> - Exactly one empty line at the end of the file +Multi-label only | - The label column format must be in [accepted format](#multi-label) <br> - At least one sample should have 0 or 2+ labels, otherwise it should be a `multiclass` task <br> - All labels should be in `str` or `int` format, with no overlapping. You shouldn't have both label `1` and label `'1'` +NER only | - The file shouldn't start with an empty line <br> - Each line must be an empty line, or follow format `{token} {label}`, where there's exactly one space between the token and the label and no white space after the label <br> - All labels must start with `I-`, `B-`, or be exactly `O`. Case sensitive <br> - Exactly one empty line between two samples <br> - Exactly one empty line at the end of the file ## Configure experiment Automated ML's NLP capability is triggered through task specific `automl` type jobs, which is the same workflow for submitting automated ML experiments for classification, regression and forecasting tasks. You would set parameters as you would for those experiments, such as `experiment_name`, `compute_name` and data inputs. However, there are key differences: -* You can ignore `primary_metric`, as it is only for reporting purposes. Currently, automated ML only trains one model per run for NLP and there is no model selection. +* You can ignore `primary_metric`, as it's only for reporting purposes. Currently, automated ML only trains one model per run for NLP and there is no model selection. * The `label_column_name` parameter is only required for multi-class and multi-label text classification tasks. * If more than 10% of the samples in your dataset contain more than 128 tokens, it's considered long range. - * In order to use the long range text feature, you should use a NC6 or higher/better SKUs for GPU such as: [NCv3](../virtual-machines/ncv3-series.md) series or [ND](../virtual-machines/nd-series.md) series. + * In order to use the long range text feature, you should use an NC6 or higher/better SKUs for GPU such as: [NCv3](../virtual-machines/ncv3-series.md) series or [ND](../virtual-machines/nd-series.md) series. # [Azure CLI](#tab/cli) [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] -For CLI v2 AutoML jobs you configure your experiment in a YAML file like the following. +For CLI v2 automated ml jobs, you configure your experiment in a YAML file like the following. For CLI v2 AutoML jobs you configure your experiment in a YAML file like the fol [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] -For AutoML jobs via the SDK, you configure the job with the specific NLP task function. The following example demonstrates the configuration for `text_classification`. +For Automated ML jobs via the SDK, you configure the job with the specific NLP task function. The following example demonstrates the configuration for `text_classification`. 
```Python # general job parameters compute_name = "gpu-cluster" All the pre-trained text DNN models currently available in AutoML NLP for fine-t * xlnet_base_cased * xlnet_large_cased -Note that the large models are significantly larger than their base counterparts. They are typically more performant, but they take up more GPU memory and time for training. As such, their SKU requirements are more stringent: we recommend running on ND-series VMs for the best results. +Note that the large models are larger than their base counterparts. They are typically more performant, but they take up more GPU memory and time for training. As such, their SKU requirements are more stringent: we recommend running on ND-series VMs for the best results. ## Supported hyperparameters The following table describes the hyperparameters that AutoML NLP supports. | Parameter name | Description | Syntax | |-||| -| gradient_accumulation_steps | The number of backward operations whose gradients are to be summed up before performing one step of gradient descent by calling the optimizer's step function. <br><br> This is leveraged to use an effective batch size which is gradient_accumulation_steps times larger than the maximum size that fits the GPU. | Must be a positive integer. +| gradient_accumulation_steps | The number of backward operations whose gradients are to be summed up before performing one step of gradient descent by calling the optimizer's step function. <br><br> This is to use an effective batch size, which is gradient_accumulation_steps times larger than the maximum size that fits the GPU. | Must be a positive integer. | learning_rate | Initial learning rate. | Must be a float in the range (0, 1). | | learning_rate_scheduler |Type of learning rate scheduler. | Must choose from `linear, cosine, cosine_with_restarts, polynomial, constant, constant_with_warmup`. | | model_name | Name of one of the supported models. | Must choose from `bert_base_cased, bert_base_uncased, bert_base_multilingual_cased, bert_base_german_cased, bert_large_cased, bert_large_uncased, distilbert_base_cased, distilbert_base_uncased, roberta_base, roberta_large, distilroberta_base, xlm_roberta_base, xlm_roberta_large, xlnet_base_cased, xlnet_large_cased`. | All discrete hyperparameters only allow choice distributions, such as the intege ## Configure your sweep settings -You can configure all the sweep-related parameters. Multiple model subspaces can be constructed with hyperparameters conditional to the respective model, as seen below in each example. +You can configure all the sweep-related parameters. Multiple model subspaces can be constructed with hyperparameters conditional to the respective model, as seen in each hyperparameter tuning example. The same discrete and continuous distribution options that are available for general HyperDrive jobs are supported here. See all nine options in [Hyperparameter tuning a model](how-to-tune-hyperparameters.md#define-the-search-space) When sweeping hyperparameters, you need to specify the sampling method to use fo You can optionally specify the experiment budget for your AutoML NLP training job using the `timeout_minutes` parameter in the `limits` - the amount of time in minutes before the experiment terminates. If none specified, the default experiment timeout is seven days (maximum 60 days). 
-AutoML NLP also supports `trial_timeout_minutes`, the maximum amount of time in minutes an individual trial can run before being terminated, and `max_nodes`, the maximum number of nodes from the backing compute cluster to leverage for the job. These parameters also belong to the `limits` section. +AutoML NLP also supports `trial_timeout_minutes`, the maximum amount of time in minutes an individual trial can run before being terminated, and `max_nodes`, the maximum number of nodes from the backing compute cluster to use for the job. These parameters also belong to the `limits` section. Parameter | Detail `max_trials` | Parameter for maximum number of configurations to sweep. Must be an integer between 1 and 1000. When exploring just the default hyperparameters for a given model algorithm, set this parameter to 1. The default value is 1. `max_concurrent_trials`| Maximum number of runs that can run concurrently. If specified, must be an integer between 1 and 100. The default value is 1. <br><br> **NOTE:** <li> The number of concurrent runs is gated on the resources available in the specified compute target. Ensure that the compute target has the available resources for the desired concurrency. <li> `max_concurrent_trials` is capped at `max_trials` internally. For example, if user sets `max_concurrent_trials=4`, `max_trials=2`, values would be internally updated as `max_concurrent_trials=2`, `max_trials=2`. -You can configure all the sweep related parameters as shown in the example below. +You can configure all the sweep related parameters as shown in this example. [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] sweep: ## Known Issues -Dealing with very low scores, or higher loss values: +Dealing with low scores, or higher loss values: -For certain datasets, regardless of the NLP task, the scores produced may be very low, sometimes even zero. This would be accompanied by higher loss values implying that the neural network failed to converge. This can happen more frequently on certain GPU SKUs. +For certain datasets, regardless of the NLP task, the scores produced may be very low, sometimes even zero. This score is accompanied by higher loss values implying that the neural network failed to converge. These scores can happen more frequently on certain GPU SKUs. -While such cases are uncommon, they're possible and the best way to handle it is to leverage hyperparameter tuning and provide a wider range of values, especially for hyperparameters like learning rates. Until our hyperparameter tuning capability is available in production we recommend users, who face such issues, to leverage the NC6 or ND6 compute clusters, where we've found training outcomes to be fairly stable. +While such cases are uncommon, they're possible and the best way to handle it is to use hyperparameter tuning and provide a wider range of values, especially for hyperparameters like learning rates. Until our hyperparameter tuning capability is available in production we recommend that users experiencing these issues use the NC6 or ND6 compute clusters. These clusters typically have training outcomes that are fairly stable. ## Next steps + [Deploy AutoML models to an online (real-time inference) endpoint](how-to-deploy-automl-endpoint.md)-+ [Troubleshoot automated ML experiments (SDK v1)](./v1/how-to-troubleshoot-auto-ml.md?view=azureml-api-1&preserve-view=true) ++ [Hyperparameter tuning a model](how-to-tune-hyperparameters.md) |
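For reference, a minimal Python SDK v2 sketch that combines the limit and sweep settings described above for a `text_classification` job might look like the following. The subscription, workspace, compute, data paths, and search-space values are placeholders, and the keyword names are assumed to follow the SDK v2 pattern shown earlier in this entry; treat it as a starting point rather than a definitive configuration.

```python
from azure.ai.ml import Input, MLClient, automl
from azure.ai.ml.automl import SearchSpace
from azure.ai.ml.sweep import BanditPolicy, Choice, Uniform
from azure.identity import DefaultAzureCredential

# Placeholder workspace details -- replace with your own values.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Configure the NLP task, following the text_classification pattern shown earlier.
text_classification_job = automl.text_classification(
    compute="gpu-cluster",
    experiment_name="dpv2-nlp-text-classification",
    training_data=Input(type="mltable", path="./training-mltable-folder"),
    validation_data=Input(type="mltable", path="./validation-mltable-folder"),
    target_column_name="Sentiment",
)

# Experiment budget and per-trial limits (the `limits` section described above).
text_classification_job.set_limits(
    timeout_minutes=120,
    trial_timeout_minutes=60,
    max_trials=4,
    max_concurrent_trials=2,
    max_nodes=4,
)

# One model subspace with hyperparameters conditional to the chosen models.
text_classification_job.extend_search_space(
    [
        SearchSpace(
            model_name=Choice(["bert_base_cased", "roberta_base"]),
            learning_rate=Uniform(5e-6, 5e-5),
        ),
    ]
)

# Sampling method and early termination policy for the sweep.
text_classification_job.set_sweep(
    sampling_algorithm="Random",
    early_termination=BanditPolicy(evaluation_interval=2, slack_factor=0.2, delay_evaluation=6),
)

returned_job = ml_client.jobs.create_or_update(text_classification_job)
print(returned_job.name)
```

Keeping `max_concurrent_trials` at or below both `max_trials` and the node count of the GPU cluster avoids the internal capping behavior noted in the table above.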
machine-learning | How To Deploy Online Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-online-endpoints.md | For supported general-purpose and GPU instance types, see [Managed online endpoi # [ARM template](#tab/arm) -The preceding registration of the environment specifies a non-GPU docker image `mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1` by passing the value to the `environment-version.json` template using the `dockerImage` parameter. For a GPU compute, provide a value for a GPU docker image to the template (using the `dockerImage` parameter) and provide a GPU compute type SKU to the `online-endpoint-deployment.json` template (using the `skuName` parameter). +The preceding registration of the environment specifies a non-GPU docker image `mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04` by passing the value to the `environment-version.json` template using the `dockerImage` parameter. For a GPU compute, provide a value for a GPU docker image to the template (using the `dockerImage` parameter) and provide a GPU compute type SKU to the `online-endpoint-deployment.json` template (using the `skuName` parameter). For supported general-purpose and GPU instance types, see [Managed online endpoints supported VM SKUs](reference-managed-online-endpoints-vm-sku-list.md). For a list of Azure Machine Learning CPU and GPU base images, see [Azure Machine Learning base images](https://github.com/Azure/AzureML-Containers). |
machine-learning | How To Enable Studio Virtual Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-enable-studio-virtual-network.md | In this article, you learn how to: > [!TIP] > This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series: >-> * [Virtual network overview](how-to-network-security-overview.md) :::moniker range="azureml-api-2"+> * [Virtual network overview](how-to-network-security-overview.md) > * [Secure the workspace resources](how-to-secure-workspace-vnet.md) > * [Secure the training environment](how-to-secure-training-vnet.md) > * [Secure the inference environment](how-to-secure-inferencing-vnet.md)+> * [Use custom DNS](how-to-custom-dns.md) +> * [Use a firewall](how-to-access-azureml-behind-firewall.md) :::moniker-end :::moniker range="azureml-api-1"+> * [Virtual network overview](how-to-network-security-overview.md) > * [Secure the workspace resources](./v1/how-to-secure-workspace-vnet.md) > * [Secure the training environment](./v1/how-to-secure-training-vnet.md) > * [Secure the inference environment](./v1/how-to-secure-inferencing-vnet.md) > * [Use custom DNS](how-to-custom-dns.md) > * [Use a firewall](how-to-access-azureml-behind-firewall.md) > > For a tutorial on creating a secure workspace, see [Tutorial: Create a secure workspace](tutorial-create-secure-workspace.md) or [Tutorial: Create a secure workspace using a template](tutorial-create-secure-workspace-template.md). Some storage services, such as Azure Storage Account, have firewall settings tha This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series: -* [Virtual network overview](how-to-network-security-overview.md) :::moniker range="azureml-api-2"+* [Virtual network overview](how-to-network-security-overview.md) * [Secure the workspace resources](how-to-secure-workspace-vnet.md) * [Secure the training environment](how-to-secure-training-vnet.md) * [Secure the inference environment](how-to-secure-inferencing-vnet.md)+* [Use custom DNS](how-to-custom-dns.md) +* [Use a firewall](how-to-access-azureml-behind-firewall.md) :::moniker-end :::moniker range="azureml-api-1"+* [Virtual network overview](how-to-network-security-overview.md) * [Secure the workspace resources](./v1/how-to-secure-workspace-vnet.md) * [Secure the training environment](./v1/how-to-secure-training-vnet.md) * [Secure the inference environment](./v1/how-to-secure-inferencing-vnet.md) * [Use custom DNS](how-to-custom-dns.md) * [Use a firewall](how-to-access-azureml-behind-firewall.md)+ |
machine-learning | How To Inference Server Http | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-inference-server-http.md | The following table contains the parameters accepted by the server: | appinsights_instrumentation_key | False | N/A | The instrumentation key to the application insights where the logs will be published. | | access_control_allow_origins | False | N/A | Enable CORS for the specified origins. Separate multiple origins with ",". <br> Example: "microsoft.com, bing.com" | -> [!TIP] -> CORS (Cross-origin resource sharing) is a way to allow resources on a webpage to be requested from another domain. CORS works via HTTP headers sent with the client request and returned with the service response. For more information on CORS and valid headers, see [Cross-origin resource sharing](https://en.wikipedia.org/wiki/Cross-origin_resource_sharing) in Wikipedia. See [here](v1/how-to-deploy-advanced-entry-script.md#cross-origin-resource-sharing-cors) for an example of the scoring script. ## Request flow |
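As a quick way to see the effect of `access_control_allow_origins`, the following sketch sends a scoring request with an `Origin` header and prints the CORS header that comes back. It assumes the inference server is running locally on its default port 5001 with a `/score` route and that the sample JSON payload matches your scoring script; adjust all three for your own setup.

```python
import requests

# Simulate a browser request from a domain listed in access_control_allow_origins.
response = requests.post(
    "http://localhost:5001/score",            # assumed default port and scoring route
    json={"data": [[1, 2, 3, 4]]},            # placeholder payload for your score.py
    headers={"Origin": "https://microsoft.com"},
)

print(response.status_code)
# When CORS is enabled for the origin, the allowed origin is echoed back in this header.
print(response.headers.get("Access-Control-Allow-Origin"))
```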
machine-learning | How To Interactive Jobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-interactive-jobs.md | By specifying interactive applications at job creation, you can connect directly > [!NOTE] > If you use `sleep infinity`, you will need to manually [cancel the job](./how-to-interactive-jobs.md#end-job) to let go of the compute resource (and stop billing). -5. Select the training applications you want to use to interact with the job. +5. Select at least one training application you want to use to interact with the job. If you do not select an application, the debug feature will not be available. :::image type="content" source="./media/interactive-jobs/select-training-apps.png" alt-text="Screenshot of selecting a training application for the user to use for a job."::: |
machine-learning | How To Log View Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-view-metrics.md | Logs can help you diagnose errors and warnings, or track performance metrics lik > [!TIP] > This article shows you how to monitor the model training process. If you're interested in monitoring resource usage and events from Azure Machine Learning, such as quotas, completed training jobs, or completed model deployments, see [Monitoring Azure Machine Learning](monitor-azure-machine-learning.md). -> [!TIP] -> For information on logging metrics in Azure Machine Learning designer, see [How to log metrics in the designer](./v1/how-to-track-designer-experiments.md). - ## Prerequisites * You must have an Azure Machine Learning workspace. [Create one if you don't have any](quickstart-create-resources.md). |
machine-learning | How To Machine Learning Interpretability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-machine-learning-interpretability.md | You can run the explanation remotely on Azure Machine Learning Compute and log t * Learn how to generate the Responsible AI dashboard via [CLI v2 and SDK v2](how-to-responsible-ai-dashboard-sdk-cli.md) or the [Azure Machine Learning studio UI](how-to-responsible-ai-dashboard-ui.md). * Explore the [supported interpretability visualizations](how-to-responsible-ai-dashboard.md#feature-importances-model-explanations) of the Responsible AI dashboard. * Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed in the Responsible AI dashboard.-* Learn how to enable [interpretability for automated machine learning models](./v1/how-to-machine-learning-interpretability-automl.md). +* Learn how to enable [interpretability for automated machine learning models (SDK v1)](./v1/how-to-machine-learning-interpretability-automl.md). |
machine-learning | How To Manage Kubernetes Instance Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-kubernetes-instance-types.md | code_configuration: instance_type: <instance type name> environment: conda_file: file:./model/conda.yml- image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1 + image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest ``` #### [Python SDK](#tab/select-instancetype-to-modeldeployment-with-sdk) from azure.ai.ml import KubernetesOnlineDeployment,Model,Environment,CodeConfigu model = Model(path="./model/sklearn_mnist_model.pkl") env = Environment( conda_file="./model/conda.yml",- image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1", + image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest", ) # define the deployment code_configuration: scoring_script: score.py environment: conda_file: file:./model/conda.yml- image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1 + image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest resources: requests: cpu: "0.1" from azure.ai.ml import ( model = Model(path="./model/sklearn_mnist_model.pkl") env = Environment( conda_file="./model/conda.yml",- image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1", + image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest", ) requests = ResourceSettings(cpu="0.1", memory="0.2G") |
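To show how the model, environment, and instance type above fit together, here's a minimal SDK v2 sketch of a Kubernetes online deployment. The workspace details, endpoint name, and instance type name are placeholders for values that already exist in your workspace.

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import (
    CodeConfiguration,
    Environment,
    KubernetesOnlineDeployment,
    Model,
)
from azure.identity import DefaultAzureCredential

# Placeholder workspace details -- replace with your own values.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

model = Model(path="./model/sklearn_mnist_model.pkl")
env = Environment(
    conda_file="./model/conda.yml",
    image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest",
)

# instance_type must be one of the instance types defined on the attached Kubernetes
# compute; resource requests can also be set, as in the YAML shown above.
deployment = KubernetesOnlineDeployment(
    name="blue",
    endpoint_name="<existing-kubernetes-endpoint>",
    model=model,
    environment=env,
    code_configuration=CodeConfiguration(code="./script/", scoring_script="score.py"),
    instance_type="<instance-type-name>",
    instance_count=1,
)

ml_client.online_deployments.begin_create_or_update(deployment).result()
```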
machine-learning | How To Manage Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-models.md | |
machine-learning | How To Manage Workspace Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace-cli.md | az group delete -g <resource-group-name> For more information, see the [az ml workspace delete](/cli/azure/ml/workspace#az-ml-workspace-delete) documentation. -If you accidentally deleted your workspace, are still able to retrieve your notebooks. For more information, see the [workspace deletion](./v1/how-to-high-availability-machine-learning.md#workspace-deletion) section of the disaster recovery article. +> [!TIP] +> The default behavior for Azure Machine Learning is to _soft delete_ the workspace. This means that the workspace is not immediately deleted, but instead is marked for deletion. For more information, see [Soft delete](./concept-soft-delete.md). ## Troubleshooting |
machine-learning | How To Manage Workspace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace.md | When you no longer need a workspace, delete it. [!INCLUDE [machine-learning-delete-workspace](../../includes/machine-learning-delete-workspace.md)] -If you accidentally deleted your workspace, you may still be able to retrieve your notebooks. For details, see [Failover for business continuity and disaster recovery](./v1/how-to-high-availability-machine-learning.md#workspace-deletion). +> [!TIP] +> The default behavior for Azure Machine Learning is to _soft delete_ the workspace. This means that the workspace is not immediately deleted, but instead is marked for deletion. For more information, see [Soft delete](./concept-soft-delete.md). # [Python SDK](#tab/python) |
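If you script the deletion with the Python SDK v2 instead of the portal, a minimal sketch looks like the following; the subscription, resource group, and workspace names are placeholders, and the workspace is soft deleted by default as the tip above notes.

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Placeholder values -- replace with your own subscription and resource group.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
)

# Soft deletes the workspace by default; dependent resources are kept.
ml_client.workspaces.begin_delete(
    name="<workspace-name>",
    delete_dependent_resources=False,
).result()
```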
machine-learning | How To Use Foundation Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-foundation-models.md | Title: How to use Open Source Foundation Models curated by Azure Machine Learning (preview) + Title: How to use Open Source foundation models curated by Azure Machine Learning (preview) -description: Learn how to discover, evaluate, fine-tune and deploy Open Source Foundation Models in Azure Machine Learning +description: Learn how to discover, evaluate, fine-tune and deploy Open Source foundation models in Azure Machine Learning Previously updated : 04/25/2023 Last updated : 06/15/2023 -# How to use Open Source Foundation Models curated by Azure Machine Learning (preview) +# How to use Open Source foundation models curated by Azure Machine Learning (preview) > [!IMPORTANT] > Items marked (preview) in this article are currently in public preview. > The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). -In this article, you learn how to access and evaluate Foundation Models using Azure Machine Learning automated ML in the [Azure Machine Learning studio](overview-what-is-azure-machine-learning.md#studio). Additionally, you learn how to fine-tune each model and how to deploy the model at scale. +In this article, you learn how to access and evaluate foundation models using Azure Machine Learning automated ML in the [Azure Machine Learning studio](overview-what-is-azure-machine-learning.md#studio). Additionally, you learn how to fine-tune each model and how to deploy the model at scale. -Foundation Models are machine learning models that have been pre-trained on vast amounts of data, and that can be fine tuned for specific tasks with relatively small amount of domain specific data. These models serve as a starting point for custom models and accelerate the model building process for a variety of tasks including natural language processing, computer vision, speech and generative AI tasks. Azure Machine Learning provides the capability to easily integrate these pre-trained Foundation Models into your applications. **Foundation Models in Azure Machine Learning** provides Azure Machine Learning native capabilities that enable customers to discover, evaluate, fine tune, deploy and operationalize open-source Foundation Models at scale. +Foundation models are machine learning models that have been pre-trained on vast amounts of data, and that can be fine tuned for specific tasks with relatively small amount of domain specific data. These models serve as a starting point for custom models and accelerate the model building process for a variety of tasks including natural language processing, computer vision, speech and generative AI tasks. Azure Machine Learning provides the capability to easily integrate these pre-trained foundation models into your applications. **foundation models in Azure Machine Learning** provides Azure Machine Learning native capabilities that enable customers to discover, evaluate, fine tune, deploy and operationalize open-source foundation models at scale. 
-## How to access Foundation Models in Azure Machine Learning +## How to access foundation models in Azure Machine Learning -The 'Model catalog' (preview) in Azure Machine Learning Studio is a hub for discovering Foundation Models. The Open Source Models collection is a repository of the most popular open source Foundation Models curated by Azure Machine Learning. These models are packaged for out of the box usage and are optimized for use in Azure Machine Learning. Currently, it includes the top open source large language models, with support for other tasks coming soon. You can view the complete list of supported open source Foundation Models in the [Model catalog](https://ml.azure.com/model/catalog), under the `Open Source Models` collection. +The 'Model catalog' (preview) in Azure Machine Learning studio is a hub for discovering foundation models. The Open Source Models collection is a repository of the most popular open source foundation models curated by Azure Machine Learning. These models are packaged for out of the box usage and are optimized for use in Azure Machine Learning. Currently, it includes the top open source large language models, with support for other tasks coming soon. You can view the complete list of supported open source foundation models in the [model catalog](https://ml.azure.com/model/catalog), under the `Open Source Models` collection. :::image type="content" source="./media/how-to-use-foundation-models/model-catalog.png" lightbox="./media/how-to-use-foundation-models/model-catalog.png" alt-text="Screenshot showing the model catalog section in Azure Machine Learning studio." ::: -You can filter the list of models in the Model catalog by Task, or by license. Select a specific model name and the see a model card for the selected model, which lists detailed information about the model. For example: +You can filter the list of models in the model catalog by Task, or by license. Select a specific model name and the see a model card for the selected model, which lists detailed information about the model. For example: :::image type="content" source="./media/how-to-use-foundation-models\model-card.png" lightbox="./media/how-to-use-foundation-models\model-card.png" alt-text="Screenshot showing the model card for gpt2 in Azure Machine Learning studio. The model card shows a description of the model and samples of what the model outputs. "::: You can filter the list of models in the Model catalog by Task, or by license. S You can quickly test out any pre-trained model using the Sample Inference widget on the model card, providing your own sample input to test the result. Additionally, the model card for each model includes a brief description of the model and links to samples for code based inferencing, finetuning and evaluation of the model. > [!NOTE]->If you are using a private workspace, your virtual network needs to allow outbound access in order to use Foundation Models in Azure Machine Learning +>If you are using a private workspace, your virtual network needs to allow outbound access in order to use foundation models in Azure Machine Learning -## How to evaluate Foundation Models using your own test data +## How to evaluate foundation models using your own test data You can evaluate a Foundation Model against your test dataset, using either the Evaluate UI wizard or by using the code based samples, linked from the model card. 
-### Evaluating using UI wizard +### Evaluating using the studio -You can invoke the Evaluate UI wizard by clicking on the 'Evaluate' button on the model card for any foundation model. +You can invoke the Evaluate model form by clicking on the 'Evaluate' button on the model card for any foundation model. -An image of the Evaluation Settings wizard: +An image of the Evaluation Settings form: Each model can be evaluated for the specific inference task that the model can be used for. Each model can be evaluated for the specific inference task that the model can b 1. Pass in the test data you would like to use to evaluate your model. You can choose to either upload a local file (in JSONL format) or select an existing registered dataset from your workspace. 1. Once you've selected the dataset, you need to map the columns from your input data, based on the schema needed for the task. For example, map the column names that correspond to the 'sentence' and 'label' keys for Text Classification **Compute:** 1. Provide the Azure Machine Learning Compute cluster you would like to use for finetuning the model. Evaluation needs to run on GPU compute. Ensure that you have sufficient compute quota for the compute SKUs you wish to use. -1. Select 'Finish' in the Evaluate wizard to submit your evaluation job. Once the job completes, you can view evaluation metrics for the model. Based on the evaluation metrics, you might decide if you would like to finetune the model using your own training data. Additionally, you can decide if you would like to register the model and deploy it to an endpoint. +1. Select **Finish** in the Evaluate wizard to submit your evaluation job. Once the job completes, you can view evaluation metrics for the model. Based on the evaluation metrics, you might decide if you would like to finetune the model using your own training data. Additionally, you can decide if you would like to register the model and deploy it to an endpoint. **Advanced Evaluation Parameters:** Each model can be evaluated for the specific inference task that the model can b ### Evaluating using code based samples -To enable users to get started with model evaluation, we have published samples (both Python notebooks and CLI examples) in the [Evaluation samples in azureml-examples git repo](https://github.com/Azure/azureml-examples/tree/main/sdk/python/foundation-models/system/evaluation). Each model card also links to Evaluation samples for corresponding tasks +To enable users to get started with model evaluation, we have published samples (both Python notebooks and CLI examples) in the [Evaluation samples in azureml-examples git repo](https://github.com/Azure/azureml-examples/tree/main/sdk/python/foundation-models/system/evaluation). Each model card also links to evaluation samples for corresponding tasks -## How to finetune Foundation Models using your own training data +## How to finetune foundation models using your own training data -In order to improve model performance in your workload, you might want to fine tune a foundation model using your own training data. You can easily finetune these Foundation Models by using either the Finetune UI wizard or by using the code based samples linked from the model card. +In order to improve model performance in your workload, you might want to fine tune a foundation model using your own training data. You can easily finetune these foundation models by using either the finetune settings in the studio or by using the code based samples linked from the model card. 
-### Finetuning using the UI wizard +### Finetune using the studio +You can invoke the finetune settings form by selecting on the **Finetune** button on the model card for any foundation model. -You can invoke the Finetune UI wizard by clicking on the 'Finetune' button on the model card for any foundation model. +**Finetune Settings:** -**Finetuning Settings:** - **Finetuning task type** You can invoke the Finetune UI wizard by clicking on the 'Finetune' button on th 1. Once you've selected the dataset, you need to map the columns from your input data, based on the schema needed for the task. For example: map the column names that correspond to the 'sentence' and 'label' keys for Text Classification -* Validation data: Pass in the data you would like to use to validate your model. Selecting 'Automatic split' reserves an automatic split of training data for validation. Alternatively, you can provide a different validation dataset. -* Test data: Pass in the test data you would like to use to evaluate your finetuned model. Selecting 'Automatic split' reserves an automatic split of training data for test. -* Compute: Provide the Azure Machine Learning Compute cluster you would like to use for finetuning the model. Fine tuning needs to run on GPU compute. We recommend using compute SKUs with A100 / V100 GPUs when fine tuning. Ensure that you have sufficient compute quota for the compute SKUs you wish to use. +* Validation data: Pass in the data you would like to use to validate your model. Selecting **Automatic split** reserves an automatic split of training data for validation. Alternatively, you can provide a different validation dataset. +* Test data: Pass in the test data you would like to use to evaluate your finetuned model. Selecting **Automatic split** reserves an automatic split of training data for test. +* Compute: Provide the Azure Machine Learning Compute cluster you would like to use for finetuning the model. Finetuning needs to run on GPU compute. We recommend using compute SKUs with A100 / V100 GPUs when fine tuning. Ensure that you have sufficient compute quota for the compute SKUs you wish to use. -3. Select 'Finish' in the Finetune Wizard to submit your finetuning job. Once the job completes, you can view evaluation metrics for the finetuned model. You can then go ahead and register the finetuned model output by the finetuning job and deploy this model to an endpoint for inferencing. +3. Select **Finish** in the finetune form to submit your finetuning job. Once the job completes, you can view evaluation metrics for the finetuned model. You can then register the finetuned model output by the finetuning job and deploy this model to an endpoint for inferencing. -**Advanced Finetuning Parameters:** +**Advanced finetuning parameters:** -The Finetuning UI wizard, allows you to perform basic finetuning by providing your own training data. Additionally, there are several advanced finetuning parameters, such as learning rate, epochs, batch size, etc., described in the Readme file for each task [here](https://github.com/Azure/azureml-assets/tree/main/training/finetune_acft_hf_nlp/components/finetune). Each of these settings has default values, but can be customized via code based samples, if needed. +The finetuning feature, allows you to perform basic finetuning by providing your own training data. 
Additionally, there are several advanced finetuning parameters, such as learning rate, epochs, batch size, etc., described in the Readme file for each task [here](https://github.com/Azure/azureml-assets/tree/main/training/finetune_acft_hf_nlp/components/finetune). Each of these settings has default values, but can be customized via code based samples, if needed. ### Finetuning using code based samples Currently, Azure Machine Learning supports finetuning models for the following l * Summarization * Translation -To enable users to quickly get started with fine tuning, we have published samples (both Python notebooks and CLI examples) for each task in the [azureml-examples git repo Finetune samples](https://github.com/Azure/azureml-examples/tree/main/sdk/python/foundation-models/system/finetune). Each model card also links to Finetuning samples for supported finetuning tasks. +To enable users to quickly get started with finetuning, we have published samples (both Python notebooks and CLI examples) for each task in the [azureml-examples git repo Finetune samples](https://github.com/Azure/azureml-examples/tree/main/sdk/python/foundation-models/system/finetune). Each model card also links to Finetuning samples for supported finetuning tasks. -## Deploying Foundation Models to endpoints for inferencing +## Deploying foundation models to endpoints for inferencing -You can deploy Foundation Models (both pre-trained models from the model catalog, and finetuned models, once they're registered to your workspace) to an endpoint that can then be used for inferencing. Deployment to both real time endpoints and batch endpoints is supported. You can deploy these models by using either the Deploy UI wizard or by using the code based samples linked from the model card. +You can deploy foundation models (both pre-trained models from the model catalog, and finetuned models, once they're registered to your workspace) to an endpoint that can then be used for inferencing. Deployment to both real time endpoints and batch endpoints is supported. You can deploy these models by using either the Deploy UI wizard or by using the code based samples linked from the model card. -### Deploying using the UI wizard +### Deploying using the studio You can invoke the Deploy UI wizard by clicking on the 'Deploy' button on the model card for any foundation model, and selecting either Real-time endpoint or Batch endpoint Since the scoring script and environment are automatically included with the fou To enable users to quickly get started with deployment and inferencing, we have published samples in the [Inference samples in the azureml-examples git repo](https://github.com/Azure/azureml-examples/tree/main/sdk/python/foundation-models/system/inference). The published samples include Python notebooks and CLI examples. Each model card also links to Inference samples for Real time and Batch inferencing. -## Import Foundation Models +## Import foundation models -If you're looking to use an open source model that isn't included in the Model Catalog, you can import the model from Hugging Face into your Azure Machine Learning workspace. Hugging Face is an open-source library for natural language processing (NLP) that provides pre-trained models for popular NLP tasks. 
Currently, model import supports importing models for the following tasks, as long as the model meets the requirements listed in the Model Import Notebook: +If you're looking to use an open source model that isn't included in the model catalog, you can import the model from Hugging Face into your Azure Machine Learning workspace. Hugging Face is an open-source library for natural language processing (NLP) that provides pre-trained models for popular NLP tasks. Currently, model import supports importing models for the following tasks, as long as the model meets the requirements listed in the Model Import Notebook: * fill-mask * token-classification If you're looking to use an open source model that isn't included in the Model C > [!NOTE] >Models from Hugging Face are subject to third-party license terms available on the Hugging Face model details page. It is your responsibility to comply with the model's license terms. -You can select the "Import" button on the top-right of the Model Catalog to use the Model Import Notebook. +You can select the "Import" button on the top-right of the model catalog to use the Model Import Notebook. :::image type="content" source="./media/how-to-use-foundation-models/model-import.png" alt-text="Screenshot showing the model import button as it's displayed in the top right corner on the foundation model catalog."::: You need to provide compute for the Model import to run. Running the Model Impor ## Next Steps -To learn about how foundation model compares to other methods of training, visit [Foundation Models.](./concept-foundation-models.md) +To learn about how foundation model compares to other methods of training, visit [foundation models.](./concept-foundation-models.md) |
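Outside the studio, the curated models in the catalog can also be retrieved programmatically by pointing `MLClient` at the `azureml` system registry. The following is a minimal sketch; `gpt2` is used only because it appears in the model card screenshot above, so substitute any model name from the catalog.

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Connect to the "azureml" system registry that hosts the curated open source models.
registry_ml_client = MLClient(credential=DefaultAzureCredential(), registry_name="azureml")

# Retrieve the latest version of a curated model; "gpt2" is an illustrative choice.
foundation_model = registry_ml_client.models.get(name="gpt2", label="latest")

print(f"Model id: {foundation_model.id}")
print(f"Latest version: {foundation_model.version}")
```

The resulting model ID is typically what the evaluation, finetuning, and inference samples linked from the model card take as their model input.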
machine-learning | Reference Machine Learning Cloud Parity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-machine-learning-cloud-parity.md | In the list of global Azure regions, there are several regions that serve specif Azure Machine Learning is still in development in air-gap Regions. The information in the rest of this document provides information on what features of Azure Machine Learning are available in these regions, along with region-specific information on using these features.-## Azure Government +## Azure Government | Feature | Public cloud status | US-Virginia | US-Arizona| |-|:-:|:--:|:-:| The information in the rest of this document provides information on what featur | [Azure Stack Edge with FPGA (SDK/CLI v1)](./v1/how-to-deploy-fpga-web-service.md#deploy-to-a-local-edge-server) | Public Preview | NO | NO | | **Other** | | | | | [Open Datasets](../open-datasets/samples.md) | Public Preview | YES | YES |-| [Custom Cognitive Search](how-to-deploy-model-cognitive-search.md) | Public Preview | YES | YES | +| [Custom Cognitive Search](./v1/how-to-deploy-model-cognitive-search.md) | Public Preview | YES | YES | ### Azure Government scenarios The information in the rest of this document provides information on what featur | Scenario | US-Virginia | US-Arizona| Limitations | |-|:-:|:--:|-| | **General security setup** | | | |-| Disable/control internet access (inbound and outbound) and specific VNet | PARTIAL| PARTIAL | | +| Disable/control internet access (inbound and outbound) and specific VNet | PARTIAL| PARTIAL | | | Placement for all associated resources/services | YES | YES | | | Encryption at-rest and in-transit. | YES | YES | | | Root and SSH access to compute resources. | YES | YES | |-| Maintain the security of deployed systems (instances, endpoints, etc.), including endpoint protection, patching, and logging | PARTIAL| PARTIAL |ACI behind VNet currently not available | -| Control (disable/limit/restrict) the use of ACI/AKS integration | PARTIAL| PARTIAL |ACI behind VNet currently not available| +| Maintain the security of deployed systems (instances, endpoints, etc.), including endpoint protection, patching, and logging | PARTIAL| PARTIAL |ACI behind VNet currently not available | +| Control (disable/limit/restrict) the use of ACI/AKS integration | PARTIAL| PARTIAL |ACI behind VNet currently not available| | Azure role-based access control (Azure RBAC) - Custom Role Creations | YES | YES | |-| Control access to ACR images used by ML Service (Azure provided/maintained versus custom) |PARTIAL| PARTIAL | | +| Control access to ACR images used by ML Service (Azure provided/maintained versus custom) |PARTIAL| PARTIAL | | | **General Machine Learning Service Usage** | | | | | Ability to have a development environment to build a model, train that model, host it as an endpoint, and consume it via a webapp | YES | YES | | | Ability to pull data from ADLS (Data Lake Storage) |YES | YES | | The information in the rest of this document provides information on what featur * For both: `graph.windows.net` -## Azure China 21Vianet +## Azure China 21Vianet | Feature | Public cloud status | CH-East-2 | CH-North-3 | |-|::|:--:|:-:| |
machine-learning | How To Deploy Model Cognitive Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-model-cognitive-search.md | + + Title: Deploy a model for use with Cognitive Search ++description: Learn how to use Azure Machine Learning to deploy a model for use with Cognitive Search. The model is used as a custom skill to enrich the search experience. +++++++ Last updated : 03/11/2021++monikerRange: 'azureml-api-1' ++++# Deploy a model for use with Cognitive Search +++This article teaches you how to use Azure Machine Learning to deploy a model for use with [Azure Cognitive Search](/azure/search/search-what-is-azure-search). ++Cognitive Search performs content processing over heterogenous content, to make it queryable by humans or applications. This process can be enhanced by using a model deployed from Azure Machine Learning. ++Azure Machine Learning can deploy a trained model as a web service. The web service is then embedded in a Cognitive Search _skill_, which becomes part of the processing pipeline. ++> [!IMPORTANT] +> The information in this article is specific to the deployment of the model. It provides information on the supported deployment configurations that allow the model to be used by Cognitive Search. +> +> For information on how to configure Cognitive Search to use the deployed model, see the [Build and deploy a custom skill with Azure Machine Learning](/azure/search/cognitive-search-tutorial-aml-custom-skill) tutorial. ++When deploying a model for use with Azure Cognitive Search, the deployment must meet the following requirements: ++* Use Azure Kubernetes Service to host the model for inference. +* Enable transport layer security (TLS) for the Azure Kubernetes Service. TLS is used to secure HTTPS communications between Cognitive Search and the deployed model. +* The entry script must use the `inference_schema` package to generate an OpenAPI (Swagger) schema for the service. +* The entry script must also accept JSON data as input, and generate JSON as output. +++## Prerequisites ++* An Azure Machine Learning workspace. For more information, see [Create workspace resources](../quickstart-create-resources.md). ++* A Python development environment with the Azure Machine Learning SDK installed. For more information, see [Azure Machine Learning SDK](/python/api/overview/azure/ml/install). ++* A registered model. ++* A general understanding of [How and where to deploy models](how-to-deploy-and-where.md). ++## Connect to your workspace ++An Azure Machine Learning workspace provides a centralized place to work with all the artifacts you create when you use Azure Machine Learning. The workspace keeps a history of all training jobs, including logs, metrics, output, and a snapshot of your scripts. ++To connect to an existing workspace, use the following code: ++> [!IMPORTANT] +> This code snippet expects the workspace configuration to be saved in the current directory or its parent. For more information, see [Create and manage Azure Machine Learning workspaces](how-to-manage-workspace.md). For more information on saving the configuration to file, see [Create a workspace configuration file](how-to-configure-environment.md). 
++```python +from azureml.core import Workspace ++try: + # Load the workspace configuration from local cached inffo + ws = Workspace.from_config() + print(ws.name, ws.location, ws.resource_group, ws.location, sep='\t') + print('Library configuration succeeded') +except: + print('Workspace not found') +``` ++## Create a Kubernetes cluster ++**Time estimate**: Approximately 20 minutes. ++A Kubernetes cluster is a set of virtual machine instances (called nodes) that are used for running containerized applications. ++When you deploy a model from Azure Machine Learning to Azure Kubernetes Service, the model and all the assets needed to host it as a web service are packaged into a Docker container. This container is then deployed onto the cluster. ++The following code demonstrates how to create a new Azure Kubernetes Service (AKS) cluster for your workspace: ++> [!TIP] +> You can also attach an existing Azure Kubernetes Service to your Azure Machine Learning workspace. For more information, see [How to deploy models to Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md). ++> [!IMPORTANT] +> Notice that the code uses the `enable_ssl()` method to enable transport layer security (TLS) for the cluster. This is required when you plan on using the deployed model from Cognitive Search. ++```python +from azureml.core.compute import AksCompute, ComputeTarget +# Create or attach to an AKS inferencing cluster ++# Create the provisioning configuration with defaults +prov_config = AksCompute.provisioning_configuration() ++# Enable TLS (sometimes called SSL) communications +# Leaf domain label generates a name using the formula +# "<leaf-domain-label>######.<azure-region>.cloudapp.azure.com" +# where "######" is a random series of characters +prov_config.enable_ssl(leaf_domain_label = "contoso") ++cluster_name = 'amlskills' +# Try to use an existing compute target by that name. +# If one doesn't exist, create one. +try: + + aks_target = ComputeTarget(ws, cluster_name) + print("Attaching to existing cluster") +except Exception as e: + print("Creating new cluster") + aks_target = ComputeTarget.create(workspace = ws, + name = cluster_name, + provisioning_configuration = prov_config) + # Wait for the create process to complete + aks_target.wait_for_completion(show_output = True) +``` ++> [!IMPORTANT] +> Azure will bill you as long as the AKS cluster exists. Make sure to delete your AKS cluster when you're done with it. ++For more information on using AKS with Azure Machine Learning, see [How to deploy to Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md). ++## Write the entry script ++The entry script receives data submitted to the web service, passes it to the model, and returns the scoring results. The following script loads the model on startup, and then uses the model to score data. This file is sometimes called `score.py`. ++> [!TIP] +> The entry script is specific to your model. For example, the script must know the framework to use with your model, data formats, etc. ++> [!IMPORTANT] +> When you plan on using the deployed model from Azure Cognitive Search you must use the `inference_schema` package to enable schema generation for the deployment. This package provides decorators that allow you to define the input and output data format for the web service that performs inference using the model. 
++```python +from azureml.core.model import Model +from nlp_architect.models.absa.inference.inference import SentimentInference +from spacy.cli.download import download as spacy_download +import traceback +import json +# Inference schema for schema discovery +from inference_schema.schema_decorators import input_schema, output_schema +from inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType +from inference_schema.parameter_types.standard_py_parameter_type import StandardPythonParameterType ++def init(): + """ + Set up the ABSA model for Inference + """ + global SentInference + spacy_download('en') + aspect_lex = Model.get_model_path('hotel_aspect_lex') + opinion_lex = Model.get_model_path('hotel_opinion_lex') + SentInference = SentimentInference(aspect_lex, opinion_lex) ++# Use inference schema decorators and sample input/output to +# build the OpenAPI (Swagger) schema for the deployment +standard_sample_input = {'text': 'a sample input record containing some text' } +standard_sample_output = {"sentiment": {"sentence": "This place makes false booking prices, when you get there, they say they do not have the reservation for that day.", + "terms": [{"text": "hotels", "type": "AS", "polarity": "POS", "score": 1.0, "start": 300, "len": 6}, + {"text": "nice", "type": "OP", "polarity": "POS", "score": 1.0, "start": 295, "len": 4}]}} +@input_schema('raw_data', StandardPythonParameterType(standard_sample_input)) +@output_schema(StandardPythonParameterType(standard_sample_output)) +def run(raw_data): + try: + # Get the value of the 'text' field from the JSON input and perform inference + input_txt = raw_data["text"] + doc = SentInference.run(doc=input_txt) + if doc is None: + return None + sentences = doc._sentences + result = {"sentence": doc._doc_text} + terms = [] + for sentence in sentences: + for event in sentence._events: + for x in event: + term = {"text": x._text, "type":x._type.value, "polarity": x._polarity.value, "score": x._score,"start": x._start,"len": x._len } + terms.append(term) + result["terms"] = terms + print("Success!") + # Return the results to the client as a JSON document + return {"sentiment": result} + except Exception as e: + result = str(e) + # return error message back to the client + print("Failure!") + print(traceback.format_exc()) + return json.dumps({"error": result, "tb": traceback.format_exc()}) +``` ++For more information on entry scripts, see [How and where to deploy](how-to-deploy-and-where.md). ++## Define the software environment ++The environment class is used to define the Python dependencies for the service. It includes dependencies required by both the model and the entry script. In this example, it installs packages from the regular pypi index, as well as from a GitHub repo. ++```python +from azureml.core.conda_dependencies import CondaDependencies +from azureml.core import Environment ++conda = None +pip = ["azureml-defaults", "azureml-monitoring", + "git+https://github.com/NervanaSystems/nlp-architect.git@absa", 'nlp-architect', 'inference-schema', + "spacy==2.0.18"] ++conda_deps = CondaDependencies.create(conda_packages=None, pip_packages=pip) ++myenv = Environment(name='myenv') +myenv.python.conda_dependencies = conda_deps +``` ++For more information on environments, see [Create and manage environments for training and deployment](how-to-use-environments.md). ++## Define the deployment configuration ++The deployment configuration defines the Azure Kubernetes Service hosting environment used to run the web service. 
++> [!TIP] +> If you aren't sure about the memory, CPU, or GPU needs of your deployment, you can use profiling to learn these. For more information, see [How and where to deploy a model](how-to-deploy-and-where.md). ++```python +from azureml.core.model import Model +from azureml.core.webservice import Webservice +from azureml.core.image import ContainerImage +from azureml.core.webservice import AksWebservice, Webservice ++# If deploying to a cluster configured for dev/test, ensure that it was created with enough +# cores and memory to handle this deployment configuration. Note that memory is also used by +# things such as dependencies and Azure Machine Learning components. ++aks_config = AksWebservice.deploy_configuration(autoscale_enabled=True, + autoscale_min_replicas=1, + autoscale_max_replicas=3, + autoscale_refresh_seconds=10, + autoscale_target_utilization=70, + auth_enabled=True, + cpu_cores=1, memory_gb=2, + scoring_timeout_ms=5000, + replica_max_concurrent_requests=2, + max_request_wait_time=5000) +``` ++For more information, see the reference documentation for [AksService.deploy_configuration](/python/api/azureml-core/azureml.core.webservice.akswebservice#deploy-configuration-autoscale-enabled-none--autoscale-min-replicas-none--autoscale-max-replicas-none--autoscale-refresh-seconds-none--autoscale-target-utilization-none--collect-model-data-none--auth-enabled-none--cpu-cores-none--memory-gb-none--enable-app-insights-none--scoring-timeout-ms-none--replica-max-concurrent-requests-none--max-request-wait-time-none--num-replicas-none--primary-key-none--secondary-key-none--tags-none--properties-none--description-none--gpu-cores-none--period-seconds-none--initial-delay-seconds-none--timeout-seconds-none--success-threshold-none--failure-threshold-none--namespace-none--token-auth-enabled-none--compute-target-name-none-). ++## Define the inference configuration ++The inference configuration points to the entry script and the environment object: ++```python +from azureml.core.model import InferenceConfig +inf_config = InferenceConfig(entry_script='score.py', environment=myenv) +``` ++For more information, see the reference documentation for [InferenceConfig](/python/api/azureml-core/azureml.core.model.inferenceconfig). ++## Deploy the model ++Deploy the model to your AKS cluster and wait for it to create your service. In this example, two registered models are loaded from the registry and deployed to AKS. After deployment, the `score.py` file in the deployment loads these models and uses them to perform inference. ++```python +from azureml.core.webservice import AksWebservice, Webservice ++c_aspect_lex = Model(ws, 'hotel_aspect_lex') +c_opinion_lex = Model(ws, 'hotel_opinion_lex') +service_name = "hotel-absa-v2" ++aks_service = Model.deploy(workspace=ws, + name=service_name, + models=[c_aspect_lex, c_opinion_lex], + inference_config=inf_config, + deployment_config=aks_config, + deployment_target=aks_target, + overwrite=True) ++aks_service.wait_for_deployment(show_output = True) +print(aks_service.state) +``` ++For more information, see the reference documentation for [Model](/python/api/azureml-core/azureml.core.model.model). ++## Issue a sample query to your service ++The following example uses the deployment information stored in the `aks_service` variable by the previous code section. 
It uses this variable to retrieve the scoring URL and authentication token needed to communicate with the service: ++```python +import requests +import json ++primary, secondary = aks_service.get_keys() ++# Test data +input_data = '{"raw_data": {"text": "This is a nice place for a relaxing evening out with friends. The owners seem pretty nice, too. I have been there a few times including last night. Recommend."}}' ++# Since authentication was enabled for the deployment, set the authorization header. +headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+ primary)} ++# Send the request and display the results +resp = requests.post(aks_service.scoring_uri, input_data, headers=headers) +print(resp.text) +``` ++The result returned from the service is similar to the following JSON: ++```json +{"sentiment": {"sentence": "This is a nice place for a relaxing evening out with friends. The owners seem pretty nice, too. I have been there a few times including last night. Recommend.", "terms": [{"text": "place", "type": "AS", "polarity": "POS", "score": 1.0, "start": 15, "len": 5}, {"text": "nice", "type": "OP", "polarity": "POS", "score": 1.0, "start": 10, "len": 4}]}} +``` ++## Connect to Cognitive Search ++For information on using this model from Cognitive Search, see the [Build and deploy a custom skill with Azure Machine Learning](/azure/search/cognitive-search-tutorial-aml-custom-skill) tutorial. ++## Clean up the resources ++If you created the AKS cluster specifically for this example, delete your resources after you're done testing it with Cognitive Search. ++> [!IMPORTANT] +> Azure bills you based on how long the AKS cluster is deployed. Make sure to clean it up after you are done with it. ++```python +aks_service.delete() +aks_target.delete() +``` ++## Next steps ++* [Build and deploy a custom skill with Azure Machine Learning](/azure/search/cognitive-search-tutorial-aml-custom-skill) |
machine-learning | How To Manage Workspace Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-manage-workspace-cli.md | You can also delete the resource group, which deletes the workspace and all othe az group delete -g <resource-group-name> ``` -If you accidentally deleted your workspace, are still able to retrieve your notebooks. For more information, see the [workspace deletion](how-to-high-availability-machine-learning.md#workspace-deletion) section of the disaster recovery article. +> [!TIP] +> The default behavior for Azure Machine Learning is to _soft delete_ the workspace. This means that the workspace is not immediately deleted, but instead is marked for deletion. For more information, see [Soft delete](../concept-soft-delete.md). ## Troubleshooting |
machine-learning | How To Manage Workspace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-manage-workspace.md | When you no longer need a workspace, delete it. [!INCLUDE [machine-learning-delete-workspace](../../../includes/machine-learning-delete-workspace.md)] -If you accidentally deleted your workspace, you may still be able to retrieve your notebooks. For details, see [Failover for business continuity and disaster recovery](how-to-high-availability-machine-learning.md#workspace-deletion). +> [!TIP] +> The default behavior for Azure Machine Learning is to _soft delete_ the workspace. This means that the workspace is not immediately deleted, but instead is marked for deletion. For more information, see [Soft delete](../concept-soft-delete.md). Delete the workspace `ws`: |
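For reference, a minimal SDK v1 sketch of that deletion, assuming `ws` is the `Workspace` object loaded earlier in the article:

```python
from azureml.core import Workspace

# Load the workspace from the saved config file; `ws` may already exist in your session.
ws = Workspace.from_config()

# Soft deletes the workspace by default and keeps dependent resources
# (storage account, container registry, and so on).
ws.delete(delete_dependent_resources=False, no_wait=False)
```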
network-watcher | Supported Region Traffic Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/supported-region-traffic-analytics.md | Title: Traffic analytics supported regions -description: This article provides the list of Azure Network Watcher traffic analytics supported regions. +description: Learn about the regions that support enabling traffic analytics on NSG flow logs and the Log Analytics workspaces that you can use. - Previously updated : 06/15/2022 Last updated : 06/15/2023 -# Azure Network Watcher traffic analytics supported regions +# Traffic analytics supported regions -This article provides the list of regions supported by Traffic Analytics. You can view the list of supported regions of both NSG and Log Analytics Workspaces below. +In this article, you learn about Azure regions that support enabling [traffic analytics](traffic-analytics.md) for NSG flow logs. -## Supported regions: NSG +## Supported regions: network security groups ++You can enable traffic analytics for NSG flow logs for network security groups that exist in any of the following Azure regions: -You can use traffic analytics for NSGs in any of the following supported regions: :::row::: :::column span=""::: Australia Central You can use traffic analytics for NSGs in any of the following supported regions :::column-end::: :::row-end::: -## Supported regions: Log Analytics Workspaces +## Supported regions: Log Analytics workspaces ++The Log Analytics workspace that you use for traffic analytics must exist in one of the following Azure regions: -The Log Analytics workspace must exist in the following regions: :::row::: :::column span=""::: Australia Central The Log Analytics workspace must exist in the following regions: :::row-end::: > [!NOTE]-> If NSGs support a region, but the log analytics workspace does not support that region for traffic analytics as per above lists, then you can use log analytics workspace of any other supported region as a workaround. +> If a network security group is supported for flow logging in a region, but Log Analytics workspace isn't supported in that region for traffic analytics, you can use a Log Analytics workspace from any other supported region as a workaround. ## Next steps -- Learn how to [enable flow log settings](enable-network-watcher-flow-log-settings.md).-- Learn the ways to [use traffic analytics](usage-scenarios-traffic-analytics.md).+- Learn more about [Traffic analytics](traffic-analytics.md). +- Learn about [Usage scenarios of traffic analytics](usage-scenarios-traffic-analytics.md). |
operator-nexus | Howto Baremetal Bmc Ssh | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-bmc-ssh.md | The BMCs support a maximum number of 12 users. Users are defined on a per Cluste - The users added must be part of an Azure Active Directory (Azure AD) group. For more information, see [How to Manage Groups](../active-directory/fundamentals/how-to-manage-groups.md). - To restrict access for managing keysets, create a custom role. For more information, see [Azure Custom Roles](../role-based-access-control/custom-roles.md). In this instance, add or exclude permissions for `Microsoft.NetworkCloud/clusters/bmcKeySets`. The options are `/read`, `/write`, and `/delete`. +> [!NOTE] +> When BMC access is created, modified or deleted via the commands described in this +> article, a background process delivers those changes to the machines. This process is paused during +> Operator Nexus software upgrades. If an upgrade is known to be in progress, you can use the `--no-wait` +> option with the command to prevent the command prompt from waiting for the process to complete. + ## Creating a BMC keyset The `bmckeyset create` command creates SSH access to the bare metal machine in a Cluster for a group of users. |
operator-nexus | Howto Baremetal Bmm Ssh | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-bmm-ssh.md | There's no limit to the number of users in a group. - The added users must be part of an Azure Active Directory (Azure AD) group. For more information, see [How to Manage Groups](../active-directory/fundamentals/how-to-manage-groups.md). - To restrict access for managing keysets, create a custom role. For more information, see [Azure Custom Roles](../role-based-access-control/custom-roles.md). In this instance, add or exclude permissions for `Microsoft.NetworkCloud/clusters/bareMetalMachineKeySets`. The options are `/read`, `/write`, and `/delete`. +> [!NOTE] +> When bare metal machine access is created, modified, or deleted via the commands described in this +> article, a background process delivers those changes to the machines. This process is paused during +> Operator Nexus software upgrades. If you know that an upgrade is in progress, you can use the `--no-wait` +> option with the command to prevent the command prompt from waiting for the process to complete. + ## Creating a bare metal machine keyset The `baremetalmachinekeyset create` command creates SSH access to the bare metal machine in a Cluster for a group of users. |
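After submitting a keyset operation with `--no-wait`, you can check whether the background process has finished by reading the resource back. A minimal sketch, assuming the keyset resource surfaces the standard ARM `provisioningState` property; the names are placeholders.

```bash
# Returns, for example, "Succeeded" once the keyset has been fully provisioned.
az networkcloud cluster baremetalmachinekeyset show \
  --name "my-bmm-keyset" \
  --cluster-name "myCluster" \
  --resource-group "myResourceGroup" \
  --query "provisioningState" \
  --output tsv
```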
postgresql | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md | One advantage of running your workload in Azure is global reach. The flexible se | South Central US | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | South India | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :heavy_check_mark: | | Southeast Asia | :heavy_check_mark:(v3/v4 only) | :x: $ | :heavy_check_mark: | :heavy_check_mark: |-| Sweden Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: | +| Sweden Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Switzerland North | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Switzerland West | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :heavy_check_mark: | | UAE North | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :x: | |
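Because the table change above marks Sweden Central as supporting zone-redundant high availability, a flexible server there can now be created with HA enabled. The following is a hedged sketch with placeholder names; the accepted values for `--high-availability` have varied across CLI versions (for example, `Enabled` versus `ZoneRedundant`), and availability of a given SKU in the region isn't guaranteed.

```bash
az postgres flexible-server create \
  --resource-group myResourceGroup \
  --name my-flexible-server \
  --location swedencentral \
  --tier GeneralPurpose \
  --sku-name Standard_D2ds_v4 \
  --high-availability ZoneRedundant
```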
private-multi-access-edge-compute-mec | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-multi-access-edge-compute-mec/overview.md | For more information, see [Azure Private 5G Core](../private-5g-core/private-5g- **Azure Digital Twins**: Azure Digital Twins enables device sensors to be modeled in their business context considering spatial relationships, usage patterns, and other business context that turns a fleet of devices into a digital replica of a physical asset or environment. For more information, see [Azure Digital Twins](https://azure.microsoft.com/services/digital-twins/). ## Next steps+- Learn more about [Azure Private 5G Core](/azure/private-5g-core/private-5g-core-overview) +- Learn more about [Azure Network Function Manager](/azure/network-function-manager/overview) +- Learn more about [Azure Kubernetes Service (AKS) hybrid deployment](/azure/aks/hybrid/) +- Learn more about [Azure Stack Edge](/azure/databox-online/) - Learn more about [Affirmed Private Network Service](affirmed-private-network-service-overview.md) |
purview | Catalog Private Link Faqs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-private-link-faqs.md | Use a Managed IR if: Use a self-hosted integration runtime if: - You are planning to scan data sources in Azure IaaS, SaaS services behind private network or in your on-premises network. - Managed VNet is not available in the region where your Microsoft Purview account is deployed.+- You are planning to scan any sources that are not listed under [Managed VNet IR supported sources](catalog-managed-vnet.md#supported-data-sources). ### Can I use both self-hosted integration runtime and Managed IR inside a Microsoft Purview account? |
sap | Providers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/providers.md | -In the context of *Azure Monitor for SAP solutions*, a *provider* contains the connection information for a corresponding component and helps to collect data from there. There are multiple provider types. For example, an SAP HANA provider is configured for a specific component within the SAP landscape, like an SAP HANA database. You can configure an Azure Monitor for SAP solutions resource (also known as SAP monitor resource) with multiple providers of the same type or multiple providers of multiple types. +In the context of Azure Monitor for SAP solutions, a *provider* contains the connection information for a corresponding component and helps to collect data from there. There are multiple provider types. For example, an SAP HANA provider is configured for a specific component within the SAP landscape, like an SAP HANA database. You can configure an Azure Monitor for SAP solutions resource (also known as an SAP monitor resource) with multiple providers of the same type or multiple providers of multiple types. -You can choose to configure different provider types for data collection from the corresponding component in their SAP landscape. For example, you can configure one provider for the SAP HANA provider type, another provider for high availability cluster provider type, and so on. +You can choose to configure different provider types for data collection from the corresponding component in their SAP landscape. For example, you can configure one provider for the SAP HANA provider type, another provider for the high-availability cluster provider type, and so on. You can also configure multiple providers of a specific provider type to reuse the same SAP monitor resource and associated managed group. For more information, see [Manage Azure Resource Manager resource groups by using the Azure portal](../../azure-resource-manager/management/manage-resource-groups-portal.md). - + -It's recommended to configure at least one provider when you deploy an Azure Monitor for SAP solutions resource. By configuring a provider, you start data collection from the corresponding component for which the provider is configured. +We recommend that you configure at least one provider when you deploy an Azure Monitor for SAP solutions resource. By configuring a provider, you start data collection from the corresponding component for which the provider is configured. -If you don't configure any providers at the time of deployment, the Azure Monitor for SAP solutions resource is still deployed, but no data is collected. You can add providers after deployment through the SAP monitor resource within the Azure portal. You can add or delete providers from the SAP monitor resource at any time. +If you don't configure any providers at the time of deployment, the Azure Monitor for SAP solutions resource is still deployed, but no data is collected. You can add providers after deployment through the SAP monitor resource in the Azure portal. You can add or delete providers from the SAP monitor resource at any time. ## Provider type: SAP NetWeaver -You can configure one or more providers of provider type SAP NetWeaver to enable data collection from SAP NetWeaver layer. 
Azure Monitor for SAP solutions NetWeaver provider uses the existing -- [**SAPControl** Web service](https://www.sap.com/documents/2016/09/0a40e60d-8b7c-0010-82c7-eda71af511fa.html) interface to retrieve the appropriate information.-- SAP RFC - ability to collect additional information from the SAP system using Standard SAP RFC.--You can get the following data with the SAP NetWeaver provider: --- SAP system and application server availability (e.g Instance process availability of dispatcher,ICM,Gateway,Message server,Enqueue Server,IGS Watchdog) (SAPOsControl)-- Work process usage statistics and trends (SAPOsControl)-- Enqueue Lock statistics and trends (SAPOsControl)-- Queue usage statistics and trends (SAPOsControl)-- SMON Metrics (**Tcode - /SDF/SMON**) (RFC)-- SWNC Workload, Memory, Transaction, User, RFC Usage (**Tcode - St03n**) (RFC)-- Short Dumps (**Tcode - ST22**) (RFC)-- Object Lock (**Tcode - SM12**) (RFC)-- Failed Updates (**Tcode - SM13**) (RFC)-- System Logs Analysis (**Tcode - SM21**) (RFC)-- Batch Jobs Statistics (**Tcode - SM37**) (RFC)-- Outbound Queues (**Tcode - SMQ1**) (RFC)-- Inbound Queues (**Tcode - SMQ2**) (RFC)-- Transactional RFC (**Tcode - SM59**) (RFC)-- STMS Change Transport System Metrics (**Tcode - STMS**) (RFC)+You can configure one or more providers of the provider type SAP NetWeaver to enable data collection from the SAP NetWeaver layer. The Azure Monitor for SAP solutions NetWeaver provider uses the existing: ++- [SAPControl Web service](https://www.sap.com/documents/2016/09/0a40e60d-8b7c-0010-82c7-eda71af511fa.html) interface to retrieve the appropriate information. +- SAP RFC ability to collect more information from the SAP system by using Standard SAP RFC. ++With the SAP NetWeaver provider, you can get the: ++- SAP system and application server availability (for example, instance process availability of Dispatcher, ICM, Gateway, Message Server, Enqueue Server, IGS Watchdog) (SAPOsControl). +- Work process usage statistics and trends (SAPOsControl). +- Enqueue lock statistics and trends (SAPOsControl). +- Queue usage statistics and trends (SAPOsControl). +- SMON metrics (**Tcode - /SDF/SMON**) (RFC). +- SWNC workload, memory, transaction, user, RFC usage (**Tcode - St03n**) (RFC). +- Short dumps (**Tcode - ST22**) (RFC). +- Object lock (**Tcode - SM12**) (RFC). +- Failed updates (**Tcode - SM13**) (RFC). +- System logs analysis (**Tcode - SM21**) (RFC). +- Batch jobs statistics (**Tcode - SM37**) (RFC). +- Outbound queues (**Tcode - SMQ1**) (RFC). +- Inbound queues (**Tcode - SMQ2**) (RFC). +- Transactional RFC (**Tcode - SM59**) (RFC). +- STMS Change Transport System metrics (**Tcode - STMS**) (RFC). Configuring the SAP NetWeaver provider requires: -For SOAP Web Methods: - - Fully Qualified Domain Name of SAP Web dispatcher OR SAP Application server. - - SAP System ID, Instance no. - - Host file entries of all SAP application servers that get listed via SAPcontrol "GetSystemInstanceList" web method. +For SOAP web methods: + - Fully qualified domain name (FQDN) of the SAP Web Dispatcher or the SAP application server. + - SAP system ID, Instance no. + - Host file entries of all SAP application servers that get listed via the SAPcontrol `GetSystemInstanceList` web method. For SOAP+RFC:- - Fully Qualified Domain Name of SAP Web dispatcher OR SAP Application server. - - SAP System ID, Instance no. - - SAP Client ID, HTTP port, SAP Username and Password for login. 
- - Host file entries of all SAP application servers that get listed via SAPcontrol "GetSystemInstanceList" web method. + - FQDN of the SAP Web Dispatcher or the SAP application server. + - SAP system ID, Instance no. + - SAP client ID, HTTP port, SAP username and password for login. + - Host file entries of all SAP application servers that get listed via the SAPcontrol `GetSystemInstanceList` web method. -Check [SAP NetWeaver provider](provider-netweaver.md) creation for more detail steps. +For more information, see [Configure SAP NetWeaver for Azure Monitor for SAP solutions](provider-netweaver.md). - + ## Provider type: SAP HANA -You can configure one or more providers of provider type *SAP HANA* to enable data collection from SAP HANA database. The SAP HANA provider connects to the SAP HANA database over SQL port, pulls data from the database, and pushes it to the Log Analytics workspace in your subscription. The SAP HANA provider collects data every 1 minute from the SAP HANA database. +You can configure one or more providers of the provider type **SAP HANA** to enable data collection from the SAP HANA database. The SAP HANA provider connects to the SAP HANA database over the SQL port. The provider pulls data from the database and pushes it to the Log Analytics workspace in your subscription. The SAP HANA provider collects data every minute from the SAP HANA database. -You can see the following data with the SAP HANA provider: +With the SAP HANA provider, you can see the: -- Underlying infrastructure usage-- SAP HANA host status-- SAP HANA system replication-- SAP HANA Backup data-- Fetching Services-- Network throughput between the nodes in a scaleout system-- SAP HANA Long Idling Cursors-- SAP HANA Long Running Transactions-- Checks for configuration parameter values-- SAP HANA Uncommitted Write Transactions-- SAP HANA Disk Fragmentation-- SAP HANA Statistics Server Health-- SAP HANA High Memory Usage Service-- SAP HANA Blocking Transactions+- Underlying infrastructure usage. +- SAP HANA host status. +- SAP HANA system replication. +- SAP HANA backup data. +- Fetching services. +- Network throughput between the nodes in a scaleout system. +- SAP HANA long-idling cursors. +- SAP HANA long-running transactions. +- Checks for configuration parameter values. +- SAP HANA uncommitted write transactions. +- SAP HANA disk fragmentation. +- SAP HANA statistics server health. +- SAP HANA high memory usage service. +- SAP HANA blocking transactions. +Configuring the SAP HANA provider requires the: +- Host IP address. +- HANA SQL port number. +- SYSTEMDB username and password. -Configuring the SAP HANA provider requires: -- The host IP address,-- HANA SQL port number-- **SYSTEMDB** username and password+We recommend that you configure the SAP HANA provider against SYSTEMDB. However, you can configure more providers against other database tenants. -It's recommended to configure the SAP HANA provider against **SYSTEMDB**. However, more providers can be configured against other database tenants. +For more information, see [Configure SAP HANA provider for Azure Monitor for SAP solutions](provider-hana.md). -Check [SAP HANA provider](provider-hana.md) creation for more detail steps. + - +## Provider type: SQL Server -## Provider type: Microsoft SQL server +You can configure one or more SQL Server providers to enable data collection from [SQL Server on virtual machines](https://azure.microsoft.com/services/virtual-machines/sql-server/). 
The SQL Server provider connects to SQL Server over the SQL port. It then pulls data from the database and pushes it to the Log Analytics workspace in your subscription. Configure SQL Server for SQL authentication and for signing in with the SQL Server username and password. Set the SAP database as the default database for the provider. The SQL Server provider collects data every 60 seconds up to every hour from the SQL Server. -You can configure one or more Microsoft SQL Server providers to enable data collection from [SQL Server on Virtual Machines](https://azure.microsoft.com/services/virtual-machines/sql-server/). The SQL Server provider connects to Microsoft SQL Server over the SQL port. It then pulls data from the database and pushes it to the Log Analytics workspace in your subscription. Configure SQL Server for SQL authentication and for signing in with the SQL Server username and password. Set the SAP database as the default database for the provider. The SQL Server provider collects data from every 60 seconds up to every hour from the SQL server. +With the SQL Server provider, you can get the: +- Underlying infrastructure usage. +- Top SQL statements. +- Top largest table. +- Problems recorded in the SQL Server error log. +- Blocking processes and others. -You can get the following data with the SQL Server provider: -- Underlying infrastructure usage-- Top SQL statements-- Top largest table-- Problems recorded in the SQL Server error log-- Blocking processes and others+Configuring SQL Server provider requires the: +- SAP system ID. +- Host IP address. +- SQL Server port number. +- SQL Server username and password. -Configuring Microsoft SQL Server provider requires: -- The SAP System ID-- The Host IP address-- The SQL Server port number-- The SQL Server username and password+ For more information, see [Configure SQL Server for Azure Monitor for SAP solutions](provider-sql-server.md). -Check [SQL Database provider](provider-sql-server.md) creation for more detail steps. -- + ## Provider type: High-availability cluster -You can configure one or more providers of provider type *High-availability cluster* to enable data collection from Pacemaker cluster within the SAP landscape. The High-availability cluster provider connects to Pacemaker using the [ha_cluster_exporter](https://github.com/ClusterLabs/ha_cluster_exporter) for **SUSE** based clusters and by using [Performance co-pilot](https://access.redhat.com/articles/6139852) for **RHEL** based clusters. Azure Monitor for SAP solutions then pulls data from cluster and pushes it to Log Analytics workspace in your subscription. The High-availability cluster provider collects data every 60 seconds from Pacemaker. +You can configure one or more providers of the provider type *high-availability cluster* to enable data collection from the Pacemaker cluster within the SAP landscape. The high-availability cluster provider connects to Pacemaker by using the [ha_cluster_exporter](https://github.com/ClusterLabs/ha_cluster_exporter) for **SUSE**-based clusters and by using [Performance co-pilot](https://access.redhat.com/articles/6139852) for **RHEL**-based clusters. Azure Monitor for SAP solutions then pulls data from the cluster and pushes it to the Log Analytics workspace in your subscription. The high-availability cluster provider collects data every 60 seconds from Pacemaker. 
-You can get the following data with the High-availability cluster provider: +With the high-availability cluster provider, you can get the: + - Cluster status represented as a roll-up of node and resource status. + - Location constraints. + - Trends. + - [Others](https://github.com/ClusterLabs/ha_cluster_exporter/blob/master/doc/metrics.md). - + -To configure a High-availability cluster provider, two primary steps are involved: +To configure a high-availability cluster provider, two primary steps are involved: 1. Install [ha_cluster_exporter](provider-ha-pacemaker-cluster.md) in *each* node within the Pacemaker cluster. - You have two options for installing ha_cluster_exporter: + You have two options for installing `ha_cluster_exporter`: - Use Azure Automation scripts to deploy a high-availability cluster. The scripts install [ha_cluster_exporter](https://github.com/ClusterLabs/ha_cluster_exporter) on each cluster node. - Do a [manual installation](https://github.com/ClusterLabs/ha_cluster_exporter#manual-clone--build). -2. Configure a High-availability cluster provider for *each* node within the Pacemaker cluster. +1. Configure a high-availability cluster provider for *each* node within the Pacemaker cluster. - To configure the High-availability cluster provider, the following information is required: + To configure the high-availability cluster provider, the following information is required: - - **Name**. A name for this provider. It should be unique for this Azure Monitor for SAP solutions instance. - - **Prometheus Endpoint**. `http://<servername or ip address>:9664/metrics`. - - **SID**. For SAP systems, use the SAP SID. For other systems (for example, NFS clusters), use a three-character name for the cluster. The SID must be distinct from other clusters that are monitored. - - **Cluster name**. The cluster name used when creating the cluster. The cluster name can be found in the cluster property `cluster-name`. - - **Hostname**. The Linux hostname of the virtual machine (VM). + - **Name**: A name for this provider. It should be unique for this Azure Monitor for SAP solutions instance. + - **Prometheus endpoint**: `http://<servername or ip address>:9664/metrics`. + - **SID**: For SAP systems, use the SAP SID. For other systems (for example, NFS clusters), use a three-character name for the cluster. The SID must be distinct from other clusters that are monitored. + - **Cluster name**: The cluster name used when you're creating the cluster. You can find the cluster name in the cluster property `cluster-name`. + - **Hostname**: The Linux hostname of the virtual machine (VM). - Check [High Availability Cluster provider](provider-ha-pacemaker-cluster.md) creation for more detail steps. + For more information, see [Create a high-availability cluster provider for Azure Monitor for SAP solutions](provider-ha-pacemaker-cluster.md). ## Provider type: OS (Linux) -You can configure one or more providers of provider type OS (Linux) to enable data collection from a BareMetal or VM node. The OS (Linux) provider connects to BareMetal or VM nodes using the [Node_Exporter](https://github.com/prometheus/node_exporter) endpoint. It then pulls data from the nodes and pushes it to Log Analytics workspace in your subscription. The OS (Linux) provider collects data every 60 seconds for most of the metrics from the nodes. +You can configure one or more providers of the provider type OS (Linux) to enable data collection from a BareMetal or VM node. 
The OS (Linux) provider connects to BareMetal or VM nodes by using the [Node_Exporter](https://github.com/prometheus/node_exporter) endpoint. It then pulls data from the nodes and pushes it to the Log Analytics workspace in your subscription. The OS (Linux) provider collects data every 60 seconds for most of the metrics from the nodes. -You can get the following data with the OS (Linux) provider: +With the OS (Linux) provider, you can get the: - - CPU usage, CPU usage by process - - Disk usage, I/O read & write - - Memory distribution, memory usage, swap memory usage - - Network usage, network inbound & outbound traffic details + - CPU usage and CPU usage by process. + - Disk usage and I/O read and write. + - Memory distribution, memory usage, and swap memory usage. + - Network usage and the network inbound and outbound traffic details. To configure an OS (Linux) provider, two primary steps are involved: 1. Install [Node_Exporter](https://github.com/prometheus/node_exporter) on each BareMetal or VM node.- You have two options for installing [Node_exporter](https://github.com/prometheus/node_exporter): - - For automated installation with Ansible, use [Node_Exporter](https://github.com/prometheus/node_exporter) on each BareMetal or VM node to install the OS (Linux) Provider. + You have two options for installing [Node_Exporter](https://github.com/prometheus/node_exporter): + - For automated installation with Ansible, use [Node_Exporter](https://github.com/prometheus/node_exporter) on each BareMetal or VM node to install the OS (Linux) provider. - Do a [manual installation](https://prometheus.io/docs/guides/node-exporter/). 1. Configure an OS (Linux) provider for each BareMetal or VM node instance in your environment. To configure the OS (Linux) provider, the following information is required:- - **Name**: a name for this provider, unique to the Azure Monitor for SAP solutions instance. - - **Node Exporter endpoint**: usually `http://<servername or ip address>:9100/metrics`. + - **Name**: A name for this provider that's unique to the Azure Monitor for SAP solutions instance. + - **Node Exporter endpoint**: Usually `http://<servername or ip address>:9100/metrics`. -Port 9100 is exposed for the **Node_Exporter** endpoint. +Port 9100 is exposed for the `Node_Exporter` endpoint. -Check [Operating System provider](provider-linux.md) creation for more detail steps. +For more information, see [Configure Linux provider for Azure Monitor for SAP solutions](provider-linux.md). > [!Warning]-> Make sure **Node-Exporter** keeps running after the node reboot. +> Make sure `Node-Exporter` keeps running after the node reboot. ## Provider type: IBM Db2 -You can configure one or more IBM Db2 providers to enable data collection from IBM Db2 servers. The Db2 Server provider connects to database over given port. It then pulls data from the database and pushes it to the Log Analytics workspace in your subscription. The Db2 Server provider collects data from every 60 seconds up to every hour from the DB2 server. +You can configure one or more IBM Db2 providers to enable data collection from IBM Db2 servers. The Db2 Server provider connects to the database over a specific port. It then pulls data from the database and pushes it to the Log Analytics workspace in your subscription. The Db2 Server provider collects data every 60 seconds up to every hour from the Db2 Server. 
-You can get the following data with the IBM Db2 provider: +With the IBM Db2 provider, you can get the: -- Database availability-- Number of connections-- Logical and physical reads-- Waits and current locks-- Top 20 runtime and executions+- Database availability. +- Number of connections. +- Logical and physical reads. +- Waits and current locks. +- Top 20 runtime and executions. -Configuring IBM Db2 provider requires: -- The SAP System ID-- The Host IP address-- The Database Name-- The Port number of the DB2 Server to connect to-- The Db2 Server username and password+Configuring the IBM Db2 provider requires the: +- SAP system ID. +- Host IP address. +- Database name. +- Port number of the Db2 Server to connect to. +- Db2 Server username and password. -Check [IBM Db2 provider](provider-ibm-db2.md) creation for more detail steps. +For more information, see [Create IBM Db2 provider for Azure Monitor for SAP solutions](provider-ibm-db2.md). - + ## Next steps |
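Before registering the high-availability cluster or OS (Linux) providers described in the row above, it can help to confirm that the exporter endpoints actually respond. A minimal check from any host that can reach the nodes, assuming the default ports given in the article (9664 for `ha_cluster_exporter`, 9100 for `Node_Exporter`) and a placeholder hostname.

```bash
NODE="node1.contoso.example"   # placeholder hostname of a cluster or VM node

# ha_cluster_exporter endpoint used by the high-availability cluster provider
curl --silent --fail "http://${NODE}:9664/metrics" | head -n 5

# Node_Exporter endpoint used by the OS (Linux) provider
curl --silent --fail "http://${NODE}:9100/metrics" | head -n 5
```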
sap | Set Up Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/set-up-network.md | Title: Set up network fo |