Updates from: 06/17/2023 01:12:56
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Partner Xid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-xid.md
Get the custom policy starter packs from GitHub, then update the XML files in th
<Domain>X-ID</Domain> <DisplayName>X-ID</DisplayName> <TechnicalProfiles>
- <TechnicalProfile Id="X-ID-Oauth2">
+ <TechnicalProfile Id="X-ID-OIDC">
<DisplayName>X-ID</DisplayName> <Description>Login with your X-ID account</Description>
- <Protocol Name="OAuth2" />
+ <Protocol Name="OpenIdConnect" />
<Metadata> <Item Key="METADATA">https://oidc-uat.x-id.io/.well-known/openid-configuration</Item> <!-- Update the Client ID below to the X-ID Application ID -->
Add the new identity provider to the user journey.
3. Set the value of **TargetClaimsExchangeId** to a friendly name. 4. Add a **ClaimsExchange** element. 5. Set the **ID** to the value of the target claims exchange ID. This change links the xID button to the `X-IDExchange` action.
-6. Update the **TechnicalProfileReferenceId** value to the technical profile ID you created (`X-ID-Oauth2`).
+6. Update the **TechnicalProfileReferenceId** value to the technical profile ID you created (`X-ID-OIDC`).
7. Add an orchestration step to call the xID UserInfo endpoint (`X-ID-Userdata`) to return claims about the authenticated user. The following XML demonstrates the user journey orchestration with the xID identity provider.
The following XML demonstrates the user journey orchestration with xID identity
<OrchestrationStep Order="2" Type="ClaimsExchange"> <ClaimsExchanges>
- <ClaimsExchange Id="X-IDExchange" TechnicalProfileReferenceId="X-ID-Oauth2" />
+ <ClaimsExchange Id="X-IDExchange" TechnicalProfileReferenceId="X-ID-OIDC" />
</ClaimsExchanges> </OrchestrationStep>
active-directory Onboard Enable Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-enable-tenant.md
Previously updated : 04/24/2023 Last updated : 06/16/2023
-# Enable Permissions Management in your organization
+# Enable Microsoft Entra Permissions Management in your organization
-This article describes how to enable Permissions Management in your organization. Once you've enabled Permissions Management, you can connect it to your Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) platforms.
+This article describes how to enable Microsoft Entra Permissions Management in your organization. Once you've enabled Permissions Management, you can connect it to your Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) platforms.
> [!NOTE] > To complete this task, you must have *Microsoft Entra Permissions Management Administrator* permissions. You can't enable Permissions Management as a user from another tenant who has signed in via B2B or via Azure Lighthouse.
active-directory Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-gcp.md
There are several moving parts across GCP and Azure, which are required to be co
> 1. Return to the Permissions Management window, and in the **Permissions Management Onboarding - Azure AD OIDC App Creation**, select **Next**. ### 2. Set up a GCP OIDC project.
-1. In the **Permissions Management Onboarding - GCP OIDC Account Details & IDP Access** page, enter the **OIDC Project ID** and **OIDC Project Number** of the GCP project in which the OIDC provider and pool will be created. You can change the role name to your requirements.
+1. In the **Permissions Management Onboarding - GCP OIDC Account Details & IDP Access** page, enter the **OIDC Project Number** and **OIDC Project ID** of the GCP project in which the OIDC provider and pool will be created. You can change the role name to your requirements.
> [!NOTE] > You can find the **Project number** and **Project ID** of your GCP project on the GCP **Dashboard** page of your project in the **Project info** panel.
There are several moving parts across GCP and Azure, which are required to be co
Optionally, specify **G-Suite IDP Secret Name** and **G-Suite IDP User Email** to enable G-Suite integration.
- You can either download and run the script at this point or you can do it in the Google Cloud Shell.
-1. Select **Next**.
+1. You can either download and run the script at this point or you can run it in the Google Cloud Shell.
+
+1. Select **Next** after successfully running the setup script.
Choose from 3 options to manage GCP projects.
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/overview.md
Previously updated : 04/20/2022 Last updated : 06/16/2023
-# What's Permissions Management?
+# What's Microsoft Entra Permissions Management?
## Overview
-Permissions Management is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility into permissions assigned to all identities. For example, over-privileged workload and user identities, actions, and resources across multicloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP).
+Microsoft Entra Permissions Management is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility into permissions assigned to all identities. For example, over-privileged workload and user identities, actions, and resources across multicloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP).
Permissions Management detects, automatically right-sizes, and continuously monitors unused and excessive permissions.
Once your organization has explored and implemented the discover, remediation an
## Next steps -- For information on how to onboard Permissions Management for your organization, see [Enable Permissions Management in your organization](onboard-enable-tenant.md).
+- Deepen your learning with the [Introduction to Microsoft Entra Permissions Management](https://go.microsoft.com/fwlink/?linkid=2240016) learn module.
+- Sign up for a [45-day free trial](https://aka.ms/TryPermissionsManagement) of Permissions Management.
- For a list of frequently asked questions (FAQs) about Permissions Management, see [FAQs](faqs.md).
active-directory Concept Token Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-token-protection.md
This preview supports the following configurations:
- The following Windows client devices aren't supported: - Windows Server - Surface Hub
+ - Windows-based Microsoft Teams Rooms (MTR) systems
## Deployment
active-directory Scenario Web App Call Api Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-acquire-token.md
public ModelAndView getUserFromGraph(HttpServletRequest httpRequest, HttpServlet
// Code omitted here ```
+# [Node.js](#tab/nodejs)
+
+In the Node.js sample, the code that acquires a token is in the *acquireToken* method of the **AuthProvider** class.
++
+This access token is then used to handle requests to the `/profile` endpoint:
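As a rough orientation only (not the sample's actual code), a token-acquisition helper with MSAL Node might look like the following TypeScript sketch; the client ID, tenant, secret, and `User.Read` scope are placeholders.

```typescript
import { ConfidentialClientApplication, AccountInfo, AuthenticationResult } from "@azure/msal-node";

// Placeholder app registration values -- replace with your own.
const cca = new ConfidentialClientApplication({
  auth: {
    clientId: "ENTER_CLIENT_ID",
    authority: "https://login.microsoftonline.com/ENTER_TENANT_ID",
    clientSecret: "ENTER_CLIENT_SECRET",
  },
});

// Silently acquire an access token for a signed-in account; a /profile route
// handler would call a method like this before requesting Microsoft Graph.
async function acquireToken(account: AccountInfo): Promise<AuthenticationResult | null> {
  return cca.acquireTokenSilent({
    account,
    scopes: ["User.Read"], // placeholder scope
  });
}
```

The sample's own *AuthProvider* class adds fallback and caching logic around this call; see the linked repository for the full implementation.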
++ # [Python](#tab/python) In the Python sample, the code that calls the API is in `app.py`.
Move on to the next article in this scenario,
Move on to the next article in this scenario, [Call a web API](scenario-web-app-call-api-call-api.md?tabs=java).
+# [Node.js](#tab/nodejs)
+
+Move on to the next article in this scenario,
+[Call a web API](scenario-web-app-call-api-call-api.md?tabs=nodejs).
+ # [Python](#tab/python) Move on to the next article in this scenario,
active-directory Scenario Web App Call Api App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-app-configuration.md
Code examples in this article and the following one are extracted from the [ASP.
Code examples in this article and the following one are extracted from the [Java web application that calls Microsoft Graph](https://github.com/Azure-Samples/ms-identity-java-webapp), a web-app sample that uses MSAL for Java. The sample currently lets MSAL for Java produce the authorization-code URL and handles the navigation to the authorization endpoint for the Microsoft identity platform. It's also possible to use Spring Security to sign the user in. You might want to refer to the sample for full implementation details.
+# [Node.js](#tab/nodejs)
+
+Code examples in this article and the following one are extracted from the [Node.js & Express.js web application that calls Microsoft Graph](https://github.com/Azure-Samples/ms-identity-node), a web app sample that uses MSAL Node.
+
+The sample currently lets MSAL Node produce the authorization-code URL and handles the navigation to the authorization endpoint for the Microsoft identity platform. This is shown below:
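For illustration, producing the authorization-code URL with MSAL Node typically follows the pattern in this TypeScript sketch (not the sample's code; the route, scope, and redirect URI are placeholders):

```typescript
import express, { Request, Response } from "express";
import { ConfidentialClientApplication } from "@azure/msal-node";

// Placeholder app registration values -- replace with your own.
const cca = new ConfidentialClientApplication({
  auth: {
    clientId: "ENTER_CLIENT_ID",
    authority: "https://login.microsoftonline.com/ENTER_TENANT_ID",
    clientSecret: "ENTER_CLIENT_SECRET",
  },
});

const app = express();

// Build the authorization-code URL and redirect the browser to the Microsoft identity platform.
app.get("/auth/signin", async (req: Request, res: Response) => {
  const authCodeUrl = await cca.getAuthCodeUrl({
    scopes: ["User.Read"],                              // placeholder scope
    redirectUri: "http://localhost:3000/auth/redirect", // must match the app registration
  });
  res.redirect(authCodeUrl);
});

app.listen(3000);
```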
++ # [Python](#tab/python) Code snippets in this article and the following are extracted from the [Python web application calling Microsoft graph](https://github.com/Azure-Samples/ms-identity-python-webapp) sample using the [identity package](https://pypi.org/project/identity/) (a wrapper around MSAL Python).
Microsoft.Identity.Web simplifies your code by setting the correct OpenID Connec
*Microsoft.Identity.Web.OWIN* simplifies your code by setting the correct OpenID Connect settings, subscribing to the code received event, and redeeming the code. No extra code is required to redeem the authorization code. See [Microsoft.Identity.Web source code](https://github.com/AzureAD/microsoft-identity-web/blob/9fdcf15c66819b31b1049955eed5d3e5391656f5/src/Microsoft.Identity.Web.OWIN/AppBuilderExtension.cs#L95) for details on how this works.
+# [Node.js](#tab/nodejs)
+
+The *handleRedirect* method in the **AuthProvider** class processes the authorization code received from Azure AD. This is shown below:
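A minimal sketch of that redirect handling, assuming an Express route at a placeholder redirect URI (illustrative TypeScript, not the sample's *handleRedirect* implementation):

```typescript
import express, { Request, Response } from "express";
import { ConfidentialClientApplication } from "@azure/msal-node";

// Placeholder app registration values -- replace with your own.
const cca = new ConfidentialClientApplication({
  auth: {
    clientId: "ENTER_CLIENT_ID",
    authority: "https://login.microsoftonline.com/ENTER_TENANT_ID",
    clientSecret: "ENTER_CLIENT_SECRET",
  },
});

const app = express();
app.use(express.urlencoded({ extended: false }));

// Redeem the authorization code that the identity platform returns to the redirect URI.
app.post("/auth/redirect", async (req: Request, res: Response) => {
  const tokenResponse = await cca.acquireTokenByCode({
    code: req.body.code,
    scopes: ["User.Read"],                              // placeholder scope
    redirectUri: "http://localhost:3000/auth/redirect", // must match the app registration
  });
  // The account and tokens in tokenResponse would typically be stored in the user's session.
  res.redirect("/");
});

app.listen(3000);
```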
++ # [Java](#tab/java) See [Web app that signs in users: Code configuration](scenario-web-app-sign-user-app-configuration.md?tabs=java#initialization-code) to understand how the Java sample gets the authorization code. After the app receives the code, the [AuthFilter.java#L51-L56](https://github.com/Azure-Samples/ms-identity-java-webapp/blob/d55ee4ac0ce2c43378f2c99fd6e6856d41bdf144/src/main/java/com/microsoft/azure/msalwebsample/AuthFilter.java#L51-L56):
IAuthenticationResult getAuthResultBySilentFlow(HttpServletRequest httpRequest,
The detail of the `SessionManagementHelper` class is provided in the [MSAL sample for Java](https://github.com/Azure-Samples/ms-identity-java-webapp/blob/d55ee4ac0ce2c43378f2c99fd6e6856d41bdf144/src/main/java/com/microsoft/azure/msalwebsample/SessionManagementHelper.java).
+# [Node.js](#tab/nodejs)
+
+In the Node.js sample, the application session is used to store the token cache. Using MSAL Node cache methods, the token cache in session is read before a token request is made, and then updated once the token request is successfully completed. This is shown below:
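The read-before/update-after pattern looks roughly like this TypeScript sketch (illustrative only; the `tokenCache` session property and the scope are assumptions, not the sample's exact code):

```typescript
import { ConfidentialClientApplication, AccountInfo, AuthenticationResult } from "@azure/msal-node";

// Placeholder app registration values -- replace with your own.
const cca = new ConfidentialClientApplication({
  auth: {
    clientId: "ENTER_CLIENT_ID",
    authority: "https://login.microsoftonline.com/ENTER_TENANT_ID",
    clientSecret: "ENTER_CLIENT_SECRET",
  },
});

// Assumed session shape for this sketch: the serialized cache lives on the session object.
type SessionWithCache = { tokenCache?: string };

async function acquireTokenWithSessionCache(
  session: SessionWithCache,
  account: AccountInfo,
): Promise<AuthenticationResult | null> {
  // Read: hydrate the in-memory token cache from the session before the token request.
  if (session.tokenCache) {
    cca.getTokenCache().deserialize(session.tokenCache);
  }

  const result = await cca.acquireTokenSilent({ account, scopes: ["User.Read"] });

  // Update: write the cache back to the session once the request completes successfully.
  session.tokenCache = cca.getTokenCache().serialize();
  return result;
}
```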
++ # [Python](#tab/python) In the Python sample, the identity package takes care of the token cache, using the global `session` object for storage.
Move on to the next article in this scenario,
Move on to the next article in this scenario, [Remove accounts from the cache on global sign out](scenario-web-app-call-api-sign-in.md?tabs=java).
+# [Node.js](#tab/nodejs)
+
+Move on to the next article in this scenario,
+[Remove accounts from the cache on global sign out](scenario-web-app-call-api-sign-in.md?tabs=nodejs).
+ # [Python](#tab/python) Move on to the next article in this scenario,
active-directory Scenario Web App Call Api Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-call-api.md
private String getUserInfoFromGraph(String accessToken) throws Exception {
} ```
+# [Node.js](#tab/nodejs)
+
+After successfully retrieving a token, the code uses the **axios** package to query the API endpoint and retrieve a JSON result.
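For example, a call with **axios** might look like the following TypeScript sketch (the Microsoft Graph `/me` endpoint is used here as an assumed target):

```typescript
import axios from "axios";

// Assumed downstream endpoint for this sketch; the sample calls Microsoft Graph.
const GRAPH_ME_ENDPOINT = "https://graph.microsoft.com/v1.0/me";

// Send the access token as a bearer token and return the JSON response body.
async function callApi(accessToken: string): Promise<unknown> {
  const response = await axios.get(GRAPH_ME_ENDPOINT, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  return response.data;
}
```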
++ # [Python](#tab/python) After successfully retrieving a token, the code uses the requests package to query the API endpoint and retrieve a JSON result.
active-directory Scenario Web App Call Api Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-sign-in.md
The ASP.NET sample doesn't remove accounts from the cache on global sign-out.
The Java sample doesn't remove accounts from the cache on global sign-out.
+# [Node.js](#tab/nodejs)
+
+The Node sample doesn't remove accounts from the cache on global sign-out.
+ # [Python](#tab/python) The Python sample doesn't remove accounts from the cache on global sign-out.
Move on to the next article in this scenario,
Move on to the next article in this scenario, [Acquire a token for the web app](./scenario-web-app-call-api-acquire-token.md?tabs=java).
+# [Node.js](#tab/nodejs)
+
+Move on to the next article in this scenario,
+[Acquire a token for the web app](./scenario-web-app-call-api-acquire-token.md?tabs=nodejs).
+ # [Python](#tab/python) Move on to the next article in this scenario,
active-directory Scenario Web App Sign User App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-app-configuration.md
In the Azure portal, the reply URIs that you register on the **Authentication**
# [Node.js](#tab/nodejs)
-Here, the configuration parameters reside in *.env* as environment variables:
+Here, the configuration parameters reside in *.env.dev* as environment variables:
These parameters are used to create a configuration object in the *authConfig.js* file, which will eventually be used to initialize MSAL Node:
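A hedged sketch of such a configuration object in TypeScript (the environment variable names here are illustrative; the sample defines its own set in the env file):

```typescript
import { Configuration, ConfidentialClientApplication } from "@azure/msal-node";

// Illustrative environment variable names -- the sample's env file defines its own.
const msalConfig: Configuration = {
  auth: {
    clientId: process.env.CLIENT_ID ?? "",
    authority: (process.env.CLOUD_INSTANCE ?? "https://login.microsoftonline.com/") + (process.env.TENANT_ID ?? ""),
    clientSecret: process.env.CLIENT_SECRET,
  },
};

// The configuration object is then used to initialize MSAL Node.
export const msalInstance = new ConfidentialClientApplication(msalConfig);
```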
active-directory Scenario Web App Sign User Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-sign-in.md
public class AuthPageController {
When the user selects the **Sign in** link, which triggers the `/auth/signin` route, the sign-in controller takes over to authenticate the user with Microsoft identity platform. # [Python](#tab/python)
In Java, sign-out is handled by calling the Microsoft identity platform `logout`
When the user selects the **Sign out** button, the app triggers the `/signout` route, which destroys the session and redirects the browser to Microsoft identity platform sign-out endpoint. # [Python](#tab/python)
active-directory Tutorial V2 Nodejs Webapp Msal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-nodejs-webapp-msal.md
The web app sample in this tutorial uses the [express-session](https://www.npmjs
## Add app registration details
-1. Create an *.env* file in the root of your project folder. Then add the following code:
+1. Create an *.env.dev* file in the root of your project folder. Then add the following code:
Fill in these details with the values you obtain from Azure app registration portal:
Fill in these details with the values you obtain from Azure app registration por
## Add code for user sign-in and token acquisition
-1. Create a new file named *auth.js* under the *routes* folder and add the following code there:
+1. Create a new folder named *auth*, and add a new file named *AuthProvider.js* under it. This will contain the **AuthProvider** class, which encapsulates the necessary authentication logic using MSAL Node. Add the following code there:
++
+1. Next, create a new file named *auth.js* under the *routes* folder and add the following code there:
:::code language="js" source="~/ms-identity-node/App/routes/auth.js":::
-2. Next, update the *index.js* route by replacing the existing code with the following code snippet:
+2. Update the *index.js* route by replacing the existing code with the following code snippet:
:::code language="js" source="~/ms-identity-node/App/routes/index.js":::
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
To connect to the remote computer:
> [!IMPORTANT] > Remote connection to VMs that are joined to Azure AD is allowed only from Windows 10 or later PCs that are either Azure AD registered (minimum required build is 20H1) or Azure AD joined or hybrid Azure AD joined to the *same* directory as the VM. Additionally, to RDP by using Azure AD credentials, users must belong to one of the two Azure roles, Virtual Machine Administrator Login or Virtual Machine User Login. >
-> If you're using an Azure AD-registered Windows 10 or later PC, you must enter credentials in the `AzureAD\UPN` format (for example, `AzureAD\john@contoso.com`). At this time, you can use Azure Bastion to log in with Azure AD authentication [via the Azure CLI and the native RDP client mstsc](../../bastion/connect-native-client-windows.md).
+> If you're using an Azure AD-registered Windows 10 or later PC, you must enter credentials in the `AzureAD\UPN` format (for example, `AzureAD\john@contoso.com`). At this time, you can use Azure Bastion to log in with Azure AD authentication [via the Azure CLI and the native RDP client mstsc](../../bastion/native-client.md).
To log in to your Windows Server 2019 virtual machine by using Azure AD:
active-directory Monitor Sign In Health For Resilience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/monitor-sign-in-health-for-resilience.md
Previously updated : 08/20/2022 Last updated : 06/16/2023
# Monitoring application sign-in health for resilience
-To increase infrastructure resilience, set up monitoring of application sign-in health for your critical applications so that you receive an alert if an impacting incident occurs. To assist you in this effort, you can configure alerts based on the sign-in health workbook.
+To increase infrastructure resilience, set up monitoring of application sign-in health for your critical applications. You can receive an alert when an impacting incident occurs. This article walks through setting up the App sign-in health workbook to monitor for disruptions to your users' sign-ins.
-This workbook enables administrators to monitor authentication requests for applications in your tenant. It provides these key capabilities:
+You can configure alerts based on the App sign-in health workbook. This workbook enables administrators to monitor authentication requests for applications in their tenants. It provides these key capabilities:
-* Configure the workbook to monitor all or individual apps with near real-time data.
-
-* Configure alerts to notify you when authentication patterns change so that you can investigate and take action.
-
-* Compare trends over a period, for example week over week, which is the workbook's default setting.
+- Configure the workbook to monitor all or individual apps with near real-time data.
+- Configure alerts for authentication pattern changes so that you can investigate and respond.
+- Compare trends over a period of time. Week over week is the workbook's default setting.
> [!NOTE]
-> To see all available workbooks, and the prerequisites for using them, please see [How to use Azure Monitor workbooks for reports](../reports-monitoring/howto-use-azure-monitor-workbooks.md).
+> See all available workbooks and the prerequisites for using them in [How to use Azure Monitor workbooks for reports](../reports-monitoring/howto-use-azure-monitor-workbooks.md).
During an impacting event, two things may happen:
-* The number of sign-ins for an application may drop precipitously because users can't sign in.
-
-* The number of sign-in failures can increase.
-
-This article walks through setting up the sign-in health workbook to monitor for disruptions to your users' sign-ins.
+- The number of sign-ins for an application may abruptly drop when users can't sign in.
+- The number of sign-in failures may increase.
## Prerequisites
-* An Azure AD tenant.
-
-* A user with global administrator or security administrator role for the Azure AD tenant.
-
-* A Log Analytics workspace in your Azure subscription to send logs to Azure Monitor logs.
-
- * Learn how to [create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md)
-
-* Azure AD logs integrated with Azure Monitor logs
-
- * Learn how to [Integrate Azure AD Sign- in Logs with Azure Monitor Stream.](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)
+- An Azure AD tenant.
+- A user with global administrator or security administrator role for the Azure AD tenant.
+- A Log Analytics workspace in your Azure subscription to send logs to Azure Monitor logs. Learn how to [create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md).
+- Azure AD logs integrated with Azure Monitor logs. Learn how to [Integrate Azure AD Sign- in Logs with Azure Monitor Stream.](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)
-## Configure the App sign in health workbook
+## Configure the App sign-in health workbook
-To access workbooks, open the **Azure portal**, select **Azure Active Directory**, and then select **Workbooks**.
+To access workbooks in the **Azure portal**, select **Azure Active Directory**, and then select **Workbooks**. The following screenshot shows the Workbooks Gallery in the Azure portal.
-You'll see workbooks under Usage, Conditional Access, and Troubleshoot. The App sign in health workbook appears in the usage section.
-Once you use a workbook, it may appear in the Recently modified workbooks section.
+Workbooks appear under **Usage**, **Conditional Access**, and **Troubleshoot**. The App sign-in health workbook appears in the **Health** section. After you use a workbook, it may appear in the **Recently modified workbooks** section.
-![Screenshot showing the workbooks gallery in the Azure portal.](./media/monitor-sign-in-health-for-resilience/sign-in-health-workbook.png)
+You can use the App sign-in health workbook to visualize what is happening with your sign-ins. As shown in the following screenshot, the workbook presents two graphs.
-The App sign in health workbook enables you to visualize what is happening with your sign-ins.
+In the preceding screenshot, there are two graphs:
-By default the workbook presents two graphs. These graphs compare what is happening to your app(s) now, versus the same period a week ago. The blue lines are current, and the orange lines are the previous week.
-
-![Screenshot showing sign in health graphs.](./media/monitor-sign-in-health-for-resilience/sign-in-health-graphs.png)
-
-**The first graph is Hourly usage (number of successful users)**. Comparing your current number of successful users to a typical usage period helps you to spot a drop in usage that may require investigation. A drop in successful usage rate can help detect performance and utilization issues that the failure rate can't. For example if users can't reach your application to attempt to sign in, there would be no failures, only a drop in usage. A sample query for this data can be found in the following section.
-
-**The second graph is hourly failure rate**. A spike in failure rate may indicate an issue with your authentication mechanisms. Failure rate can only be measured if users can attempt to authenticate. If users Can't gain access to make the attempt, failures Won't show.
-
-You can configure an alert that notifies a specific group when the usage or failure rate exceeds a specified threshold. A sample query for this data can be found in the following section.
+- **Hourly usage (number of successful users)**. Comparing your current number of successful users to a typical usage period helps you to spot a drop in usage that may require investigation. A drop in successful usage rate can help detect performance and utilization issues that the failure rate can't detect. For example, when users can't reach your application to attempt to sign in, there's a drop in usage but no failures. See the sample query for this data in the next section of this article.
+- **Hourly failure rate**. A spike in failure rate may indicate an issue with your authentication mechanisms. Failure rate measures only appear when users can attempt to authenticate. When users can't gain access to make the attempt, there are no failures.
## Configure the query and alerts
-You create alert rules in Azure Monitor and can automatically run saved queries or custom log searches at regular intervals.
-
-Use the following instructions to create email alerts based on the queries reflected in the graphs. Sample scripts below will send an email notification when
-
-* the successful usage drops by 90% from the same hour two days ago, as in the hourly usage graph in the previous section.
-
-* the failure rate increases by 90% from the same hour two days ago, as in the hourly failure rate graph in the previous section.
-
- To configure the underlying query and set alerts, complete the following steps. You'll use the Sample Query as the basis for your configuration. An explanation of the query structure appears at the end of this section.
+You create alert rules in Azure Monitor and can automatically run saved queries or custom log searches at regular intervals. You can configure an alert that notifies a specific group when the usage or failure rate exceeds a specified threshold.
-For more information on how to create, view, and manage log alerts using Azure Monitor see [Manage log alerts](../../azure-monitor/alerts/alerts-log.md).
+Use the following instructions to create email alerts based on the queries reflected in the graphs. The sample scripts send an email notification when:
-1. In the workbook, select **Edit**, then select the **query icon** just above the right-hand side of the graph.
+- The successful usage drops by 90% from the same hour two days ago, as shown in the preceding hourly usage graph example.
+- The failure rate increases by 90% from the same hour two days ago, as shown in the preceding hourly failure rate graph example.
- [![Screenshot showing edit workbook.](./media/monitor-sign-in-health-for-resilience/edit-workbook.png)](./media/monitor-sign-in-health-for-resilience/edit-workbook.png)
+To configure the underlying query and set alerts, complete the following steps using the sample query as the basis for your configuration. The query structure description appears at the end of this section. Learn how to create, view, and manage log alerts using Azure Monitor in [Manage log alerts](../../azure-monitor/alerts/alerts-log.md).
- The query log opens.
+1. In the workbook, select **Edit** as shown in the following screenshot. Select the **query icon** in the upper right corner of the graph.
- [![Screenshot showing the query log.](./media/monitor-sign-in-health-for-resilience/query-log.png)](./media/monitor-sign-in-health-for-resilience/query-log.png)
+ :::image type="content" source="./media/monitor-sign-in-health-for-resilience/edit-workbook.png" alt-text="Screenshot showing edit workbook.":::
-2. Copy one of the sample scripts for a new Kusto query.
- * [Kusto query for increase in failure rate](#kusto-query-for-increase-in-failure-rate)
- * [Kusto query for drop in usage](#kusto-query-for-drop-in-usage)
+2. View the query log as shown in the following screenshot.
-3. Paste the query in the window and select **Run**. Ensure you see the Completed message shown in the image below, and results below that message.
+ :::image type="content" source="./media/monitor-sign-in-health-for-resilience/query-log.png" alt-text="Screenshot showing the query log.":::
- [![Screenshot showing the run query results.](./media/monitor-sign-in-health-for-resilience/run-query.png)](./media/monitor-sign-in-health-for-resilience/run-query.png)
+3. Copy one of the following sample scripts for a new Kusto query.
-4. Highlight the query, and select + **New alert rule**.
-
- [![Screenshot showing the new alert rule screen.](./media/monitor-sign-in-health-for-resilience/new-alert-rule.png)](./media/monitor-sign-in-health-for-resilience/new-alert-rule.png)
+ - [Kusto query for increase in failure rate](#kusto-query-for-increase-in-failure-rate)
+ - [Kusto query for drop in usage](#kusto-query-for-drop-in-usage)
+4. Paste the query in the window. Select **Run**. Look for the **Completed** message and the query results as shown in the following screenshot.
-5. Configure alert conditions.
-In the Condition section, select the link **Whenever the average custom log search is greater than logic defined count**. In the configure signal logic pane, scroll to Alert logic
+ :::image type="content" source="./media/monitor-sign-in-health-for-resilience/run-query.png" alt-text="Screenshot showing the run query results.":::
- [![Screenshot showing configure alerts screen.](./media/monitor-sign-in-health-for-resilience/configure-alerts.png)](./media/monitor-sign-in-health-for-resilience/configure-alerts.png)
+5. Highlight the query. Select **+ New alert rule**.
- * **Threshold value**: 0. This value will alert on any results.
-
- * **Evaluation period (in minutes)**: 2880. This value looks at an hour of time
-
- * **Frequency (in minutes)**: 60. This value sets the evaluation period to once per hour for the previous hour.
-
- * Select **Done**.
-
-6. In the **Actions** section, configure these settings:
+ :::image type="content" source="./media/monitor-sign-in-health-for-resilience/new-alert-rule.png" alt-text="Screenshot showing the new alert rule screen.":::
- [![Screenshot showing the Create alert rule page.](./media/monitor-sign-in-health-for-resilience/create-alert-rule.png)](./media/monitor-sign-in-health-for-resilience/create-alert-rule.png)
+6. Configure alert conditions. As shown in the following example screenshot, in the **Condition** section, under **Measurement**, select **Table rows** for **Measure**. Select **Count** for **Aggregation type**. Select **2 days** for **Aggregation granularity**.
- * Under **Actions**, choose **Select action group**, and add the group you want to be notified of alerts.
+ :::image type="content" source="./media/monitor-sign-in-health-for-resilience/configure-alerts.png" alt-text="Screenshot showing configure alerts screen.":::
+
+ - **Table rows**. You can use the number of rows returned to work with events such as Windows event logs, Syslog, and application exceptions.
+ - **Aggregation type**. Data points applied with Count.
+ - **Aggregation granularity**. This value defines the period that works with **Frequency of evaluation**.
- * Under **Customize actions** select **Email alerts**.
+7. In **Alert Logic**, configure the parameters as shown in the example screenshot.
- * Add a **subject line**.
+ :::image type="content" source="./media/monitor-sign-in-health-for-resilience/alert-logic.png" alt-text="Screenshot showing alert logic screen.":::
+
+ - **Threshold value**: 0. This value alerts on any results.
+ - **Frequency of evaluation**: 1 hour. This value sets the evaluation period to once per hour for the previous hour.
-7. Under **Alert rule details**, configure these settings:
+8. In the **Actions** section, configure settings as shown in the example screenshot.
- * Add a descriptive name and a description.
+ :::image type="content" source="./media/monitor-sign-in-health-for-resilience/create-alert-rule.png" alt-text="Screenshot showing the Create an alert rule screen.":::
+
+ - Select **Select action group** and add the group for which you want alert notifications.
+ - Under **Customize actions**, select **Email alerts**.
+ - Add a **subject line**.
- * Select the **resource group** to which to add the alert.
+9. In the **Details** section, configure settings as shown in the example screenshot.
- * Select the default **severity** of the alert.
+ :::image type="content" source="./media/monitor-sign-in-health-for-resilience/details-section.png" alt-text="Screenshot showing the Details section.":::
+
+ - Add a **Subscription** name and a description.
+ - Select the **Resource group** to which you want to add the alert.
+ - Select the default **Severity**.
+ - Select **Enable upon creation** if you want it to immediately go live. Otherwise, select **Mute actions**.
- * Select **Enable alert rule upon creation** if you want it live immediately, else select **Suppress alerts**.
+10. In the **Review + create** section, configure settings as shown in the example screenshot.
-8. Select **Create alert rule**.
+ :::image type="content" source="./media/monitor-sign-in-health-for-resilience/review-create.png" alt-text="Screenshot showing the Review + create section.":::
-9. Select **Save**, enter a name for the query, **Save as a Query with a category of Alert**. Then select **Save** again.
+11. Select **Save**. Enter a name for the query. For **Save as**, select **Query**. For **Category**, select **Alert**. Again, select **Save**.
- [![Screenshot showing the save query button.](./media/monitor-sign-in-health-for-resilience/save-query.png)](./media/monitor-sign-in-health-for-resilience/save-query.png)
+ :::image type="content" source="./media/monitor-sign-in-health-for-resilience/save-query.png" alt-text="Screenshot showing the save query button.":::
### Refine your queries and alerts
-Modify your queries and alerts for maximum effectiveness.
+To modify your queries and alerts for maximum effectiveness:
-* Be sure to test your alerts.
-
-* Modify alert sensitivity and frequency so that you get important notifications. Admins can become desensitized to alerts if they get too many and miss something important.
-
-* Ensure the email from which alerts come in your administrator's email clients is added to allowed senders list. Otherwise you may miss notifications due to a spam filter on your email client.
-
-* Alerts query in Azure Monitor can only include results from past 48 hours. [This is a current limitation by design](https://github.com/MicrosoftDocs/azure-docs/issues/22637).
+- Always test alerts.
+- Modify alert sensitivity and frequency to receive important notifications. Admins can become desensitized to alerts and miss something important if they get too many.
+- In administrator's email clients, add the email from which alerts come to the allowed senders list. This approach prevents missed notifications due to a spam filter on their email clients.
+- [By design](https://github.com/MicrosoftDocs/azure-docs/issues/22637), alert queries in Azure Monitor can only include results from the past 48 hours.
## Sample scripts ### Kusto query for increase in failure rate
- The ratio at the bottom can be adjusted as necessary and represents the percent change in traffic in the last hour as compared to the same time yesterday. 0.5 means that there is a 50% difference in the traffic.
+In the following query, we detect increasing failure rates. As necessary, you can adjust the ratio at the bottom. It represents the percent change in traffic in the last hour as compared to yesterday's traffic at the same time. A 0.5 result indicates a 50% difference in the traffic.
```kusto- let today = SigninLogs | where TimeGenerated > ago(1h) // Query failure rate in the last hour | project TimeGenerated, UserPrincipalName, AppDisplayName, status = case(Status.errorCode == "0", "success", "failure")
let today = SigninLogs
| sort by TimeGenerated desc | serialize rowNumber = row_number(); let yesterday = SigninLogs
-| where TimeGenerated between((ago(1h) - totimespan(1d))..(now() - totimespan(1d))) // Query failure rate at the same time yesterday
+| where TimeGenerated between((ago(1h) - totimespan(1d))..(now() - totimespan(1d))) // Query failure rate at the same time yesterday
| project TimeGenerated, UserPrincipalName, AppDisplayName, status = case(Status.errorCode == "0", "success", "failure") // Optionally filter by a specific application //| where AppDisplayName == **APP NAME**
today
| where day != time(6.00:00:00) // exclude Sat | where day != time(0.00:00:00) // exclude Sun | where day != time(1.00:00:00) // exclude Mon
-| where abs(failureRate - failureRateYesterday) > 0.5
-
+| where abs(failureRate - failureRateYesterday) > 0.5
```- ### Kusto query for drop in usage
-In the following query, we are comparing traffic in the last hour to the same time yesterday.
-We are excluding Saturday, Sunday, and Monday because it's expected on those days that there would be large variability in the traffic at the same time the previous day.
+In the following query, we compare traffic in the last hour to yesterday's traffic at the same time. We exclude Saturday, Sunday, and Monday because we expect large variability in the previous day's traffic at the same time.
-The ratio at the bottom can be adjusted as necessary and represents the percent change in traffic in the last hour as compared to the same time yesterday. 0.5 means that there is a 50% difference in the traffic.
+As necessary, you can adjust the ratio at the bottom. It represents the percent change in traffic in the last hour as compared to yesterday's traffic at the same time. A 0.5 result indicates a 50% difference in the traffic. Adjust these values to fit your business operation model.
-*You should adjust these values to fit your business operation model*.
-
-```Kusto
- let today = SigninLogs // Query traffic in the last hour
+```kusto
+let today = SigninLogs // Query traffic in the last hour
| where TimeGenerated > ago(1h) | project TimeGenerated, AppDisplayName, UserPrincipalName // Optionally filter by AppDisplayName to scope query to a single application
The ratio at the bottom can be adjusted as necessary and represents the percent
| sort by TimeGenerated desc | serialize rn = row_number(); let yesterday = SigninLogs // Query traffic at the same hour yesterday
-| where TimeGenerated between((ago(1h) - totimespan(1d))..(now() - totimespan(1d))) // Count distinct users in the same hour yesterday
+| where TimeGenerated between((ago(1h) - totimespan(1d))..(now() - totimespan(1d))) // Count distinct users in the same hour yesterday
| project TimeGenerated, AppDisplayName, UserPrincipalName // Optionally filter by AppDisplayName to scope query to a single application //| where AppDisplayName contains "Office 365 Exchange Online"
yesterday
) on rn // Calculate the difference in number of users in the last hour compared to the same time yesterday
-| project TimeGenerated, users, usersYesterday, difference = abs(users - usersYesterday), max = max_of(users, usersYesterday)
+| project TimeGenerated, users, usersYesterday, difference = abs(users - usersYesterday), max = max_of(users, usersYesterday)
| extend ratio = (difference * 1.0) / max // Ratio is the percent difference in traffic in the last hour as compared to the same time yesterday // Day variable is the number of days since the previous Sunday. Optionally ignore results on Sat, Sun, and Mon because large variability in traffic is expected. | extend day = dayofweek(now())
on rn
| where day != time(0.00:00:00) // exclude Sun | where day != time(1.00:00:00) // exclude Mon | where ratio > 0.7 // Threshold percent difference in sign-in traffic as compared to same hour yesterday- ``` ## Create processes to manage alerts
-Once you have set up the query and alerts, create business processes to manage the alerts.
-
-* Who will monitor the workbook and when?
-
-* When an alert is generated, who will investigate?
-
-* What are the communication needs? Who will create the communications and who will receive them?
+After you set up queries and alerts, create business processes to manage the alerts.
-* If an outage occurs, what business processes need to be triggered?
+- Who monitors the workbook and when?
+- When alerts occur, who investigates them?
+- What are the communication needs? Who creates the communications and who receives them?
+- When an outage occurs, what business processes apply?
## Next steps
active-directory Entitlement Management External Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-external-users.md
To ensure people outside of your organization can request access packages and ge
> [!NOTE] > If you create a connected organization for an Azure AD tenant from a different Microsoft cloud, you also need to configure cross-tenant access settings appropriately. For more information on how to configure these settings, see [Configure cross-tenant access settings](../external-identities/cross-cloud-settings.md).
-### Review your Conditional Access policies (Preview)
+### Review your Conditional Access policies
- Make sure to exclude the Entitlement Management app from any Conditional Access policies that impact guest users. Otherwise, a conditional access policy could block them from accessing MyAccess or being able to sign in to your directory. For example, guests likely don't have a registered device, aren't in a known location, and don't want to re-register for multi-factor authentication (MFA), so adding these requirements in a Conditional Access policy will block guests from using entitlement management. For more information, see [What are conditions in Azure Active Directory Conditional Access?](../conditional-access/concept-conditional-access-conditions.md). -- A common policy for Entitlement Management customers is to block all apps from guests except Entitlement Management for guests. This policy allows guests to enter MyAccess and request an access package. This package should contain a group (it is called Guests from MyAccess in the example below), which should be excluded from the block all apps policy. Once the package is approved, the guest will be in the directory. Given that the end user has the access package assignment and is part of the group, the end user will be able to access all other apps. Other common policies include excluding Entitlement Management app from MFA and compliant device.
+- A common policy for Entitlement Management customers is to block all apps from guests except Entitlement Management for guests. This policy allows guests to enter My Access and request an access package. This package should contain a group (it is called Guests from My Access in the example below), which should be excluded from the block all apps policy. Once the package is approved, the guest will be in the directory. Given that the end user has the access package assignment and is part of the group, the end user will be able to access all other apps. Other common policies include excluding Entitlement Management app from MFA and compliant device.
:::image type="content" source="media/entitlement-management-external-users/exclude-app-guests.png" alt-text="Screenshot of exclude app options.":::
active-directory Dagster Cloud Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/dagster-cloud-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Dagster Cloud for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Dagster Cloud.
++
+writer: twimmers
+
+ms.assetid: bb2db717-b16a-45f9-a76d-502bfc077e95
++++ Last updated : 06/16/2023+++
+# Tutorial: Configure Dagster Cloud for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Dagster Cloud and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Dagster Cloud](https://dagster.io/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in Dagster Cloud.
+> * Remove users in Dagster Cloud when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Dagster Cloud.
+> * Provision groups and group memberships in Dagster Cloud.
+> * [Single sign-on](dagster-cloud-tutorial.md) to Dagster Cloud (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in Dagster Cloud with Admin permissions.
++
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Dagster Cloud](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Dagster Cloud to support provisioning with Azure AD
+Contact Dagster Cloud support to configure Dagster Cloud to support provisioning with Azure AD.
+
+## Step 3. Add Dagster Cloud from the Azure AD application gallery
+
+Add Dagster Cloud from the Azure AD application gallery to start managing provisioning to Dagster Cloud. If you have previously set up Dagster Cloud for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to Dagster Cloud
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Dagster Cloud based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Dagster Cloud in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Dagster Cloud**.
+
+ ![Screenshot of the Dagster Cloud link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Dagster Cloud Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Dagster Cloud.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Dagster Cloud**.
+
+1. Review the user attributes that are synchronized from Azure AD to Dagster Cloud in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Dagster Cloud for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Dagster Cloud API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Dagster Cloud|
+ |---|---|---|---|
+ |userName|String|&check;|&check;
+ |active|Boolean||
+ |displayName|String||
+ |emails[type eq "work"].value|String||
+ |name.givenName|String||
+ |name.familyName|String||
+ |externalId|String||
+
+1. If you'd like to synchronize Azure AD groups to Dagster Cloud then under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Dagster Cloud**.
+
+1. Review the group attributes that are synchronized from Azure AD to Dagster Cloud in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Dagster Cloud for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Dagster Cloud|
+ |---|---|---|---|
+ |displayName|String|&check;|&check;
+ |externalId|String||
+ |members|Reference||
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Dagster Cloud, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Dagster Cloud by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Vault Platform Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/vault-platform-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
|emails[type eq "work"].value|String||&check; |name.givenName|String||&check; |name.familyName|String||&check;
+ |addresses[type eq "work"].locality|String||&check;
|addresses[type eq "work"].country|String||&check; |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String||&check; |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String||&check;
active-directory Wats Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/wats-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
1. Determine what data to [map between Azure AD and WATS](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure WATS to support provisioning with Azure AD
-Contact WATS support to configure WATS to support provisioning with Azure AD.
+Please refer to the [WATS Provisioning](https://support.virinco.com/hc/en-us/articles/7978299009948-WATS-Provisioning-SCIM-) article to set up any necessary requirements for provisioning through Azure AD.
-## Step 3. Add WATS from the Azure AD application gallery
-
-Add WATS from the Azure AD application gallery to start managing provisioning to WATS. If you have previously setup WATS for SSO you can use the same application. However it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
-
-## Step 4. Define who will be in scope for provisioning
+## Step 3. Define who will be in scope for provisioning
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
The Azure AD provisioning service allows you to scope who will be provisioned ba
* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
-## Step 5. Configure automatic user provisioning to WATS
+## Step 4. Configure automatic user provisioning to WATS
This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in WATS based on user assignments in Azure AD.
This section guides you through the steps to configure the Azure AD provisioning
This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
-## Step 6. Monitor your deployment
+## Step 5. Monitor your deployment
Once you've configured provisioning, use the following resources to monitor your deployment: * Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
active-directory Admin Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/admin-api.md
example:
| Property | Type | Description | | -- | -- | -- |
-|`uri`| string (uri) | uri of the logo (optional if image is specified) |
+|`uri`| string (uri) | uri of the logo |
|`description` | string | the description of the logo |
-|`image` | string | the base-64 encoded image (optional if uri is specified) |
#### displayConsent type
active-directory Issuance Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/issuance-request-api.md
The payload contains the following properties:
| `registration` | [RequestRegistration](#requestregistration-type)| Provides information about the issuer that can be displayed in the authenticator app. | | `type` | string | The verifiable credential type. Should match the type as defined in the verifiable credential manifest. For example: `VerifiedCredentialExpert`. For more information, see [Create the verified credential expert card in Azure](verifiable-credentials-configure-issuer.md). | | `manifest` | string| The URL of the verifiable credential manifest document. For more information, see [Gather credentials and environment details to set up your sample application](verifiable-credentials-configure-issuer.md).|
-| `claims` | string| Optional. Used for the `ID token hint` flow to include a collection of assertions made about the subject in the verifiable credential. For PIN code flow, it's important that you provide the user's first name and last name. For more information, see [Verifiable credential names](verifiable-credentials-configure-issuer.md#verifiable-credential-names). |
+| `claims` | string| Optional. Can only be used for the [ID token hint](rules-and-display-definitions-model.md#idtokenhintattestation-type) attestation flow to include a collection of assertions made about the subject in the verifiable credential. |
| `pin` | [PIN](#pin-type)| Optional. PIN code can only be used with the [ID token hint](rules-and-display-definitions-model.md#idtokenhintattestation-type) attestation flow. A PIN number to provide extra security during issuance. You generate a PIN code, and present it to the user in your app. The user must provide the PIN code that you generated. | There are currently four claims attestation types that you can send in the payload. Microsoft Entra Verified ID uses four ways to insert claims into a verifiable credential and attest to that information with the issuer's DID. The following are the four types:
active-directory Rules And Display Definitions Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/rules-and-display-definitions-model.md
When you want the user to enter information themselves. This type is also called
| Property | Type | Description | | -- | -- | -- |
-|`uri`| string (url) | url of the logo (optional if image is specified) |
+|`uri`| string (url) | url of the logo. |
|`description` | string | the description of the logo |
-|`image` | string | the base-64 encoded image (optional if url is specified) |
### displayConsent type
active-directory Verifiable Credentials Configure Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant.md
After you create your key vault, Verifiable Credentials generates a set of keys
1. In **Add access policies**, under **USER**, select the account you use to follow this tutorial.
-1. For **Key permissions**, verify that the following permissions are selected: **Create**, **Delete**, and **Sign**. By default, **Create** and **Delete** are already enabled. **Sign** should be the only key permission you need to update.
+1. For **Key permissions**, verify that the following permissions are selected: **Get**, **Create**, **Delete**, and **Sign**. By default, **Create** and **Delete** are already enabled. **Sign** should be the only key permission you need to update.
:::image type="content" source="media/verifiable-credentials-configure-tenant/set-key-vault-admin-access-policy.png" alt-text="Screenshot that shows how to configure the admin access policy." border="false":::
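If you'd rather set these permissions from the command line instead of the portal, a minimal sketch with the Azure CLI might look like the following. It assumes the vault uses vault access policies rather than Azure RBAC; the vault name and user principal name are placeholders.

```azurecli-interactive
# Sketch only: grant the Get, Create, Delete, and Sign key permissions on the vault
# to the account you use for this tutorial. Replace the placeholder values.
az keyvault set-policy \
  --name <key-vault-name> \
  --upn <user-principal-name> \
  --key-permissions get create delete sign
```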
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/whats-new.md
This article lists the latest features, improvements, and changes in the Microsoft Entra Verified ID service.
+## May 2023
+
+- Wallet Library was announced at Build 2023 in session [Reduce fraud and improve engagement using Digital Wallets](https://build.microsoft.com/en-US/sessions/4ca41843-1b3f-4ee6-955e-9e2326733be8). The Wallet Library enables customers to add verifiable credentials technology to their own mobile apps. The libraries are available for [Android](https://github.com/microsoft/entra-verifiedid-wallet-library-android/tree/dev) and [iOS](https://github.com/microsoft/entra-verifiedid-wallet-library-ios/tree/dev).
+ ## March 2023 - Admin API now supports [application access tokens](admin-api.md#authentication) in addition to user bearer tokens.
Microsoft Entra Verified ID is now generally available (GA) as the new member of
## June 2022 -- We're adding support for the [did:web](https://w3c-ccg.github.io/did-method-web/) method. Any new tenant that starts using the Verifiable Credentials Service after June 14, 2022 will have Web as a new, default, trust system when [onboarding](verifiable-credentials-configure-tenant.md#set-up-verified-id). VC Administrators can still choose to use ION when setting a tenant. If you want to use did:web instead of ION or viceversa, you'll need to [reconfigure your tenant](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service).
+- We're adding support for the [did:web](https://w3c-ccg.github.io/did-method-web/) method. Any new tenant that starts using the Verifiable Credentials Service after June 14, 2022 will have Web as a new, default, trust system when [onboarding](verifiable-credentials-configure-tenant.md#set-up-verified-id). VC Administrators can still choose to use ION when setting up a tenant. If you want to use did:web instead of ION or vice versa, you need to [reconfigure your tenant](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service).
- We're rolling out several features to improve the overall experience of creating verifiable credentials in the Entra Verified ID platform: - Introducing Managed Credentials, which are verifiable credentials that no longer use Azure Storage to store the [display & rules JSON definitions](rules-and-display-definitions-model.md). Their display and rule definitions are different from earlier versions. - Create Managed Credentials using the [new quickstart experience](how-to-use-quickstart.md).
Applications that use the Microsoft Entra Verified ID service must use the Reque
| Europe | `https://beta.eu.did.msidentity.com/v1.0/{tenantID}/verifiablecredentials/request` | | Non-EU | `https://beta.did.msidentity.com/v1.0/{tenantID}/verifiablecredentials/request` |
-To confirm which endpoint you should use, we recommend checking your Azure AD tenant's region as described above. If the Azure AD tenant is in the EU, you should use the Europe endpoint.
+To confirm which endpoint you should use, we recommend checking your Azure AD tenant's region as described previously. If the Azure AD tenant is in the EU, you should use the Europe endpoint.
### Credential Revocation with Enhanced Privacy
Sample contract file:
### Microsoft Authenticator DID Generation Update
-We're making protocol updates in Microsoft Authenticator to support Single Long Form DID, thus deprecating the use of pairwise. With this update, your DID in Microsoft Authenticator will be used of every issuer and relaying party exchange. Holders of verifiable credentials using Microsoft Authenticator must get their verifiable credentials reissued as any previous credentials aren't going to continue working.
+We're making protocol updates in Microsoft Authenticator to support Single Long Form DID, thus deprecating the use of pairwise. With this update, your DID in Microsoft Authenticator is used for every issuer and relying party exchange. Holders of verifiable credentials using Microsoft Authenticator must get their verifiable credentials reissued, as previous credentials will no longer work.
## December 2021
aks Node Auto Repair https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-auto-repair.md
If AKS identifies an unhealthy node that remains unhealthy for *five* minutes, A
AKS engineers investigate alternative remediations if auto-repair is unsuccessful.
-If you want the remediator to reimage the node, you can add the `nodeCondition "customerMarkedAsUnhealthy": true`.
- ## Node auto-drain [Scheduled events][scheduled-events] can occur on the underlying VMs in any of your node pools. For [spot node pools][spot-node-pools], scheduled events may cause a *preempt* node event for the node. Certain node events, such as *preempt*, cause AKS node auto-drain to attempt a cordon and drain of the affected node. This process enables rescheduling for any affected workloads on that node. You might notice the node receives a taint with `"remediator.aks.microsoft.com/unschedulable"`, because of `"kubernetes.azure.com/scalesetpriority: spot"`.
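To check whether auto-drain has tainted a node, one quick way is to list node taints with kubectl. This is a generic Kubernetes query, not an AKS-specific command; the taint key mentioned above is what you'd look for in the output.

```bash
# List every node with its current taints; auto-drained nodes show the
# remediator.aks.microsoft.com/unschedulable taint described above.
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
```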
aks Quickstart Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-event-grid.md
Title: Subscribe to Azure Kubernetes Service events with Azure Event Grid
-description: Use Azure Event Grid to subscribe to Azure Kubernetes Service events
+ Title: Subscribe to Azure Kubernetes Service (AKS) events with Azure Event Grid
+description: Learn how to use Azure Event Grid to subscribe to Azure Kubernetes Service (AKS) events.
Previously updated : 07/12/2021 Last updated : 06/16/2023 # Quickstart: Subscribe to Azure Kubernetes Service (AKS) events with Azure Event Grid Azure Event Grid is a fully managed event routing service that provides uniform event consumption using a publish-subscribe model.
-In this quickstart, you'll create an AKS cluster and subscribe to AKS events.
+In this quickstart, you create an Azure Kubernetes Service (AKS) cluster and subscribe to AKS events with Azure Event Grid.
## Prerequisites
In this quickstart, you'll create an AKS cluster and subscribe to AKS events.
### [Azure CLI](#tab/azure-cli)
-Create an AKS cluster using the [az aks create][az-aks-create] command. The following example creates a resource group *MyResourceGroup* and a cluster named *MyAKS* with one node in the *MyResourceGroup* resource group:
+1. Create an Azure resource group using the [`az group create`][az-group-create] command.
-```azurecli-interactive
-az group create --name MyResourceGroup --location eastus
-az aks create -g MyResourceGroup -n MyAKS --location eastus --node-count 1 --generate-ssh-keys
-```
+ ```azurecli-interactive
+ az group create --name myResourceGroup --location eastus
+ ```
+
+2. Create an AKS cluster using the [`az aks create`][az-aks-create] command.
+
+ ```azurecli-interactive
+ az aks create -g myResourceGroup -n myManagedCluster --location eastus --node-count 1 --generate-ssh-keys
+ ```
### [Azure PowerShell](#tab/azure-powershell)
-Create an AKS cluster using the [New-AzAksCluster][new-azakscluster] command. The following example creates a resource group *MyResourceGroup* and a cluster named *MyAKS* with one node in the *MyResourceGroup* resource group:
+1. Create an Azure resource group using the [`New-AzResourceGroup`][new-azresourcegroup] cmdlet.
+
+ ```azurepowershell-interactive
+ New-AzResourceGroup -Name myResourceGroup -Location eastus
+ ```
+
+2. Create an AKS cluster using the [`New-AzAksCluster`][new-azakscluster] cmdlet.
-```azurepowershell-interactive
-New-AzResourceGroup -Name MyResourceGroup -Location eastus
-New-AzAksCluster -ResourceGroupName MyResourceGroup -Name MyAKS -Location eastus -NodeCount 1 -GenerateSshKey
-```
+ ```azurepowershell-interactive
+ New-AzAksCluster -ResourceGroupName MyResourceGroup -Name MyAKS -Location eastus -NodeCount 1 -GenerateSshKey
+ ```
New-AzAksCluster -ResourceGroupName MyResourceGroup -Name MyAKS -Location eastus
### [Azure CLI](#tab/azure-cli)
-Create a namespace and event hub using [az eventhubs namespace create][az-eventhubs-namespace-create] and [az eventhubs eventhub create][az-eventhubs-eventhub-create]. The following example creates a namespace *MyNamespace* and an event hub *MyEventGridHub* in *MyNamespace*, both in the *MyResourceGroup* resource group.
-
-```azurecli-interactive
-az eventhubs namespace create --location eastus --name MyNamespace -g MyResourceGroup
-az eventhubs eventhub create --name MyEventGridHub --namespace-name MyNamespace -g MyResourceGroup
-```
-
-> [!NOTE]
-> The *name* of your namespace must be unique.
-
-Subscribe to the AKS events using [az eventgrid event-subscription create][az-eventgrid-event-subscription-create]:
-
-```azurecli-interactive
-SOURCE_RESOURCE_ID=$(az aks show -g MyResourceGroup -n MyAKS --query id --output tsv)
-ENDPOINT=$(az eventhubs eventhub show -g MyResourceGroup -n MyEventGridHub --namespace-name MyNamespace --query id --output tsv)
-az eventgrid event-subscription create --name MyEventGridSubscription \
source-resource-id $SOURCE_RESOURCE_ID \endpoint-type eventhub \endpoint $ENDPOINT
-```
-
-Verify your subscription to AKS events using `az eventgrid event-subscription list`:
-
-```azurecli-interactive
-az eventgrid event-subscription list --source-resource-id $SOURCE_RESOURCE_ID
-```
-
-The following example output shows you're subscribed to events from the *MyAKS* cluster and those events are delivered to the *MyEventGridHub* event hub:
-
-```output
-[
- {
- "deadLetterDestination": null,
- "deadLetterWithResourceIdentity": null,
- "deliveryWithResourceIdentity": null,
- "destination": {
- "deliveryAttributeMappings": null,
- "endpointType": "EventHub",
- "resourceId": "/subscriptions/SUBSCRIPTION_ID/resourceGroups/MyResourceGroup/providers/Microsoft.EventHub/namespaces/MyNamespace/eventhubs/MyEventGridHub"
- },
- "eventDeliverySchema": "EventGridSchema",
- "expirationTimeUtc": null,
- "filter": {
- "advancedFilters": null,
- "enableAdvancedFilteringOnArrays": null,
- "includedEventTypes": [
- "Microsoft.ContainerService.NewKubernetesVersionAvailable"
- ],
- "isSubjectCaseSensitive": null,
- "subjectBeginsWith": "",
- "subjectEndsWith": ""
- },
- "id": "/subscriptions/SUBSCRIPTION_ID/resourceGroups/MyResourceGroup/providers/Microsoft.ContainerService/managedClusters/MyAKS/providers/Microsoft.EventGrid/eventSubscriptions/MyEventGridSubscription",
- "labels": null,
- "name": "MyEventGridSubscription",
- "provisioningState": "Succeeded",
- "resourceGroup": "MyResourceGroup",
- "retryPolicy": {
- "eventTimeToLiveInMinutes": 1440,
- "maxDeliveryAttempts": 30
- },
- "systemData": null,
- "topic": "/subscriptions/SUBSCRIPTION_ID/resourceGroups/MyResourceGroup/providers/microsoft.containerservice/managedclusters/MyAKS",
- "type": "Microsoft.EventGrid/eventSubscriptions"
- }
-]
-```
+1. Create a namespace using the [`az eventhubs namespace create`][az-eventhubs-namespace-create] command. Your namespace name must be unique.
+
+ ```azurecli-interactive
+ az eventhubs namespace create --location eastus --name myNamespace -g myResourceGroup
+ ```
+
+2. Create an event hub using the [`az eventhubs eventhub create`][az-eventhubs-eventhub-create] command.
+
+ ```azurecli-interactive
+ az eventhubs eventhub create --name myEventGridHub --namespace-name myNamespace -g myResourceGroup
+ ```
+
+3. Subscribe to the AKS events using the [`az eventgrid event-subscription create`][az-eventgrid-event-subscription-create] command.
+
+ ```azurecli-interactive
+ SOURCE_RESOURCE_ID=$(az aks show -g MyResourceGroup -n MyAKS --query id --output tsv)
+
+ ENDPOINT=$(az eventhubs eventhub show -g MyResourceGroup -n MyEventGridHub --namespace-name MyNamespace --query id --output tsv)
+
+ az eventgrid event-subscription create --name MyEventGridSubscription \
+ --source-resource-id $SOURCE_RESOURCE_ID \
+ --endpoint-type eventhub \
+ --endpoint $ENDPOINT
+ ```
+
+4. Verify your subscription to AKS events using the [`az eventgrid event-subscription list`][az-eventgrid-event-subscription-list] command.
+
+ ```azurecli-interactive
+ az eventgrid event-subscription list --source-resource-id $SOURCE_RESOURCE_ID
+ ```
+
+ The following example output shows you're subscribed to events from the `myManagedCluster` cluster and those events are delivered to the `myEventGridHub` event hub:
+
+ ```output
+ [
+ {
+ "deadLetterDestination": null,
+ "deadLetterWithResourceIdentity": null,
+ "deliveryWithResourceIdentity": null,
+ "destination": {
+ "deliveryAttributeMappings": null,
+ "endpointType": "EventHub",
+ "resourceId": "/subscriptions/SUBSCRIPTION_ID/resourceGroups/myResourceGroup/providers/Microsoft.EventHub/namespaces/myNamespace/eventhubs/myEventGridHub"
+ },
+ "eventDeliverySchema": "EventGridSchema",
+ "expirationTimeUtc": null,
+ "filter": {
+ "advancedFilters": null,
+ "enableAdvancedFilteringOnArrays": null,
+ "includedEventTypes": [
+ "Microsoft.ContainerService.NewKubernetesVersionAvailable"
+ ],
+ "isSubjectCaseSensitive": null,
+ "subjectBeginsWith": "",
+ "subjectEndsWith": ""
+ },
+ "id": "/subscriptions/SUBSCRIPTION_ID/resourceGroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/myManagedCluster/providers/Microsoft.EventGrid/eventSubscriptions/myEventGridSubscription",
+ "labels": null,
+ "name": "myEventGridSubscription",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "myResourceGroup",
+ "retryPolicy": {
+ "eventTimeToLiveInMinutes": 1440,
+ "maxDeliveryAttempts": 30
+ },
+ "systemData": null,
+ "topic": "/subscriptions/SUBSCRIPTION_ID/resourceGroups/myResourceGroup/providers/microsoft.containerservice/managedclusters/myManagedCluster",
+ "type": "Microsoft.EventGrid/eventSubscriptions"
+ }
+ ]
+ ```
### [Azure PowerShell](#tab/azure-powershell)
-Create a namespace and event hub using [New-AzEventHubNamespace][new-azeventhubnamespace] and [New-AzEventHub][new-azeventhub]. The following example creates a namespace *MyNamespace* and an event hub *MyEventGridHub* in *MyNamespace*, both in the *MyResourceGroup* resource group.
-
-```azurepowershell-interactive
-New-AzEventHubNamespace -Location eastus -Name MyNamespace -ResourceGroupName MyResourceGroup
-New-AzEventHub -Name MyEventGridHub -Namespace MyNamespace -ResourceGroupName MyResourceGroup
-```
-
-> [!NOTE]
-> The *name* of your namespace must be unique.
-
-Subscribe to the AKS events using [New-AzEventGridSubscription][new-azeventgridsubscription]:
-
-```azurepowershell-interactive
-$SOURCE_RESOURCE_ID = (Get-AzAksCluster -ResourceGroupName MyResourceGroup -Name MyAKS).Id
-$ENDPOINT = (Get-AzEventHub -ResourceGroupName MyResourceGroup -EventHubName MyEventGridHub -Namespace MyNamespace).Id
-$params = @{
- EventSubscriptionName = 'MyEventGridSubscription'
- ResourceId = $SOURCE_RESOURCE_ID
- EndpointType = 'eventhub'
- Endpoint = $ENDPOINT
-}
-New-AzEventGridSubscription @params
-```
-
-Verify your subscription to AKS events using `Get-AzEventGridSubscription`:
-
-```azurepowershell-interactive
-Get-AzEventGridSubscription -ResourceId $SOURCE_RESOURCE_ID | Select-Object -ExpandProperty PSEventSubscriptionsList
-```
-
-The following example output shows you're subscribed to events from the *MyAKS* cluster and those events are delivered to the *MyEventGridHub* event hub:
-
-```Output
-EventSubscriptionName : MyEventGridSubscription
-Id : /subscriptions/SUBSCRIPTION_ID/resourceGroups/MyResourceGroup/providers/Microsoft.ContainerService/managedClusters/MyAKS/providers/Microsoft.EventGrid/eventSubscriptions/MyEventGridSubscription
-Type : Microsoft.EventGrid/eventSubscriptions
-Topic : /subscriptions/SUBSCRIPTION_ID/resourceGroups/myresourcegroup/providers/microsoft.containerservice/managedclusters/myaks
-Filter : Microsoft.Azure.Management.EventGrid.Models.EventSubscriptionFilter
-Destination : Microsoft.Azure.Management.EventGrid.Models.EventHubEventSubscriptionDestination
-ProvisioningState : Succeeded
-Labels :
-EventTtl : 1440
-MaxDeliveryAttempt : 30
-EventDeliverySchema : EventGridSchema
-ExpirationDate :
-DeadLetterEndpoint :
-Endpoint : /subscriptions/SUBSCRIPTION_ID/resourceGroups/MyResourceGroup/providers/Microsoft.EventHub/namespaces/MyNamespace/eventhubs/MyEventGridHub
-```
+1. Create a namespace using the [`New-AzEventHubNamespace`][new-azeventhubnamespace] cmdlet. Your namespace name must be unique.
+
+ ```azurepowershell-interactive
+ New-AzEventHubNamespace -Location eastus -Name MyNamespace -ResourceGroupName MyResourceGroup
+ ```
+
+2. Create an event hub using the [`New-AzEventHub`][new-azeventhub] cmdlet.
+
+ ```azurepowershell-interactive
+ New-AzEventHub -Name MyEventGridHub -Namespace MyNamespace -ResourceGroupName MyResourceGroup
+ ```
+
+3. Subscribe to the AKS events using the [`New-AzEventGridSubscription`][new-azeventgridsubscription] cmdlet.
+
+ ```azurepowershell-interactive
+ $SOURCE_RESOURCE_ID = (Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myManagedCluster).Id
+
+ $ENDPOINT = (Get-AzEventHub -ResourceGroupName myResourceGroup -EventHubName myEventGridHub -Namespace myNamespace).Id
+
+ $params = @{
+ EventSubscriptionName = 'myEventGridSubscription'
+ ResourceId = $SOURCE_RESOURCE_ID
+ EndpointType = 'eventhub'
+ Endpoint = $ENDPOINT
+ }
+
+ New-AzEventGridSubscription @params
+ ```
+
+4. Verify your subscription to AKS events using the [`Get-AzEventGridSubscription`][get-azeventgridsubscription] cmdlet.
+
+ ```azurepowershell-interactive
+ Get-AzEventGridSubscription -ResourceId $SOURCE_RESOURCE_ID | Select-Object -ExpandProperty PSEventSubscriptionsList
+ ```
+
+ The following example output shows you're subscribed to events from the `myManagedCluster` cluster and those events are delivered to the `myEventGridHub` event hub:
+
+ ```Output
+ EventSubscriptionName : myEventGridSubscription
+ Id : /subscriptions/SUBSCRIPTION_ID/resourceGroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/myManagedCluster/providers/Microsoft.EventGrid/eventSubscriptions/myEventGridSubscription
+ Type : Microsoft.EventGrid/eventSubscriptions
+ Topic : /subscriptions/SUBSCRIPTION_ID/resourceGroups/myResourceGroup/providers/microsoft.containerservice/managedclusters/myManagedCluster
+ Filter : Microsoft.Azure.Management.EventGrid.Models.EventSubscriptionFilter
+ Destination : Microsoft.Azure.Management.EventGrid.Models.EventHubEventSubscriptionDestination
+ ProvisioningState : Succeeded
+ Labels :
+ EventTtl : 1440
+ MaxDeliveryAttempt : 30
+ EventDeliverySchema : EventGridSchema
+ ExpirationDate :
+ DeadLetterEndpoint :
+ Endpoint : /subscriptions/SUBSCRIPTION_ID/resourceGroups/myResourceGroup/providers/Microsoft.EventHub/namespaces/myNamespace/eventhubs/myEventGridHub
+ ```
-When AKS events occur, you'll see those events appear in your event hub. For example, when the list of available Kubernetes versions for your clusters changes, you'll see a `Microsoft.ContainerService.NewKubernetesVersionAvailable` event. For more information on the events AKS emits, see [Azure Kubernetes Service (AKS) as an Event Grid source][aks-events].
+When AKS events occur, the events appear in your event hub. For example, when the list of available Kubernetes versions for your clusters changes, you see a `Microsoft.ContainerService.NewKubernetesVersionAvailable` event. For more information on the events AKS emits, see [Azure Kubernetes Service (AKS) as an Event Grid source][aks-events].
## Delete the cluster and subscriptions ### [Azure CLI](#tab/azure-cli)
-Use the [az group delete][az-group-delete] command to remove the resource group, the AKS cluster, namespace, and event hub, and all related resources.
+* Remove the resource group, AKS cluster, namespace, event hub, and all related resources using the [`az group delete`][az-group-delete] command.
-```azurecli-interactive
-az group delete --name MyResourceGroup --yes --no-wait
-```
+ ```azurecli-interactive
+ az group delete --name myResourceGroup --yes --no-wait
+ ```
### [Azure PowerShell](#tab/azure-powershell)
-Use the [Remove-AzResourceGroup][remove-azresourcegroup] cmdlet to remove the resource group, the AKS cluster, namespace, and event hub, and all related resources.
+* Remove the resource group, AKS cluster, namespace, event hub, and all related resources using the [`Remove-AzResourceGroup`][remove-azresourcegroup] cmdlet.
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name MyResourceGroup
-```
+ ```azurepowershell-interactive
+ Remove-AzResourceGroup -Name myResourceGroup
+ ```
-> [!NOTE]
-> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete].
->
-> If you used a managed identity, the identity is managed by the platform and does not require removal.
+ > [!NOTE]
+ > When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster isn't removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete].
+ >
+ > If you used a managed identity, the identity is managed by the platform and doesn't require removal.
## Next steps In this quickstart, you deployed a Kubernetes cluster and then subscribed to AKS events in Azure Event Hubs.
-To learn more about AKS, and walk through a complete code to deployment example, continue to the Kubernetes cluster tutorial.
+To learn more about AKS, and walk through a complete code to deployment example, continue to the following Kubernetes cluster tutorial.
> [!div class="nextstepaction"] > [AKS tutorial][aks-tutorial]
To learn more about AKS, and walk through a complete code to deployment example,
[az-group-delete]: /cli/azure/group#az_group_delete [sp-delete]: kubernetes-service-principal.md#other-considerations [remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup
+[az-group-create]: /cli/azure/group#az_group_create
+[az-eventgrid-event-subscription-list]: /cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-list
+[get-azeventgridsubscription]: /powershell/module/az.eventgrid/get-azeventgridsubscription
+[new-azresourcegroup]: /powershell/module/az.resources/new-azresourcegroup
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
Note important changes to make, before you upgrade to any of the available minor
| 1.24 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.9.3<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Ingress AppGateway 1.2.1<br>Eraser v1.1.1<br>Azure Workload Identity V1.1.1<br>ASC Defender 1.0.56<br>AAD Pod Identity 1.8.13.6<br>Gitops 1.7.0<br>KMS 0.5.0| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 18.04 Cgroups V1 <br>ContainerD 1.7<br>| No Breaking Changes | None | 1.25 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.9.3<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Ingress AppGateway 1.2.1<br>Eraser v1.1.1<br>Azure Workload Identity V1.1.1<br>ASC Defender 1.0.56<br>AAD Pod Identity 1.8.13.6<br>Gitops 1.7.0<br>KMS 0.5.0| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 18.04 Cgroups V1 <br>ContainerD 1.7<br>| Ubuntu 22.04 by default with cgroupv2 and Overlay VPA 0.13.0 |CgroupsV2 - If you deploy Java applications with the JDK, prefer to use JDK 11.0.16 and later or JDK 15 and later, which fully support cgroup v2 | 1.26 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.9.3<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Ingress AppGateway 1.2.1<br>Eraser v1.1.1<br>Azure Workload Identity V1.1.1<br>ASC Defender 1.0.56<br>AAD Pod Identity 1.8.13.6<br>Gitops 1.7.0<br>KMS 0.5.0| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7<br>|No Breaking Changes |None
-| 1.27 Preview | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.10.0<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Ingress AppGateway 1.2.1<br>Eraser v1.1.1<br>Azure Workload Identity V1.1.1<br>ASC Defender 1.0.56<br>AAD Pod Identity 1.8.13.6<br>Gitops 1.7.0<br>KMS 0.5.0|Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V1 <br>ContainerD 1.7<br>|Keda 2.10.0 |None
+| 1.27 Preview | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.10.0<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Ingress AppGateway 1.2.1<br>Eraser v1.1.1<br>Azure Workload Identity V1.1.1<br>ASC Defender 1.0.56<br>AAD Pod Identity 1.8.13.6<br>Gitops 1.7.0<br>KMS 0.5.0|Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V1 <br>ContainerD 1.7 for Linux and 1.6 for Windows<br>|Keda 2.10.0 |Because of Ubuntu 22.04 FIPS certification status, we'll switch AKS FIPS nodes from 18.04 to 20.04 from 1.27 preview onwards.
## Alias minor version > [!NOTE]
aks Workload Identity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-overview.md
The following client libraries are the **minimum** version required
| Language | Library | Image | Example | Has Windows | |--|--|-|-|-|
-| .NET | [microsoft-authentication-library-for-dotnet](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) | ghcr.io/azure/azure-workload-identity/msal-net | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-net/akvdotnet) | Yes |
-| Go | [microsoft-authentication-library-for-go](https://github.com/AzureAD/microsoft-authentication-library-for-go) | ghcr.io/azure/azure-workload-identity/msal-go | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-go) | Yes |
-| Java | [microsoft-authentication-library-for-java](https://github.com/AzureAD/microsoft-authentication-library-for-java) | ghcr.io/azure/azure-workload-identity/msal-java | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-java) | No |
-| JavaScript | [microsoft-authentication-library-for-js](https://github.com/AzureAD/microsoft-authentication-library-for-js) | ghcr.io/azure/azure-workload-identity/msal-node | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-node) | No |
-| Python | [microsoft-authentication-library-for-python](https://github.com/AzureAD/microsoft-authentication-library-for-python) | ghcr.io/azure/azure-workload-identity/msal-python | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-python) | No |
+| .NET | [microsoft-authentication-library-for-dotnet](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) | ghcr.io/azure/azure-workload-identity/msal-net:latest | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-net/akvdotnet) | Yes |
+| Go | [microsoft-authentication-library-for-go](https://github.com/AzureAD/microsoft-authentication-library-for-go) | ghcr.io/azure/azure-workload-identity/msal-go:latest | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-go) | Yes |
+| Java | [microsoft-authentication-library-for-java](https://github.com/AzureAD/microsoft-authentication-library-for-java) | ghcr.io/azure/azure-workload-identity/msal-java:latest | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-java) | No |
+| JavaScript | [microsoft-authentication-library-for-js](https://github.com/AzureAD/microsoft-authentication-library-for-js) | ghcr.io/azure/azure-workload-identity/msal-node:latest | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-node) | No |
+| Python | [microsoft-authentication-library-for-python](https://github.com/AzureAD/microsoft-authentication-library-for-python) | ghcr.io/azure/azure-workload-identity/msal-python:latest | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-python) | No |
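As a quick sanity check of the image references above, you can pull one of the example images locally. The `:latest` tag matches the table; pinning to a specific tag is up to you.

```bash
# Pull the Go example image referenced in the table above.
docker pull ghcr.io/azure/azure-workload-identity/msal-go:latest
```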
## Limitations
If you've used [Azure AD pod-managed identity][use-azure-ad-pod-identity], think
### Service account annotations
+All annotations are optional. If the annotation is not specified, the default value will be used.
+ |Annotation |Description |Default | |--||--| |`azure.workload.identity/client-id` |Represents the Azure AD application<br> client ID to be used with the pod. ||
If you've used [Azure AD pod-managed identity][use-azure-ad-pod-identity], think
### Pod annotations
+All annotations are optional. If the annotation is not specified, the default value will be used.
+ |Annotation |Description |Default | |--||--| |`azure.workload.identity/service-account-token-expiration` |Represents the `expirationSeconds` field for the projected service account token. It's an optional field that you configure to prevent any downtime caused by errors during service account token refresh. Kubernetes service account token expiry isn't correlated with Azure AD tokens. Azure AD tokens expire in 24 hours after they're issued. <sup>1</sup> |3600<br> Supported range is 3600-86400. |
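For reference, the client ID annotation from the service account table can also be applied imperatively. This is an illustrative sketch only; the service account name, namespace, and client ID are placeholders.

```bash
# Add (or update) the optional workload identity client ID annotation
# on an existing Kubernetes service account.
kubectl annotate serviceaccount <service-account-name> \
  azure.workload.identity/client-id=<application-client-id> \
  --namespace <namespace> --overwrite
```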
api-management Api Management Configuration Repository Git https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-configuration-repository-git.md
The `apis` folder contains a folder for each API in the service instance, which
* `apis\<api name>\operations\` - Folder containing `<operation name>.description.html` files that map to the operations in the API. Each file contains the description of a single operation in the API, which maps to the `description` property of the [operation entity](/rest/api/apimanagement/current-ga/operation) in the REST API. ### apiVersionSets folder
-The `apiVerionSets` folder contains a folder for each API version set created for an API, and contains the following items.
+The `apiVersionSets` folder contains a folder for each API version set created for an API, and contains the following items.
* `apiVersionSets\<api version set Id>\configuration.json` - Configuration for the version set. This is the same information that would be returned if you were to call the [Get a specific version set](/rest/api/apimanagement/current-ga/api-version-set/get) operation.
api-management Api Management Howto Mutual Certificates For Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-mutual-certificates-for-clients.md
You can also create policy expressions with the [`context` variable](api-managem
> [!IMPORTANT] > * Starting May 2021, the `context.Request.Certificate` property only requests the certificate when the API Management instance's [`hostnameConfiguration`](/rest/api/apimanagement/current-ga/api-management-service/create-or-update#hostnameconfiguration) sets the `negotiateClientCertificate` property to True. By default, `negotiateClientCertificate` is set to False.
-> * If TLS renegotiation is disabled in your client, you may see TLS errors when requesting the certificate using the `context.Request.Certificate` property. If this occurs, enable TLS renegotation settings in the client.
+> * If TLS renegotiation is disabled in your client, you may see TLS errors when requesting the certificate using the `context.Request.Certificate` property. If this occurs, enable TLS renegotiation settings in the client.
### Checking the issuer and subject
api-management Api Management Template Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-template-resources.md
The following localization options are supported:
|ValidationErrorCredentialsInvalid|Email or password is invalid. Please correct the errors and try again.| |WebAuthenticationRequestIsNotValid|Request is not valid| |WebAuthenticationUserIsNotConfirm|Please confirm your registration before attempting to sign in.|
-|WebAuthenticationInvalidEmailFormated|Email is invalid: {0}|
+|WebAuthenticationInvalidEmailFormatted|Email is invalid: {0}|
|WebAuthenticationUserNotFound|User not found| |WebAuthenticationTenantNotRegistered|Your account belongs to an Azure Active Directory tenant which is not authorized to access this portal.| |WebAuthenticationAuthenticationFailed|Authentication has failed.|
api-management How To Deploy Self Hosted Gateway Azure Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-azure-arc.md
Previously updated : 05/25/2021 Last updated : 06/12/2023 # Deploy an Azure API Management gateway on Azure Arc (preview)
Deploying the API Management gateway on an Azure Arc-enabled Kubernetes cluster
```azurecli az k8s-extension create --cluster-type connectedClusters --cluster-name <cluster-name> \
- --resource-group <rg-name> --name <extension-name> --extension-type Microsoft.ApiManagement.Gateway \
- --scope namespace --target-namespace <namespace> \
- --configuration-settings gateway.endpoint='<Configuration URL>' \
- --configuration-protected-settings gateway.authKey='<token>' \
- --configuration-settings service.type='LoadBalancer' --release-train preview
+ --resource-group <rg-name> --name <extension-name> --extension-type Microsoft.ApiManagement.Gateway \
+ --scope namespace --target-namespace <namespace> \
+ --configuration-settings gateway.configuration.uri='<Configuration URL>' \
+ --config-protected-settings gateway.auth.token='<token>' \
+ --configuration-settings service.type='LoadBalancer' --release-train preview
``` > [!TIP]
- > `-protected-` flag for `authKey` is optional, but recommended.
+ > `-protected-` flag for `gateway.auth.token` is optional, but recommended.
1. Verify deployment status using the following CLI command: ```azurecli
Deploying the API Management gateway on an Azure Arc-enabled Kubernetes cluster
## Deploy the API Management gateway extension using Azure portal 1. In the Azure portal, navigate to your Azure Arc-connected cluster.
-1. In the left menu, select **Extensions (preview)** > **+ Add** > **API Management gateway (preview)**.
+1. In the left menu, select **Extensions** > **+ Add** > **API Management gateway (preview)**.
1. Select **Create**. 1. In the **Install API Management gateway** window, configure the gateway extension: * Select the subscription and resource group for your API Management instance.
Deploying the API Management gateway on an Azure Arc-enabled Kubernetes cluster
## Available extension configurations
+The self-hosted gateway extension for Azure Arc provides many configuration settings to customize the extension for your environment. This section lists required deployment settings and optional settings for integration with Log Analytics. For a complete list of settings, see the self-hosted gateway extension [reference](self-hosted-gateway-arc-reference.md).
+
+### Required settings
+ The following extension configurations are **required**. | Setting | Description | | - | -- |
-| `gateway.endpoint` | The gateway endpoint's Configuration URL. |
-| `gateway.authKey` | Token for access to the gateway. |
+| `gateway.configuration.uri` | Configuration endpoint in API Management service for the self-hosted gateway. |
+| `gateway.auth.token` | Gateway token (authentication key) to authenticate to API Management service. Typically starts with `GatewayKey`. |
| `service.type` | Kubernetes service configuration for the gateway: `LoadBalancer`, `NodePort`, or `ClusterIP`. | ### Log Analytics settings
To enable monitoring of the self-hosted gateway, configure the following Log Ana
* Discover all [Azure Arc-enabled Kubernetes extensions](../azure-arc/kubernetes/extensions.md). * Learn more about [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md). * Learn more about guidance to [run the self-hosted gateway on Kubernetes in production](how-to-self-hosted-gateway-on-kubernetes-in-production.md).
+* For configuration options, see the self-hosted gateway extension [reference](self-hosted-gateway-arc-reference.md).
api-management Import Logic App As Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-logic-app-as-api.md
In this article, you learn how to:
> - Test the API in the Azure portal > [!NOTE]
-> API Management supports automated import of a Logic App (Consumption) resource. which runs in the multi-tenant Logic Apps environment. Learn more about [single-tenant versus muti-tenant Logic Apps](../logic-apps/single-tenant-overview-compare.md).
+> API Management supports automated import of a Logic App (Consumption) resource, which runs in the multi-tenant Logic Apps environment. Learn more about [single-tenant versus multi-tenant Logic Apps](../logic-apps/single-tenant-overview-compare.md).
## Prerequisites
api-management Json To Xml Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/json-to-xml-policy.md
The `json-to-xml` policy converts a request or response body from JSON to XML.
consider-accept-header="true | false" parse-date="true | false" namespace-separator="separator character"
- namespace-prefix="namepsace prefix"
+ namespace-prefix="namespace prefix"
attribute-block-name="name" /> ```
api-management Sap Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/sap-api.md
In this article, you'll:
1. From the side navigation menu, under the **APIs** section, select **APIs**. 1. Under **Create a new definition**, select **OpenAPI specification**.
- :::image type="content" source="./media/import-api-from-oas/oas-api.png" alt-text="OpenAPI specifiction":::
+ :::image type="content" source="./media/import-api-from-oas/oas-api.png" alt-text="OpenAPI specification":::
1. Click **Select a file**, and select the `openapi-spec.json` file that you saved locally in a previous step.
api-management Self Hosted Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-overview.md
Previously updated : 07/11/2022 Last updated : 06/14/2023
The self-hosted gateway is an optional, containerized version of the default managed gateway included in every API Management service. It's useful for scenarios such as placing gateways in the same environments where you host your APIs. Use the self-hosted gateway to improve API traffic flow and address API security and compliance requirements.
-This article explains how the self-hosted gateway feature of Azure API Management enables hybrid and multi-cloud API management, presents its high-level architecture, and highlights its capabilities.
+This article explains how the self-hosted gateway feature of Azure API Management enables hybrid and multicloud API management, presents its high-level architecture, and highlights its capabilities.
For an overview of the features across the various gateway offerings, see [API gateway in API Management](api-management-gateways-overview.md#feature-comparison-managed-versus-self-hosted-gateways). [!INCLUDE [api-management-availability-premium-dev](../../includes/api-management-availability-premium-dev.md)]
-## Hybrid and multi-cloud API management
+## Hybrid and multicloud API management
-The self-hosted gateway feature expands API Management support for hybrid and multi-cloud environments and enables organizations to efficiently and securely manage APIs hosted on-premises and across clouds from a single API Management service in Azure.
+The self-hosted gateway feature expands API Management support for hybrid and multicloud environments and enables organizations to efficiently and securely manage APIs hosted on-premises and across clouds from a single API Management service in Azure.
With the self-hosted gateway, customers have the flexibility to deploy a containerized version of the API Management gateway component to the same environments where they host their APIs. All self-hosted gateways are managed from the API Management service they're federated with, thus providing customers with the visibility and unified management experience across all internal and external APIs.
We provide a variety of container images for self-hosted gateways to meet your n
You can find a full list of available tags [here](https://mcr.microsoft.com/product/azure-api-management/gateway/tags).
-<sup>1</sup>Preview versions are not officially supported and are for experimental purposes only.<br/>
+<sup>1</sup>Preview versions aren't officially supported and are for experimental purposes only. See the [self-hosted gateway support policies](self-hosted-gateway-support-policies.md#self-hosted-gateway-container-image-support-coverage). <br/>
### Use of tags in our official deployment options
To operate properly, each self-hosted gateway needs outbound connectivity on por
| Description | Required for v1 | Required for v2 | Notes | |:|:|:|:| | Hostname of the configuration endpoint | `<apim-service-name>.management.azure-api.net` | `<apim-service-name>.configuration.azure-api.net` | Connectivity to v2 endpoint requires DNS resolution of the default hostname. |
-| Public IP address of the API Management instance | ✔️ | ✔️ | IP addresses of primary location is sufficient. |
+| Public IP address of the API Management instance | ✔️ | ✔️ | IP address of primary location is sufficient. |
| Public IP addresses of Azure Storage [service tag](../virtual-network/service-tags-overview.md) | ✔️ | Optional<sup>2</sup> | IP addresses must correspond to primary location of API Management instance. | | Hostname of Azure Blob Storage account | ✔️ | Optional<sup>2</sup> | Account associated with instance (`<blob-storage-account-name>.blob.core.windows.net`) | | Hostname of Azure Table Storage account | ✔️ | Optional<sup>2</sup> | Account associated with instance (`<table-storage-account-name>.table.core.windows.net`) |
To operate properly, each self-hosted gateway needs outbound connectivity on por
> * The associated storage account names are listed in the service's **Network connectivity status** page in the Azure portal. > * Public IP addresses underlying the associated storage accounts are dynamic and can change without notice.
+### Authentication options
+
+To authenticate the connection between the self-hosted gateway and the cloud-based API Management instance's configuration endpoint, you have the following options in the gateway container's [configuration settings](self-hosted-gateway-settings-reference.md).
+
+|Option |Considerations |
+|||
+| [Azure Active Directory authentication](self-hosted-gateway-enable-azure-ad.md) | Configure one or more Azure AD apps for access to gateway<br/><br/>Manage access separately per app<br/><br/>Configure longer expiry times for secrets in accordance with your organization's policies<br/><br/>Use standard Azure AD procedures to assign or revoke user or group permissions to app and to rotate secrets<br/><br/> |
+| Gateway access token (also called authentication key) | Token expires every 30 days at maximum and must be renewed in the containers<br/><br/>Backed by a gateway key that can be rotated independently (for example, to revoke access) <br/><br/>Regenerating gateway key invalidates all access tokens created with it |
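If you choose the gateway access token option, one way to generate a token outside the portal is the API Management management REST API's gateway `generateToken` operation. The sketch below uses `az rest`; the resource names are placeholders, the `api-version` is illustrative, and the expiry is a value you'd choose yourself (at most 30 days out, per the table above).

```azurecli-interactive
# Sketch only: request a gateway access token from the management API.
az rest --method post \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<apim-service-name>/gateways/<gateway-id>/generateToken?api-version=2022-08-01" \
  --body '{ "keyType": "primary", "expiry": "2023-07-15T00:00:00Z" }'
```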
+ ### Connectivity failures When connectivity to Azure is lost, the self-hosted gateway is unable to receive configuration updates, report its status, or upload telemetry.
As of v2.1.1 and above, you can manage the ciphers that are being used through t
- Learn more about the various gateways in our [API gateway overview](api-management-gateways-overview.md) - Learn more about the support policy for the [self-hosted gateway](self-hosted-gateway-support-policies.md)-- Learn more about [API Management in a Hybrid and Multi-Cloud World](https://aka.ms/hybrid-and-multi-cloud-api-management)
+- Learn more about [API Management in a hybrid and multicloud world](https://aka.ms/hybrid-and-multi-cloud-api-management)
- Learn more about guidance for [running the self-hosted gateway on Kubernetes in production](how-to-self-hosted-gateway-on-kubernetes-in-production.md) - [Deploy self-hosted gateway to Docker](how-to-deploy-self-hosted-gateway-docker.md) - [Deploy self-hosted gateway to Kubernetes](how-to-deploy-self-hosted-gateway-kubernetes.md)
app-service Overview Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-security.md
Except for the **Isolated** pricing tier, all tiers run your apps on the shared
- Serve internal application using an internal load balancer (ILB), which allows access only from inside your Azure Virtual Network. The ILB has an IP address from your private subnet, which provides total isolation of your apps from the internet. - [Use an ILB behind a web application firewall (WAF)](environment/integrate-with-application-gateway.md). The WAF offers enterprise-level protection to your public-facing applications, such as DDoS protection, URI filtering, and SQL injection prevention.
+## DDoS protection
+
+For web workloads, we highly recommend utilizing [Azure DDoS protection](../ddos-protection/ddos-protection-overview.md) and a [web application firewall](../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to deploy [Azure Front Door](../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [protection against network-level DDoS attacks](../frontdoor/front-door-ddos.md).
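If you manage these resources from the command line, a minimal sketch for enabling DDoS protection on the virtual network that fronts your workload might look like the following; the resource names are placeholders and your network layout may differ.

```azurecli-interactive
# Sketch only: create a DDoS protection plan and associate it with a virtual network.
az network ddos-protection create \
  --resource-group <resource-group> \
  --name <ddos-plan-name>

az network vnet update \
  --resource-group <resource-group> \
  --name <vnet-name> \
  --ddos-protection true \
  --ddos-protection-plan <ddos-plan-name>
```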
+ For more information, see [Introduction to Azure App Service Environments](environment/intro.md).
app-service Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/security-recommendations.md
This article contains security recommendations for Azure App Service. Implementi
## General | Recommendation | Comments |
-|-|-|-|
+|-|-|
| Stay up to date | Use the latest versions of supported platforms, programming languages, protocols, and frameworks. | ## Identity and access management
This article contains security recommendations for Azure App Service. Implementi
| Use the isolated pricing tier | Except for the isolated pricing tier, all tiers run your apps on the shared network infrastructure in Azure App Service. The isolated tier gives you complete network isolation by running your apps inside a dedicated [App Service environment](environment/intro.md). An App Service environment runs in your own instance of [Azure Virtual Network](../virtual-network/index.yml).| | Use secure connections when accessing on-premises resources | You can use [Hybrid connections](app-service-hybrid-connections.md), [Virtual Network integration](./overview-vnet-integration.md), or [App Service environment's](environment/intro.md) to connect to on-premises resources. | | Limit exposure to inbound network traffic | Network security groups allow you to restrict network access and control the number of exposed endpoints. For more information, see [How To Control Inbound Traffic to an App Service Environment](environment/app-service-app-service-environment-control-inbound-traffic.md). |
+| Protect against DDoS attacks | For web workloads, we highly recommend utilizing [Azure DDoS protection](../ddos-protection/ddos-protection-overview.md) and a [web application firewall](../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to deploy [Azure Front Door](../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [protection against network-level DDoS attacks](../frontdoor/front-door-ddos.md). |
## Monitoring
application-gateway Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/features.md
![Application Gateway conceptual](media/overview/figure1-720.png) > [!NOTE]
-> For web workloads, we highly recommend utilizing [**Azure DDoS protection**](../ddos-protection/ddos-protection-overview.md) and a [**web application firewall**](../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to employ [**Azure Front Door**](../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [**protection against network-level DDoS attacks**](../frontdoor/front-door-ddos.md).
+> For web workloads, we highly recommend utilizing [**Azure DDoS protection**](../ddos-protection/ddos-protection-overview.md) and a [**web application firewall**](../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to deploy [**Azure Front Door**](../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [**protection against network-level DDoS attacks**](../frontdoor/front-door-ddos.md).
Application Gateway includes the following features:
Web Application Firewall (WAF) is a service that provides centralized protection
Web applications are increasingly targets of malicious attacks that exploit common known vulnerabilities. Common among these exploits are SQL injection attacks, cross site scripting attacks to name a few. Preventing such attacks in application code can be challenging and may require rigorous maintenance, patching and monitoring at many layers of the application topology. A centralized web application firewall helps make security management much simpler and gives better assurance to application administrators against threats or intrusions. A WAF solution can also react to a security threat faster by patching a known vulnerability at a central location versus securing each of individual web applications. Existing application gateways can be converted to a Web Application Firewall enabled application gateway easily.
-For more information, see [What is Azure Web Application Firewall?](../web-application-firewall/overview.md).
+Refer to [Application DDoS protection](../web-application-firewall/shared/application-ddos-protection.md) for guidance on how to use Azure WAF with Application Gateway to protect against DDoS attacks. For more information, see [What is Azure Web Application Firewall?](../web-application-firewall/overview.md).
## Ingress Controller for AKS Application Gateway Ingress Controller (AGIC) allows you to use Application Gateway as the ingress for an [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) cluster.
application-gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/overview.md
To learn about Application Gateway features, see [Azure Application Gateway feat
To learn about Application Gateway infrastructure, see [Azure Application Gateway infrastructure configuration](configuration-infrastructure.md).
+## Security
+
+* Protect your applications against layer 7 (L7) DDoS attacks using WAF. For more information, see [Application DDoS protection](../web-application-firewall/shared/application-ddos-protection.md).
+
+* Protect your apps from malicious actors with Bot manager rules based on Microsoft's own Threat Intelligence.
+
+* Secure applications against L3 and L4 DDoS attacks with an [Azure DDoS Protection](../ddos-protection/ddos-protection-overview.md) plan.
+
+* Privately connect to your backend behind Application Gateway with [Private Link](private-link.md) and embrace a zero-trust access model.
+
+* Eliminate risk of data exfiltration and control privacy of communication from within the virtual network with a fully [Private-only Application Gateway deployment](application-gateway-private-deployment.md).
+
+* Provide a centralized security experience for your application via Azure Policy, Azure Advisor, and Microsoft Sentinel integration that ensures consistent security features across apps.
++ ## Pricing and SLA For Application Gateway pricing information, see [Application Gateway pricing](https://azure.microsoft.com/pricing/details/application-gateway/).
applied-ai-services Form Recognizer Container Install Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-install-run.md
- billing={FORM_RECOGNIZER_ENDPOINT_URI} - apiKey={FORM_RECOGNIZER_KEY} - AzureCognitiveServiceLayoutHost=http://azure-cognitive-service-layout:5000
- ports:
+ ports:
- "5000:5050" azure-cognitive-service-layout: container_name: azure-cognitive-service-layout
- EULA=accept - billing={FORM_RECOGNIZER_ENDPOINT_URI} - apiKey={FORM_RECOGNIZER_KEY}- ``` Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
- EULA=accept - billing={FORM_RECOGNIZER_ENDPOINT_URI} - apiKey={FORM_RECOGNIZER_KEY}-- ``` Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
docker-compose up
The following code sample is a self-contained `docker compose` example to run the Form Recognizer ID and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter the {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your ID and Read container instances. ```yml
- version: "3.9"
-
- azure-cognitive-service-receipt:
- container_name: azure-cognitive-service-id-document
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/id-document-3.0
- environment:
- - EULA=accept
- - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - apiKey={FORM_RECOGNIZER_KEY}
- - AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000
- ports:
- - "5000:5050"
- azure-cognitive-service-read:
- container_name: azure-cognitive-service-read
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/read-3.0
- environment:
- - EULA=accept
- - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - apiKey={FORM_RECOGNIZER_KEY}
--
+version: "3.9"
+services:
+  azure-cognitive-service-receipt:
+    container_name: azure-cognitive-service-id-document
+    image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/id-document-3.0
+    environment:
+      - EULA=accept
+      - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+      - apiKey={FORM_RECOGNIZER_KEY}
+      - AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000
+    ports:
+      - "5000:5050"
+  azure-cognitive-service-read:
+    container_name: azure-cognitive-service-read
+    image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/read-3.0
+    environment:
+      - EULA=accept
+      - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+      - apiKey={FORM_RECOGNIZER_KEY}
``` Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
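For reference, the start step is the same `docker-compose up` invocation shown earlier, run from the directory that contains the YAML file:

```bash
# Create and start both containers defined in the compose file.
docker-compose up
```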
automation Overview Monitoring Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/overview-monitoring-agent.md
To enable tracking of Windows Services data, you must upgrade CT extension and u
#### [For Arc-enabled Windows VMs](#tab/win-arc-vm) ```powershell-interactive
-– az connectedmachine extension create --name ChangeTracking-Linux --publisher Microsoft.Azure.ChangeTrackingAndInventory --type ChangeTracking-Windows --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location> --enable-auto-upgrade true
+– az connectedmachine extension create --name ChangeTracking-Windows --publisher Microsoft.Azure.ChangeTrackingAndInventory --type ChangeTracking-Windows --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location> --enable-auto-upgrade true
``` #### [For Arc-enabled Linux VMs](#tab/lin-arc-vm) ```powershell-interactive-- az connectedmachine extension create --name ChangeTracking-Windows --publisher Microsoft.Azure.ChangeTrackingAndInventory --type ChangeTracking-Linux --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location> --enable-auto-upgrade true
+- az connectedmachine extension create --name ChangeTracking-Linux --publisher Microsoft.Azure.ChangeTrackingAndInventory --type ChangeTracking-Linux --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location> --enable-auto-upgrade true
```
azure-arc Extensions Release https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions-release.md
For more information, see [Understand Azure Policy for Kubernetes clusters](../.
## Azure Key Vault Secrets Provider -- **Supported distributions**: AKS on Azure Stack HCI, AKS hybrid clusters provisioned from Azure, Cluster API Azure, Google Kubernetes Engine, Canonical Kubernetes Distribution, OpenShift Kubernetes Distribution, Amazon Elastic Kubernetes Service, VMWare Tanzu Kubernetes Grid
+- **Supported distributions**: AKS on Azure Stack HCI, AKS hybrid clusters provisioned from Azure, Cluster API Azure, Google Kubernetes Engine, Canonical Kubernetes Distribution, OpenShift Kubernetes Distribution, Amazon Elastic Kubernetes Service, VMware Tanzu Kubernetes Grid
The Azure Key Vault Provider for Secrets Store CSI Driver allows for the integration of Azure Key Vault as a secrets store with a Kubernetes cluster via a CSI volume. For Azure Arc-enabled Kubernetes clusters, you can install the Azure Key Vault Secrets Provider extension to fetch secrets.
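For example, a minimal sketch of installing the extension on an Arc-enabled (connected) cluster with the Azure CLI; it assumes the `k8s-extension` CLI extension is installed, and the cluster and resource group names are placeholders:

```bash
# Install the Azure Key Vault Secrets Provider extension on a connected cluster.
az k8s-extension create \
  --cluster-name <cluster-name> \
  --resource-group <resource-group-name> \
  --cluster-type connectedClusters \
  --extension-type Microsoft.AzureKeyVaultSecretsProvider \
  --name akvsecretsprovider
```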
For more information, see [Use the Azure Key Vault Secrets Provider extension to
## Microsoft Defender for Containers -- **Supported distributions**: AKS hybrid clusters provisioned from Azure, Cluster API Azure, Azure Red Hat OpenShift, Red Hat OpenShift (version 4.6 or newer), Google Kubernetes Engine Standard, Amazon Elastic Kubernetes Service, VMWare Tanzu Kubernetes Grid, Rancher Kubernetes Engine, Canonical Kubernetes Distribution
+- **Supported distributions**: AKS hybrid clusters provisioned from Azure, Cluster API Azure, Azure Red Hat OpenShift, Red Hat OpenShift (version 4.6 or newer), Google Kubernetes Engine Standard, Amazon Elastic Kubernetes Service, VMware Tanzu Kubernetes Grid, Rancher Kubernetes Engine, Canonical Kubernetes Distribution
Microsoft Defender for Containers is the cloud-native solution that is used to secure your containers so you can improve, monitor, and maintain the security of your clusters, containers, and their applications. It gathers information related to security like audit log data from the Kubernetes cluster, and provides recommendations and threat alerts based on gathered data.
For more information, see [Enable Microsoft Defender for Containers](../../defen
## Azure Arc-enabled Open Service Mesh -- **Supported distributions**: AKS, AKS on Azure Stack HCI, AKS hybrid clusters provisioned from Azure, Cluster API Azure, Google Kubernetes Engine, Canonical Kubernetes Distribution, Rancher Kubernetes Engine, OpenShift Kubernetes Distribution, Amazon Elastic Kubernetes Service, VMWare Tanzu Kubernetes Grid
+- **Supported distributions**: AKS, AKS on Azure Stack HCI, AKS hybrid clusters provisioned from Azure, Cluster API Azure, Google Kubernetes Engine, Canonical Kubernetes Distribution, Rancher Kubernetes Engine, OpenShift Kubernetes Distribution, Amazon Elastic Kubernetes Service, VMware Tanzu Kubernetes Grid
[Open Service Mesh (OSM)](https://docs.openservicemesh.io/) is a lightweight, extensible, Cloud Native service mesh that allows users to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments.
With the integration between Azure API Management and Azure Arc on Kubernetes, y
For more information, see [Deploy an Azure API Management gateway on Azure Arc (preview)](../../api-management/how-to-deploy-self-hosted-gateway-azure-arc.md). > [!IMPORTANT]
-> API Management self-hosted gateway on Azure Arc is currently in public preview. During preview, the API Management gateway extension is available in the following regions: West Europe, East US.
+> API Management self-hosted gateway on Azure Arc is currently in public preview.
> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ## Azure Arc-enabled Machine Learning
azure-arc System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/system-requirements.md
Title: Azure Arc resource bridge (preview) system requirements description: Learn about system requirements for Azure Arc resource bridge (preview). Previously updated : 03/23/2023 Last updated : 06/15/2023 # Azure Arc resource bridge (preview) system requirements
The control plane IP has the following requirements:
- Open communication with the management machine. - The control plane needs to be able to resolve the management machine and vice versa.-
+- Static IP address assigned; the IP should be outside the DHCP range but still available on the network segment. This IP address can't be assigned to any other machine on the network. If you're using Azure Kubernetes Service on Azure Stack HCI (AKS hybrid) and installing resource bridge, then the control plane IP for the resource bridge can't be used by the AKS hybrid cluster. For specific instructions on deploying Arc resource bridge with AKS on Azure Stack HCI, see [AKS on HCI (AKS hybrid) - Arc resource bridge deployment](/azure/aks/hybrid/deploy-arc-resource-bridge-windows-server).
- If using a proxy, the proxy server has to also be reachable from IPs within the IP prefix, including the reserved appliance VM IP.
azure-cache-for-redis Cache How To Premium Clustering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-clustering.md
The Redis clustering protocol requires each client to connect to each shard dire
### How do I connect to my cache when clustering is enabled?
-You can connect to your cache using the same [endpoints](cache-configure.md#properties), [ports](cache-configure.md#properties), and [keys](cache-configure.md#access-keys) that you use when connecting to a cache that doesn't have clustering enabled. Redis manages the clustering on the backend so you don't have to manage it from your client.
+You can connect to your cache using the same [endpoints](cache-configure.md#properties), [ports](cache-configure.md#properties), and [keys](cache-configure.md#access-keys) that you use when connecting to a cache that doesn't have clustering enabled. Redis manages the clustering on the backend so you don't have to manage it from your client as long as the client library supports Redis clustering.
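For example, a minimal connectivity check with `redis-cli` uses the same host name, SSL port, and access key as a non-clustered cache; the cache name and key are placeholders, and the sketch assumes a `redis-cli` build with TLS and cluster support:

```bash
# -c enables cluster mode so shard redirects are followed automatically; 6380 is the SSL port.
redis-cli -c --tls -h <cache-name>.redis.cache.windows.net -p 6380 -a "<access-key>" PING
```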
### Can I directly connect to the individual shards of my cache?
azure-cache-for-redis Cache Retired Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-retired-features.md
Cloud Service version 4 caches can't be upgraded to version 6 until they're migr
For more information, see [Caches with a dependency on Cloud Services (classic)](./cache-faq.yml).
-Starting on April 30, 2023, Cloud Service caches receive only critical security updates and critical bug fixes. Cloud Service caches won't support any new features released after April 30, 2023. We highly recommend migrating your caches to Azure Virtual Machine Scale Set.
+Cloud Service caches will continue to function beyond June 30, 2023. However, starting on April 30, 2023, Cloud Service caches receive only critical security updates and bug fixes with limited support. Cloud Service caches won't support any new features released after April 30, 2023. We highly recommend migrating your caches to Azure Virtual Machine Scale Set as soon as possible.
#### Do I need to update my application to be able to use Redis version 6?
No, the upgrade can't be rolled back.
## Next steps <!-- Add a context sentence for the following links --> - [What's new](cache-whats-new.md)-- [Azure Cache for Redis FAQ](cache-faq.yml)
+- [Azure Cache for Redis FAQ](cache-faq.yml)
azure-functions Functions Bindings Event Grid Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-output.md
The type of the output parameter used with an Event Grid output binding depends
# [In-process](#tab/in-process)
-The following example shows a C# function that binds to a `CloudEvent` using version 3.x of the extension, which is in preview:
+The following example shows a C# function that publishes a `CloudEvent` using version 3.x of the extension:
```cs using System.Threading.Tasks;
namespace Azure.Extensions.WebJobs.Sample
} ```
-The following example shows a C# function that binds to an `EventGridEvent` using version 3.x of the extension, which is in preview:
+The following example shows a C# function that publishes an `EventGridEvent` using version 3.x of the extension:
```cs using System.Threading.Tasks;
namespace Azure.Extensions.WebJobs.Sample
} ```
-The following example shows a C# function that writes an [Microsoft.Azure.EventGrid.Models.EventGridEvent][EventGridEvent] message to an Event Grid custom topic, using the method return value as the output:
+The following example shows a C# function that publishes an [EventGridEvent][EventGridEvent] message to an Event Grid custom topic, using the method return value as the output:
```csharp [FunctionName("EventGridOutput")]
public static EventGridEvent Run([TimerTrigger("0 */5 * * * *")] TimerInfo myTim
} ```
-The following example shows how to use the `IAsyncCollector` interface to send a batch of messages.
+You can also use an `out` parameter to accomplish the same result:
+```csharp
+[FunctionName("EventGridOutput")]
+public static void Run(
+    [TimerTrigger("0 */5 * * * *")] TimerInfo myTimer,
+    [EventGrid(TopicEndpointUri = "MyEventGridTopicUriSetting", TopicKeySetting = "MyEventGridTopicKeySetting")] out EventGridEvent eventGridEvent,
+    ILogger log)
+{
+    eventGridEvent = new EventGridEvent("subject-name", "event-type", "1.0", "event-data");
+}
+```
+
+The following example shows how to use the `IAsyncCollector` interface to send a batch of `EventGridEvent` messages.
```csharp [FunctionName("EventGridAsyncOutput")]
public static async Task Run(
} ```
+Starting in version 3.3.0, you can use Azure Active Directory to authenticate the output binding:
+
+```csharp
+[FunctionName("EventGridAsyncOutput")]
+public static async Task Run(
+ [TimerTrigger("0 */5 * * * *")] TimerInfo myTimer,
+ [EventGrid(Connection = "MyEventGridConnection")] IAsyncCollector<CloudEvent> outputEvents,
+ ILogger log)
+{
+ for (var i = 0; i < 3; i++)
+ {
+ var myEvent = new CloudEvent("message-id-" + i, "subject-name", "event-data");
+ await outputEvents.AddAsync(myEvent);
+ }
+}
+```
+
+When using the Connection property, the `topicEndpointUri` must be specified as a child of the connection setting, and the `TopicEndpointUri` and `TopicKeySetting` properties should not be used. For local development, use the local.settings.json file to store the connection information:
+```json
+{
+ "Values": {
+ "myConnection__topicEndpointUri": "{topicEndpointUri}"
+ }
+}
+```
+When deployed, use the application settings to store this information.
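For example, a hedged sketch of creating that setting with the Azure CLI; the function app name, resource group, and endpoint value are placeholders, and the setting name mirrors the `Connection` value used in the binding above:

```bash
# Set the <connection-name>__topicEndpointUri application setting on the deployed function app.
az functionapp config appsettings set \
  --name <function-app-name> \
  --resource-group <resource-group-name> \
  --settings "MyEventGridConnection__topicEndpointUri=<topic-endpoint-uri>"
```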
++ # [Isolated process](#tab/isolated-process) The following example shows how the custom type is used in both the trigger and an Event Grid output binding:
public class Function {
} ```
-You can also use a POJO class to send EventGrid messages.
+You can also use a POJO class to send Event Grid messages.
```java public class Function {
Functions version 1.x doesn't support isolated worker process.
C# script functions support the following types: + [Azure.Messaging.CloudEvent][CloudEvent]
-+ [Azure.Messaging.EventGrid][EventGridEvent2]
++ [Azure.Messaging.EventGrid.EventGridEvent][EventGridEvent] + [Newtonsoft.Json.Linq.JObject][JObject] + [System.String][String]
There are two options for outputting an Event Grid message from a function:
* [Dispatch an Event Grid event](./functions-bindings-event-grid-trigger.md)
-[EventGridEvent]: /dotnet/api/microsoft.azure.eventgrid.models.eventgridevent
+[EventGridEvent]: /dotnet/api/azure.messaging.eventgrid.eventgridevent
[CloudEvent]: /dotnet/api/azure.messaging.cloudevent
azure-maps Creator Qgis Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-qgis-plugin.md
Title: View and edit data with the Azure Maps QGIS plugin
+ Title: Work with datasets using the QGIS plugin
description: How to view and edit indoor map data using the Azure Maps QGIS plugin
azure-monitor Azure Monitor Agent Troubleshoot Linux Vm Rsyslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md
Title: Syslog troubleshooting on AMA Linux Agent
-description: Guidance for troubleshooting rsyslog issues on Linux virtual machines, scale sets with Azure Monitor agent and Data Collection Rules.
+ Title: Syslog troubleshooting on Azure Monitor Agent for Linux
+description: Guidance for troubleshooting rsyslog issues on Linux virtual machines, scale sets with Azure Monitor Agent, and data collection rules.
Last updated 5/31/2023
-# Syslog troubleshooting guide for Azure Monitor Linux Agent
+# Syslog troubleshooting guide for Azure Monitor Agent for Linux
-Overview of Azure Monitor Linux Agent syslog collection and supported RFC standards:
+Overview of Azure Monitor Agent for Linux Syslog collection and supported RFC standards:
-- AMA installs an output configuration for the system syslog daemon during the installation process. The configuration file specifies the way events flow between the syslog daemon and AMA.
+- Azure Monitor Agent installs an output configuration for the system Syslog daemon during the installation process. The configuration file specifies the way events flow between the Syslog daemon and Azure Monitor Agent.
- For `rsyslog` (most Linux distributions), the configuration file is `/etc/rsyslog.d/10-azuremonitoragent.conf`. For `syslog-ng`, the configuration file is `/etc/syslog-ng/conf.d/azuremonitoragent.conf`.-- AMA listens to a UNIX domain socket to receive events from `rsyslog` / `syslog-ng`. The socket path for this communication is `/run/azuremonitoragent/default_syslog.socket`-- The syslog daemon will use queues when AMA ingestion is delayed, or when AMA isn't reachable.-- AMA ingests syslog events via the aforementioned socket and filters them based on facility / severity combination from DCR configuration in `/etc/opt/microsoft/azuremonitoragent/config-cache/configchunks/`. Any `facility` / `severity` not present in the DCR will be dropped.-- AMA attempts to parse events in accordance with **RFC3164** and **RFC5424**. Additionally, it knows how to parse the message formats listed [here](./azure-monitor-agent-overview.md#data-sources-and-destinations).-- AMA identifies the destination endpoint for Syslog events from the DCR configuration and attempts to upload the events.
+- Azure Monitor Agent listens to a UNIX domain socket to receive events from `rsyslog` / `syslog-ng`. The socket path for this communication is `/run/azuremonitoragent/default_syslog.socket`.
+- The Syslog daemon uses queues when Azure Monitor Agent ingestion is delayed or when Azure Monitor Agent isn't reachable.
+- Azure Monitor Agent ingests Syslog events via the previously mentioned socket and filters them based on facility or severity combination from data collection rule (DCR) configuration in `/etc/opt/microsoft/azuremonitoragent/config-cache/configchunks/`. Any `facility` or `severity` not present in the DCR is dropped.
+- Azure Monitor Agent attempts to parse events in accordance with **RFC3164** and **RFC5424**. It also knows how to parse the message formats listed on [this website](./azure-monitor-agent-overview.md#data-sources-and-destinations).
+- Azure Monitor Agent identifies the destination endpoint for Syslog events from the DCR configuration and attempts to upload the events.
> [!NOTE]
- > AMA uses local persistency by default, all events received from `rsyslog` / `syslog-ng` are queued in `/var/opt/microsoft/azuremonitoragent/events` if they fail to be uploaded.
-
+ > Azure Monitor Agent uses local persistency by default. All events received from `rsyslog` or `syslog-ng` are queued in `/var/opt/microsoft/azuremonitoragent/events` if they fail to be uploaded.
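To confirm this event flow on a specific machine, the paths above can be inspected directly. The following is a minimal verification sketch; the `logger` test message and the `local0.info` facility and severity are illustrative and only reach the workspace if the DCR collects them.

```bash
# Confirm the rsyslog forwarding drop-in and the agent's UNIX domain socket are present.
ls -l /etc/rsyslog.d/10-azuremonitoragent.conf /run/azuremonitoragent/default_syslog.socket

# Inspect which Syslog facilities and severities the DCR delivered to the agent.
sudo grep -ril "syslog" /etc/opt/microsoft/azuremonitoragent/config-cache/configchunks/

# Emit a test event; it's forwarded to the agent only if the facility/severity is enabled in the DCR.
logger -p local0.info "Azure Monitor Agent Syslog test message"
```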
+ ## Issues
-### Rsyslog data not uploaded due to full disk space issue on Azure Monitor Linux Agent
+You might encounter the following issues.
+
+### Rsyslog data isn't uploaded because of a full disk space issue on Azure Monitor Agent for Linux
+
+The next sections describe the issue.
#### Symptom
-**Syslog data is not uploading**: When inspecting the error logs at `/var/opt/microsoft/azuremonitoragent/log/mdsd.err`, you'll see entries about *Error while inserting item to Local persistent store…No space left on device* similar to the following snippet:
+**Syslog data is not uploading**: When you inspect the error logs at `/var/opt/microsoft/azuremonitoragent/log/mdsd.err`, you see entries about *Error while inserting item to Local persistent store…No space left on device* similar to the following snippet:
``` 2021-11-23T18:15:10.9712760Z: Error while inserting item to Local persistent store syslog.error: IO error: No space left on device: While appending to file: /var/opt/microsoft/azuremonitoragent/events/syslog.error/000555.log: No space left on device ``` #### Cause
-Linux AMA buffers events to `/var/opt/microsoft/azuremonitoragent/events` prior to ingestion. On a default Linux AMA install, this directory will take ~650MB of disk space at idle. The size on disk will increase when under sustained logging load. It will get cleaned up about every 60 seconds and will reduce back to ~650 MB when the load returns to idle.
+Azure Monitor Agent for Linux buffers events to `/var/opt/microsoft/azuremonitoragent/events` prior to ingestion. On a default Azure Monitor Agent for Linux installation, this directory takes ~650 MB of disk space at idle. The size on disk increases when it's under sustained logging load. It gets cleaned up about every 60 seconds and reduces back to ~650 MB when the load returns to idle.
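To see how much space this buffer is currently using, check the directory size directly (the path is the one mentioned above):

```bash
# Report the total on-disk size of the Azure Monitor Agent event buffer.
sudo du -sh /var/opt/microsoft/azuremonitoragent/events
```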
-#### Confirming the issue of full disk
-The `df` command shows almost no space available on `/dev/sda1`, as shown below:
+#### Confirm the issue of a full disk
+The `df` command shows almost no space available on `/dev/sda1`, as shown here:
```bash df -h
tmpfs 63G 0 63G 0% /sys/fs/cgroup
tmpfs 13G 0 13G 0% /run/user/1000 ```
-The `du` command can be used to inspect the disk to determine which files are causing the disk to be full. For example:
+You can use the `du` command to inspect the disk to determine which files are causing the disk to be full. For example:
```bash cd /var/log
The `du` command can be used to inspect the disk to determine which files are ca
18G syslog.1 ```
-In some cases, `du` may not report any significantly large files/directories. It may be possible that a [file marked as (deleted) is taking up the space](https://unix.stackexchange.com/questions/182077/best-way-to-free-disk-space-from-deleted-files-that-are-held-open). This issue can happen when some other process has attempted to delete a file, but there remains a process with the file still open. The `lsof` command can be used to check for such files. In the example below, we see that `/var/log/syslog` is marked as deleted, but is taking up 3.6 GB of disk space. It hasn't been deleted because a process with PID 1484 still has the file open.
+In some cases, `du` might not report any large files or directories. It might be possible that a [file marked as (deleted) is taking up the space](https://unix.stackexchange.com/questions/182077/best-way-to-free-disk-space-from-deleted-files-that-are-held-open). This issue can happen when some other process has attempted to delete a file, but a process with the file is still open. You can use the `lsof` command to check for such files. In the following example, we see that `/var/log/syslog` is marked as deleted but it takes up 3.6 GB of disk space. It hasn't been deleted because a process with PID 1484 still has the file open.
```bash sudo lsof +L1
rsyslogd 1484 syslog 14w REG 8,1 3601566564 0 35280 /var/log/syslog (
``` ### Rsyslog default configuration logs all facilities to /var/log/
-On some popular distros (for example Ubuntu 18.04 LTS), rsyslog ships with a default configuration file (`/etc/rsyslog.d/50-default.conf`) which logs events from nearly all facilities to disk at `/var/log/syslog`. Note that for RedHat/CentOS family syslog events will be stored under `/var/log/` but in a different file: `/var/log/messages`.
+On some popular distros (for example, Ubuntu 18.04 LTS), rsyslog ships with a default configuration file (`/etc/rsyslog.d/50-default.conf`), which logs events from nearly all facilities to disk at `/var/log/syslog`. RedHat/CentOS family Syslog events are stored under `/var/log/` but in a different file: `/var/log/messages`.
-AMA doesn't rely on syslog events being logged to `/var/log/`. Instead, it configures rsyslog service to forward events over a socket directly to the azuremonitoragent service process (mdsd).
+Azure Monitor Agent doesn't rely on Syslog events being logged to `/var/log/`. Instead, it configures the rsyslog service to forward events over a socket directly to the `azuremonitoragent` service process (mdsd).
#### Fix: Remove high-volume facilities from /etc/rsyslog.d/50-default.conf
-If you're sending a high log volume through rsyslog and your system is setup to log events for these facilities, consider modifying the default rsyslog config to avoid logging and storing them under `/var/log/`. The events for this facility would still be forwarded to AMA because rsyslog is using a different configuration for forwarding placed in `/etc/rsyslog.d/10-azuremonitoragent.conf`.
+If you're sending a high log volume through rsyslog and your system is set up to log events for these facilities, consider modifying the default rsyslog config to avoid logging and storing them under `/var/log/`. The events for this facility would still be forwarded to Azure Monitor Agent because rsyslog uses a different configuration for forwarding placed in `/etc/rsyslog.d/10-azuremonitoragent.conf`.
+
+1. For example, to remove `local4` events from being logged at `/var/log/syslog` or `/var/log/messages`, change this line in `/etc/rsyslog.d/50-default.conf` from this snippet:
-1. For example, to remove local4 events from being logged at `/var/log/syslog` or `/var/log/messages`, change this line in `/etc/rsyslog.d/50-default.conf` from this:
```config *.*;auth,authpriv.none -/var/log/syslog ```
- To this (add local4.none;):
+ To this snippet (add `local4.none;`):
```config *.*;local4.none;auth,authpriv.none -/var/log/syslog ```
-2. `sudo systemctl restart rsyslog`
-### Azure Monitor Linux Agent Event Buffer is Filling Disk
-If you observe the `/var/opt/microsoft/azuremonitor/events` directory growing unbounded (10 GB or higher) and not reducing in size, [file a ticket](#file-a-ticket) with **Summary** as 'AMA Event Buffer is filling disk' and **Problem type** as 'I need help configuring data collection from a VM'.
+1. `sudo systemctl restart rsyslog`
+
+### Azure Monitor Agent for Linux event buffer is filling a disk
+
+If you observe the `/var/opt/microsoft/azuremonitor/events` directory growing unbounded (10 GB or higher) and not reducing in size, [file a ticket](#file-a-ticket). For **Summary**, enter **Azure Monitor Agent Event Buffer is filling disk**. For **Problem type**, enter **I need help configuring data collection from a VM**.
[!INCLUDE [azure-monitor-agent-file-a-ticket](../../../includes/azure-monitor-agent/azure-monitor-agent-file-a-ticket.md)]
azure-monitor Data Collection Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-syslog.md
Title: Collect syslog with Azure Monitor Agent
-description: Configure collection of syslog logs using a data collection rule on virtual machines with the Azure Monitor Agent.
+ Title: Collect Syslog events with Azure Monitor Agent
+description: Configure collection of Syslog events by using a data collection rule on virtual machines with Azure Monitor Agent.
Last updated 05/10/2023
-# Collect syslog with Azure Monitor Agent overview
+# Collect Syslog events with Azure Monitor Agent
-Syslog is an event logging protocol that's common to Linux. You can use the Syslog daemon built into Linux devices and appliances to collect local events of the types you specify, and have it send those events to Log Analytics Workspace. Applications send messages that might be stored on the local machine or delivered to a Syslog collector. When the Azure Monitor agent for Linux is installed, it configures the local Syslog daemon to forward messages to the agent when syslog collection is enabled in [data collection rule (DCR)](../essentials/data-collection-rule-overview.md). The Azure Monitor Agent then sends the messages to Azure Monitor/Log Analytics workspace where a corresponding syslog record is created in [Syslog table](https://learn.microsoft.com/azure/azure-monitor/reference/tables/syslog).
+Syslog is an event logging protocol that's common to Linux. You can use the Syslog daemon that's built in to Linux devices and appliances to collect local events of the types you specify. Then you can have it send those events to a Log Analytics workspace. Applications send messages that might be stored on the local machine or delivered to a Syslog collector.
+
+When the Azure Monitor agent for Linux is installed, it configures the local Syslog daemon to forward messages to the agent when Syslog collection is enabled in [data collection rules (DCRs)](../essentials/data-collection-rule-overview.md). Azure Monitor Agent then sends the messages to an Azure Monitor or Log Analytics workspace where a corresponding Syslog record is created in a [Syslog table](/azure/azure-monitor/reference/tables/syslog).
![Diagram that shows Syslog collection.](media/data-sources-syslog/overview.png)
The following facilities are supported with the Syslog collector:
* uucp * local0-local7
-For some device types that don't allow local installation of the Azure Monitor agent, the agent can be installed instead on a dedicated Linux-based log forwarder. The originating device must be configured to send Syslog events to the Syslog daemon on this forwarder instead of the local daemon. Please see [Sentinel tutorial](../../sentinel/forward-syslog-monitor-agent.md) for more information.
+For some device types that don't allow local installation of Azure Monitor Agent, the agent can be installed instead on a dedicated Linux-based log forwarder. The originating device must be configured to send Syslog events to the Syslog daemon on this forwarder instead of the local daemon. For more information, see the [Sentinel tutorial](../../sentinel/forward-syslog-monitor-agent.md).
## Configure Syslog
-The Azure Monitor agent for Linux will only collect events with the facilities and severities that are specified in its configuration. You can configure Syslog through the Azure portal or by managing configuration files on your Linux agents.
+The Azure Monitor Agent for Linux only collects events with the facilities and severities that are specified in its configuration. You can configure Syslog through the Azure portal or by managing configuration files on your Linux agents.
### Configure Syslog in the Azure portal
-Configure Syslog from the Data Collection Rules menu of the Azure Monitor. This configuration is delivered to the configuration file on each Linux agent.
-* Select Add data source.
-* For Data source type, select Linux syslog
+Configure Syslog from the **Data Collection Rules** menu of Azure Monitor. This configuration is delivered to the configuration file on each Linux agent.
+
+1. Select **Add data source**.
+1. For **Data source type**, select **Linux syslog**.
-You can collect syslog events with different log level for each facility. By default, all syslog facility types will be collected. If you do not want to collect for example events of `auth` type, select `none` in the `Minimum log level` list box for `auth` facility and save the changes. If you need to change default log level for syslog events and collect only events with log level starting “NOTICE” or higher priority, select “LOG_NOTICE” in “Minimum log level” list box.
+You can collect Syslog events with a different log level for each facility. By default, all Syslog facility types are collected. If you don't want to collect, for example, events of `auth` type, select **NONE** in the **Minimum log level** list box for `auth` facility and save the changes. If you need to change the default log level for Syslog events and collect only events with a log level starting at **NOTICE** or a higher priority, select **LOG_NOTICE** in the **Minimum log level** list box.
By default, all configuration changes are automatically pushed to all agents that are configured in the DCR. ### Create a data collection rule
-Create a *data collection rule* in the same region as your Log Analytics workspace.
-A data collection rule is an Azure resource that allows you to define the way data should be handled as it's ingested into the workspace.
+Create a *data collection rule* in the same region as your Log Analytics workspace. A DCR is an Azure resource that allows you to define the way data should be handled as it's ingested into the workspace.
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Search for and open **Monitor**. 1. Under **Settings**, select **Data Collection Rules**. 1. Select **Create**.
- :::image type="content" source="../../sentinel/media/forward-syslog-monitor-agent/create-data-collection-rule.png" alt-text="Screenshot of the data collections rules pane with the create option selected.":::
-
+ :::image type="content" source="../../sentinel/media/forward-syslog-monitor-agent/create-data-collection-rule.png" alt-text="Screenshot that shows the Data Collection Rules pane with the Create option selected.":::
#### Add resources+ 1. Select **Add resources**.
-1. Use the filters to find the virtual machine that you'll use to collect logs.
- :::image type="content" source="../../sentinel/media/forward-syslog-monitor-agent/create-rule-scope.png" alt-text="Screenshot of the page to select the scope for the data collection rule. ":::
+1. Use the filters to find the virtual machine you want to use to collect logs.
+
+ :::image type="content" source="../../sentinel/media/forward-syslog-monitor-agent/create-rule-scope.png" alt-text="Screenshot that shows the page to select the scope for the data collection rule. ":::
1. Select the virtual machine. 1. Select **Apply**. 1. Select **Next: Collect and deliver**.
-#### Add data source
+#### Add a data source
1. Select **Add data source**. 1. For **Data source type**, select **Linux syslog**.
- :::image type="content" source="../../sentinel/media/forward-syslog-monitor-agent/create-rule-data-source.png" alt-text="Screenshot of page to select data source type and minimum log level.":::
+
+ :::image type="content" source="../../sentinel/media/forward-syslog-monitor-agent/create-rule-data-source.png" alt-text="Screenshot that shows the page to select the data source type and minimum log level.":::
1. For **Minimum log level**, leave the default values **LOG_DEBUG**. 1. Select **Next: Destination**.
-#### Add destination
+#### Add a destination
1. Select **Add destination**.
- :::image type="content" source="../../sentinel/media/forward-syslog-monitor-agent/create-rule-add-destination.png" alt-text="Screenshot of the destination tab with the add destination option selected.":::
+ :::image type="content" source="../../sentinel/media/forward-syslog-monitor-agent/create-rule-add-destination.png" alt-text="Screenshot that shows the Destination tab with the Add destination option selected.":::
1. Enter the following values: |Field |Value |
A data collection rule is an Azure resource that allows you to define the way d
1. Select **Add data source**. 1. Select **Next: Review + create**.
-### Create rule
+### Create a rule
1. Select **Create**.
-1. Wait 20 minutes before moving on to the next section.
+1. Wait 20 minutes before you move on to the next section.
-If your VM doesn't have the Azure Monitor agent installed, the data collection rule deployment triggers the installation of the agent on the VM.
+If your VM doesn't have Azure Monitor Agent installed, the DCR deployment triggers the installation of the agent on the VM.
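As an alternative to the portal, the same kind of DCR can be created and associated with a VM from the command line. This is a hedged sketch: it assumes the Azure CLI `monitor-control-service` extension is available and that a rule definition (for example, one exported from an existing rule) is saved locally as `syslog-dcr.json`; every name and ID below is a placeholder.

```bash
# Create the data collection rule from a local JSON definition.
az monitor data-collection rule create \
  --resource-group <resource-group-name> \
  --location <region> \
  --name <dcr-name> \
  --rule-file syslog-dcr.json

# Associate the rule with the virtual machine so the agent starts collecting Syslog events.
az monitor data-collection rule association create \
  --name <association-name> \
  --rule-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Insights/dataCollectionRules/<dcr-name>" \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Compute/virtualMachines/<vm-name>"
```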
-## Configure Syslog on Linux Agent
-When the Azure Monitoring Agent is installed on Linux machine it installs a default Syslog configuration file that defines the facility and severity of the messages that are collected if syslog is enabled in DCR. The configuration file is different depending on the Syslog daemon that the client has installed.
+## Configure Syslog on the Linux agent
+When Azure Monitor Agent is installed on a Linux machine, it installs a default Syslog configuration file that defines the facility and severity of the messages that are collected if Syslog is enabled in a DCR. The configuration file is different depending on the Syslog daemon that the client has installed.
### Rsyslog
-On many Linux distributions, the rsyslogd daemon is responsible for consuming, storing, and routing log messages sent using the Linux syslog API. Azure Monitor agent uses the unix domain socket output module (omuxsock) in rsyslog to forward log messages to the Azure Monitor Agent. The AMA installation includes default config files that get placed under the following directory:
-`/etc/opt/microsoft/azuremonitoragent/syslog/rsyslogconf/05-azuremonitoragent-loadomuxsock.conf`
-`/etc/opt/microsoft/azuremonitoragent/syslog/rsyslogconf/10-azuremonitoragent.conf`
+On many Linux distributions, the rsyslogd daemon is responsible for consuming, storing, and routing log messages sent by using the Linux Syslog API. Azure Monitor Agent uses the UNIX domain socket output module (`omuxsock`) in rsyslog to forward log messages to Azure Monitor Agent.
+
+The Azure Monitor Agent installation includes default config files that get placed under the following directory: `/etc/opt/microsoft/azuremonitoragent/syslog/rsyslogconf/`
-When syslog is added to data collection rule, these configuration files will be installed under `etc/rsyslog.d` system directory and rsyslog will be automatically restarted for the changes to take effect. These files are used by rsyslog to load the output module and forward the events to Azure Monitoring agent daemon using defined rules. The builtin omuxsock module cannot be loaded more than once. Therefore, the configurations for loading of the module and forwarding of the events with corresponding forwarding format template are split in two different files. Its default contents are shown in the following example. This example collects Syslog messages sent from the local agent for all facilities with all log levels.
+When Syslog is added to a DCR, these configuration files are installed under the `etc/rsyslog.d` system directory and rsyslog is automatically restarted for the changes to take effect. These files are used by rsyslog to load the output module and forward the events to the Azure Monitor Agent daemon by using defined rules.
+
+The built-in `omuxsock` module can't be loaded more than once. For this reason, the configurations for loading of the module and forwarding of the events with corresponding forwarding format template are split in two different files. Its default contents are shown in the following example. This example collects Syslog messages sent from the local agent for all facilities with all log levels.
``` $ cat /etc/rsyslog.d/10-azuremonitoragent.conf # Azure Monitor Agent configuration: forward logs to azuremonitoragent
$ cat /etc/rsyslog.d/05-azuremonitoragent-loadomuxsock.conf
# Azure Monitor Agent configuration: load rsyslog forwarding module. $ModLoad omuxsock ```
-Note that on some legacy systems such as CentOS 7.3 we have seen rsyslog log formatting issues when using traditional forwarding format to send syslog events to Azure Monitor Agent and for these systems, Azure Monitor Agent is automatically placing legacy forwarder template instead:
+
+On some legacy systems, such as CentOS 7.3, we've seen rsyslog log formatting issues when a traditional forwarding format is used to send Syslog events to Azure Monitor Agent. For these systems, Azure Monitor Agent automatically places a legacy forwarder template instead:
+ `template(name="AMA_RSYSLOG_TraditionalForwardFormat" type="string" string="%TIMESTAMP% %HOSTNAME% %syslogtag%%msg:::sp-if-no-1st-sp%%msg%\n")`
+### Syslog-ng
-### Syslog-ng
+The configuration file for syslog-ng is installed at `/etc/opt/microsoft/azuremonitoragent/syslog/syslog-ngconf/azuremonitoragent.conf`. When Syslog collection is added to a DCR, this configuration file is placed under the `/etc/syslog-ng/conf.d/azuremonitoragent.conf` system directory and syslog-ng is automatically restarted for the changes to take effect.
-The configuration file for syslog-ng is installed at `/etc/opt/microsoft/azuremonitoragent/syslog/syslog-ngconf/azuremonitoragent.conf`. When Syslog collection is added to data collection rule, this configuration file will be placed under `/etc/syslog-ng/conf.d/azuremonitoragent.conf` system directory and syslog-ng will be automatically restarted for the changes to take effect. Its default contents are shown in this example. This example collects Syslog messages sent from the local agent for all facilities and all severities.
+The default contents are shown in the following example. This example collects Syslog messages sent from the local agent for all facilities and all severities.
``` $ cat /etc/syslog-ng/conf.d/azuremonitoragent.conf # Azure MDSD configuration: syslog forwarding config for mdsd agent options {};
log { source(s_src); # will be automatically parsed from /etc/syslog-ng/syslog-n
destination(d_azure_mdsd); }; ```
-Note* Azure Monitor supports collection of messages sent by rsyslog or syslog-ng, where rsyslog is the default daemon. The default Syslog daemon on version 5 of Red Hat Enterprise Linux, CentOS, and Oracle Linux version (sysklog) isn't supported for Syslog event collection. To collect Syslog data from this version of these distributions, the rsyslog daemon should be installed and configured to replace sysklog.
-
-Note*
-If you edit the Syslog configuration, you must restart the Syslog daemon for the changes to take effect.
-
+>[!Note]
+> Azure Monitor supports collection of messages sent by rsyslog or syslog-ng, where rsyslog is the default daemon. The default Syslog daemon on version 5 of Red Hat Enterprise Linux, CentOS, and Oracle Linux version (sysklog) isn't supported for Syslog event collection. To collect Syslog data from this version of these distributions, the rsyslog daemon should be installed and configured to replace sysklog.
+If you edit the Syslog configuration, you must restart the Syslog daemon for the changes to take effect.
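For example, which restart command applies depends on the daemon in use; `rsyslog` and `syslog-ng` are the standard systemd unit names on most distributions:

```bash
# Restart the Syslog daemon after editing its configuration.
sudo systemctl restart rsyslog      # rsyslog-based systems
sudo systemctl restart syslog-ng    # syslog-ng-based systems
```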
## Prerequisites
-You will need:
+You need:
-- Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#azure-rbac).-- [Data collection endpoint](../essentials/data-collection-endpoint-overview.md#create-a-data-collection-endpoint).-- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
+- A Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#azure-rbac).
+- A [data collection endpoint](../essentials/data-collection-endpoint-overview.md#create-a-data-collection-endpoint).
+- [Permissions to create DCR objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
## Syslog record properties
The following table provides different examples of log queries that retrieve Sys
## Next steps
-Learn more about:
+Learn more about:
-- [Azure Monitor Agent](azure-monitor-agent-overview.md).-- [Data collection rules](../essentials/data-collection-rule-overview.md).-- [Best practices for cost management in Azure Monitor](../best-practices-cost.md).
+- [Azure Monitor Agent](azure-monitor-agent-overview.md)
+- [Data collection rules](../essentials/data-collection-rule-overview.md)
+- [Best practices for cost management in Azure Monitor](../best-practices-cost.md)
azure-monitor Om Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/om-agents.md
Title: Connect Operations Manager to Azure Monitor | Microsoft Docs description: To maintain your investment in System Center Operations Manager and use extended capabilities with Log Analytics, you can integrate Operations Manager with your workspace. Previously updated : 01/30/2023 Last updated : 06/15/2023
To maintain your existing investment in [System Center Operations Manager](/syst
Integrating with System Center Operations Manager adds value to your service operations strategy by using the speed and efficiency of Azure Monitor in collecting, storing, and analyzing log data from Operations Manager. Azure Monitor log queries help correlate and work toward identifying the faults of problems and surfacing recurrences in support of your existing problem management process. The flexibility of the query engine to examine performance, event, and alert data with rich dashboards and reporting capabilities to expose this data in meaningful ways demonstrates the strength Azure Monitor brings in complementing Operations Manager. The agents reporting to the Operations Manager management group collect data from your servers based on the [Log Analytics data sources](../agents/agent-data-sources.md) and solutions you've enabled in your workspace. Depending on the solutions enabled:+
+>[!Note]
+>Newer integrations and reconfiguration of the existing integration between Operations Manager management server and Log Analytics will no longer work as this connection will be retired soon.
+ - The data is sent directly from an Operations Manager management server to the service, or - The data is sent directly from the agent to a Log Analytics workspace because of the volume of data collected on the agent-managed system.
To ensure the security of data in transit to Azure Monitor, configure the agent
Perform the following series of steps to configure your Operations Manager management group to connect to one of your Log Analytics workspaces. > [!NOTE]
-> If Log Analytics data stops coming in from a specific agent or management server, reset the Winsock Catalog by using `netsh winsock reset`. Then reboot the server. Resetting the Winsock Catalog allows network connections that were broken to be reestablished.
+> - If Log Analytics data stops coming in from a specific agent or management server, reset the Winsock Catalog by using `netsh winsock reset`. Then reboot the server. Resetting the Winsock Catalog allows network connections that were broken to be reestablished.
+> - Newer integrations and reconfiguration of the existing integration between Operations Manager management server and Log Analytics will no longer work as this connection will be retired soon. However, you can still connect your monitored System Center Operations Manager agents to Log Analytics using the following methods based on your scenario.
+> 1. Use a Log Analytics Gateway and point the agent to that server. Learn more about [Connect computers without internet access by using the Log Analytics gateway in Azure Monitor](/azure/azure-monitor/agents/gateway).
+> 2. Use Azure Monitor Agent (AMA) side by side to connect the agent to Log Analytics. Learn more about [Migrate to Azure Monitor Agent from Log Analytics agent](/azure/azure-monitor/agents/azure-monitor-agent-migration).
+> 3. Configure a direct connection to Log Analytics in the Microsoft Monitoring Agent. (Dual-Home with System Center Operations Manager).
During initial registration of your Operations Manager management group with a Log Analytics workspace, the option to specify the proxy configuration for the management group isn't available in the Operations console. The management group has to be successfully registered with the service before this option is available. To work around this situation, update the system proxy configuration by using `netsh` on the system you're running the Operations console from to configure integration, and all management servers in the management group.
In the future, if you plan on reconnecting your management group to a Log Analyt
## Next steps
-To add functionality and gather data, see [Add Azure Monitor solutions from the Solutions Gallery](/previous-versions/azure/azure-monitor/insights/solutions).
+To add functionality and gather data, see [Add Azure Monitor solutions from the Solutions Gallery](/previous-versions/azure/azure-monitor/insights/solutions).
azure-monitor Annotations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/annotations.md
description: Learn how to create annotations to track deployment or other signif
Last updated 01/24/2023-+ # Release annotations for Application Insights
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
The SDKs catch many exceptions automatically, so you don't always have to call `
* **ASP.NET**: [Write code to catch exceptions](./asp-net-exceptions.md). * **Java EE**: [Exceptions are caught automatically](./opentelemetry-enable.md?tabs=java).
-* **JavaScript**: Exceptions are caught automatically. If you want to disable automatic collection, add a line to the SDK Loader Script that you insert in your webpages:
+* **JavaScript**: Exceptions are caught automatically. If you want to disable automatic collection, add a line to the JavaScript (Web) SDK Loader Script that you insert in your webpages:
```javascript ({
azure-monitor Api Filtering Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-filtering-sampling.md
For apps written using [ASP.NET Core](asp-net-core.md#add-telemetryinitializers)
Insert a JavaScript telemetry initializer, if needed. For more information on the telemetry initializers for the Application Insights JavaScript SDK, see [Telemetry initializers](https://github.com/microsoft/ApplicationInsights-JS#telemetry-initializers).
-#### [SDK Loader Script](#tab/sdkloaderscript)
+#### [JavaScript (Web) SDK Loader Script](#tab/javascriptwebsdkloaderscript)
-Insert a telemetry initializer by adding the onInit callback function in the [SDK Loader Script configuration](./javascript-sdk.md?tabs=sdkloaderscript#sdk-loader-script-configuration):
+Insert a telemetry initializer by adding the onInit callback function in the [JavaScript (Web) SDK Loader Script configuration](./javascript-sdk.md?tabs=javascriptwebsdkloaderscript#javascript-web-sdk-loader-script-configuration):
```html <script type="text/javascript">
-!function(v,y,T){<!-- Removed the SDK Loader Script code for brevity -->}(window,document,{
+!function(v,y,T){<!-- Removed the JavaScript (Web) SDK Loader Script code for brevity -->}(window,document,{
src: "https://js.monitor.azure.com/scripts/b/ai.2.min.js", crossOrigin: "anonymous", onInit: function (sdk) {
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
ms.devlang: csharp Last updated 04/24/2023-+ # Application Insights for ASP.NET Core applications
HttpContext.Features.Get<RequestTelemetry>().Properties["myProp"] = someData
## Enable client-side telemetry for web applications
-The preceding steps are enough to help you start collecting server-side telemetry. If your application has client-side components, follow the next steps to start collecting [usage telemetry](./usage-overview.md) SDK Loader Script injection by configuration.
+The preceding steps are enough to help you start collecting server-side telemetry. If your application has client-side components, follow the next steps to start collecting [usage telemetry](./usage-overview.md) by using JavaScript (Web) SDK Loader Script injection by configuration.
1. In `_ViewImports.cshtml`, add injection:
As an alternative to using `FullScript`, `ScriptBody` is available starting in A
</script> ```
-The `.cshtml` file names referenced earlier are from a default MVC application template. Ultimately, if you want to properly enable client-side monitoring for your application, the JavaScript SDK Loader Script must appear in the `<head>` section of each page of your application that you want to monitor. Add the JavaScript SDK Loader Script to `_Layout.cshtml` in an application template to enable client-side monitoring.
+The `.cshtml` file names referenced earlier are from a default MVC application template. Ultimately, if you want to properly enable client-side monitoring for your application, the JavaScript (Web) SDK Loader Script must appear in the `<head>` section of each page of your application that you want to monitor. Add the JavaScript (Web) SDK Loader Script to `_Layout.cshtml` in an application template to enable client-side monitoring.
-If your project doesn't include `_Layout.cshtml`, you can still add [client-side monitoring](./website-monitoring.md) by adding the JavaScript SDK Loader Script to an equivalent file that controls the `<head>` of all pages within your app. Alternatively, you can add the SDK Loader Script to multiple pages, but we don't recommend it.
+If your project doesn't include `_Layout.cshtml`, you can still add [client-side monitoring](./website-monitoring.md) by adding the JavaScript (Web) SDK Loader Script to an equivalent file that controls the `<head>` of all pages within your app. Alternatively, you can add the JavaScript (Web) SDK Loader Script to multiple pages, but we don't recommend it.
> [!NOTE] > JavaScript injection provides a default configuration experience. If you require [configuration](./javascript.md#configuration) beyond setting the connection string, you're required to remove auto-injection as described and manually add the [JavaScript SDK](./javascript.md#add-the-javascript-sdk).
azure-monitor Asp Net Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-dependencies.md
Last updated 03/22/2023 ms.devlang: csharp -+ # Dependency tracking in Application Insights
azure-monitor Asp Net Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-exceptions.md
ms.devlang: csharp Last updated 11/15/2022-+
azure-monitor Asp Net Trace Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-trace-logs.md
ms.devlang: csharp Last updated 04/18/2023-+ # Explore .NET/.NET Core and Python trace logs in Application Insights
azure-monitor Asp Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net.md
You have now successfully configured server-side application monitoring. If you
## Add client-side monitoring
-The previous sections provided guidance on methods to automatically and manually configure server-side monitoring. To add client-side monitoring, use the [client-side JavaScript SDK](javascript.md). You can monitor any web page's client-side transactions by adding a [JavaScript SDK Loader Script](./javascript-sdk.md?tabs=sdkloaderscript#get-started) before the closing `</head>` tag of the page's HTML.
+The previous sections provided guidance on methods to automatically and manually configure server-side monitoring. To add client-side monitoring, use the [client-side JavaScript SDK](javascript.md). You can monitor any web page's client-side transactions by adding a [JavaScript (Web) SDK Loader Script](./javascript-sdk.md?tabs=javascriptwebsdkloaderscript#get-started) before the closing `</head>` tag of the page's HTML.
-Although it's possible to manually add the SDK Loader Script to the header of each HTML page, we recommend that you instead add the SDK Loader Script to a primary page. That action injects the SDK Loader Script into all pages of a site.
+Although it's possible to manually add the JavaScript (Web) SDK Loader Script to the header of each HTML page, we recommend that you instead add the JavaScript (Web) SDK Loader Script to a primary page. That action injects the JavaScript (Web) SDK Loader Script into all pages of a site.
-For the template-based ASP.NET MVC app from this article, the file that you need to edit is *_Layout.cshtml*. You can find it under **Views** > **Shared**. To add client-side monitoring, open *_Layout.cshtml* and follow the [SDK Loader Script-based setup instructions](./javascript-sdk.md?tabs=sdkloaderscript#get-started) from the article about client-side JavaScript SDK configuration.
+For the template-based ASP.NET MVC app from this article, the file that you need to edit is *_Layout.cshtml*. You can find it under **Views** > **Shared**. To add client-side monitoring, open *_Layout.cshtml* and follow the [JavaScript (Web) SDK Loader Script-based setup instructions](./javascript-sdk.md?tabs=javascriptwebsdkloaderscript#get-started) from the article about client-side JavaScript SDK configuration.
## Troubleshooting
azure-monitor Codeless Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md
Links are provided to more information for each supported scenario.
> [!NOTE] > Auto-instrumentation was known as "codeless attach" before October 2021.
-## SDK Loader Script injection by configuration
+## JavaScript (Web) SDK Loader Script injection by configuration
-If you're using the following supported SDKs, you can configure the SDK Loader Script to inject from the server-side SDK onto each page.
+If you're using the following supported SDKs, you can configure the JavaScript (Web) SDK Loader Script to inject from the server-side SDK onto each page.
> [!NOTE] > See the linked article for instructions on how to install the server-side SDK.
If you're using the following supported SDKs, you can configure the SDK Loader
| ASP.NET Core | [Enable client-side telemetry for web applications](./asp-net-core.md?tabs=netcorenew%2Cnetcore6#enable-client-side-telemetry-for-web-applications) |
| Node.js | [Automatic web Instrumentation](./nodejs.md#automatic-web-instrumentationpreview) |
+For other methods to instrument your application with the Application Insights JavaScript SDK, see [Get started with the JavaScript SDK](./javascript-sdk.md).
+ ## Next steps * [Application Insights overview](app-insights-overview.md)
azure-monitor Configuration With Applicationinsights Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/configuration-with-applicationinsights-config.md
Last updated 03/22/2023 ms.devlang: csharp -+
azure-monitor Continuous Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/continuous-monitoring.md
Title: Continuous monitoring of your Azure DevOps release pipeline | Microsoft D
description: This article provides instructions to quickly set up continuous monitoring with Azure Pipelines and Application Insights. Last updated 05/01/2020-+ # Add continuous monitoring to your release pipeline
azure-monitor Custom Operations Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/custom-operations-tracking.md
ms.devlang: csharp Last updated 11/26/2019-+ # Track custom operations with Application Insights .NET SDK
azure-monitor Distributed Tracing Telemetry Correlation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/distributed-tracing-telemetry-correlation.md
This feature is in `Microsoft.ApplicationInsights.JavaScript`. It's disabled by
distributedTracingMode: DistributedTracingModes.W3C ``` -- **[SDK Loader Script-based setup](./javascript-sdk.md?tabs=sdkloaderscript#get-started)**
+- **[JavaScript (Web) SDK Loader Script-based setup](./javascript-sdk.md?tabs=javascriptwebsdkloaderscript#get-started)**
Add the following configuration: ```
azure-monitor Get Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/get-metric.md
Last updated 04/05/2023 ms.devlang: csharp -+ # Custom metric collection in .NET and .NET Core
azure-monitor Ilogger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ilogger.md
Last updated 04/24/2023 ms.devlang: csharp -+ # Application Insights logging with .NET
azure-monitor Javascript Feature Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-feature-extensions.md
-# Feature extensions for the Application Insights JavaScript SDK (Click Analytics)
+# Enable Click Analytics Auto-Collection plug-in
Application Insights JavaScript SDK feature extensions are extra features that can be added to the Application Insights JavaScript SDK to enhance its functionality. In this article, we cover the Click Analytics plug-in, which automatically tracks click events on webpages and uses `data-*` attributes or customized tags on HTML elements to populate event telemetry.
+> [!IMPORTANT]
+> If you haven't already, you need to first [enable Azure Monitor Application Insights Real User Monitoring](./javascript-sdk.md) before you enable the Click Analytics plug-in.
-## Get started
+## What data does the plug-in collect?
+
+The following key properties are captured by default when the plug-in is enabled.
+
+### Custom event properties
+
+| Name | Description | Sample |
+| | |--|
+| Name | The name of the custom event. For more information on how a name gets populated, see [Name column](#name).| About |
+| itemType | Type of event. | customEvent |
+| sdkVersion | Version of Application Insights SDK along with click plug-in. | JavaScript:2_ClickPlugin2 |
+
+### Custom dimensions
+
+| Name | Description | Sample |
+| | |--|
+| actionType | Action type that caused the click event. It can be a left or right click. | CL |
+| baseTypeSource | Base Type source of the custom event. | ClickEvent |
+| clickCoordinates | Coordinates where the click event is triggered. | 659X47 |
+| content | Placeholder to store extra `data-*` attributes and values. | [{sample1:value1, sample2:value2}] |
+| pageName | Title of the page where the click event is triggered. | Sample Title |
+| parentId | ID or name of the parent element. For more information on how a parentId is populated, see [parentId key](#parentid-key). | navbarContainer |
+
+### Custom measurements
+
+| Name | Description | Sample |
+| | |--|
+| timeToAction | Time taken in milliseconds for the user to click the element since the initial page load. | 87407 |
-Users can set up the Click Analytics Auto-Collection plug-in via SDK Loader Script or npm and then optionally add a framework extension.
+
+## Add the Click Analytics plug-in
+
+Users can set up the Click Analytics Auto-Collection plug-in via JavaScript (Web) SDK Loader Script or npm and then optionally add a framework extension.
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
-### [SDK Loader Script setup](#tab/sdkloaderscript)
+### 1. Add the code
+
+#### [JavaScript (Web) SDK Loader Script](#tab/javascriptwebsdkloaderscript)
Ignore this setup if you use the npm setup.
Ignore this setup if you use the npm setup.
[clickPluginInstance.identifier] : clickPluginConfig }, };
- // Application Insights SDK Loader Script code
- !function(v,y,T){<!-- Removed the SDK Loader Script code for brevity -->}(window,document,{
+ // Application Insights JavaScript (Web) SDK Loader Script code
+ !function(v,y,T){<!-- Removed the JavaScript (Web) SDK Loader Script code for brevity -->}(window,document,{
src: "https://js.monitor.azure.com/scripts/b/ai.2.min.js", crossOrigin: "anonymous", cfg: configObj // configObj is defined above.
Ignore this setup if you use the npm setup.
``` > [!NOTE]
-> To add or update SDK Loader Script configuration, see [SDK Loader Script configuration](./javascript-sdk.md?tabs=sdkloaderscript#sdk-loader-script-configuration).
+> To add or update JavaScript (Web) SDK Loader Script configuration, see [JavaScript (Web) SDK Loader Script configuration](./javascript-sdk.md?tabs=javascriptwebsdkloaderscript#javascript-web-sdk-loader-script-configuration).
-### [npm setup](#tab/npmsetup)
+#### [npm package](#tab/npmpackage)
Install the npm package:
appInsights.loadAppInsights();
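Because the diff elides the body of the npm setup, here's a minimal sketch of the typical pattern, assuming the package and plug-in names shown elsewhere in this article (`@microsoft/applicationinsights-clickanalytics-js`, `ClickAnalyticsPlugin`):

```javascript
// Minimal sketch (not the full documented sample): initialize Application Insights
// with the Click Analytics plug-in installed from npm.
import { ApplicationInsights } from '@microsoft/applicationinsights-web';
import { ClickAnalyticsPlugin } from '@microsoft/applicationinsights-clickanalytics-js';

const clickPluginInstance = new ClickAnalyticsPlugin();
const clickPluginConfig = { autoCapture: true }; // capture click events automatically

const appInsights = new ApplicationInsights({
  config: {
    connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
    extensions: [clickPluginInstance],
    extensionConfig: {
      [clickPluginInstance.identifier]: clickPluginConfig
    }
  }
});
appInsights.loadAppInsights();
```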
-## Add a framework extension
+### 2. (Optional) Add a framework extension
Add a framework extension, if needed.
-### [React](#tab/react)
+#### [React](#tab/react)
```javascript import React from 'react';
appInsights.loadAppInsights();
``` > [!NOTE]
-> To add React configuration, see [React configuration](./javascript-framework-extensions.md?tabs=react#configuration). For more information on the React plug-in, see [React plug-in](./javascript-framework-extensions.md?tabs=react#react-application-insights-javascript-sdk-plug-in).
+> To add React configuration, see [React configuration](./javascript-framework-extensions.md?tabs=react#add-configuration). For more information on the React plug-in, see [React plug-in](./javascript-framework-extensions.md?tabs=react).
-### [React Native](#tab/reactnative)
+#### [React Native](#tab/reactnative)
```typescript import { ApplicationInsights } from '@microsoft/applicationinsights-web';
appInsights.loadAppInsights();
``` > [!NOTE]
-> To add React Native configuration, see [Enable Correlation for React Native](./javascript-framework-extensions.md?tabs=reactnative#enable-correlation). For more information on the React Native plug-in, see [React Native plug-in](./javascript-framework-extensions.md?tabs=reactnative#react-native-plugin-for-application-insights-javascript-sdk).
+> For more information on the React Native plug-in, see [React Native plug-in](./javascript-framework-extensions.md?tabs=reactnative).
-### [Angular](#tab/angular)
+#### [Angular](#tab/angular)
```javascript import { ApplicationInsights } from '@microsoft/applicationinsights-web';
export class AppComponent {
``` > [!NOTE]
-> To add Angular configuration, see [Enable Correlation for Angular](./javascript-framework-extensions.md?tabs=angular#enable-correlation). For more information on the Angular plug-in, see [Angular plug-in](./javascript-framework-extensions.md?tabs=angular#angular-plugin-for-application-insights-javascript-sdk).
+> For more information on the Angular plug-in, see [Angular plug-in](./javascript-framework-extensions.md?tabs=angular).
-## Set the authenticated user context
+### 3. (Optional) Set the authenticated user context
-If you need to set this optional setting, see [Set the authenticated user context](https://github.com/microsoft/ApplicationInsights-JS/blob/master/API-reference.md#setauthenticatedusercontext). This setting isn't required to use Click Analytics.
+If you need to set this optional setting, see [Set the authenticated user context](https://github.com/microsoft/ApplicationInsights-JS/blob/master/API-reference.md#setauthenticatedusercontext). This setting isn't required to use the Click Analytics plug-in.
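As a hedged illustration of that optional call (signature per the linked API reference), after `loadAppInsights()` you might set:

```javascript
// Optional: associate telemetry with a signed-in user.
// 'userId' and 'accountId' are placeholder values; the third argument stores the IDs in a cookie.
appInsights.setAuthenticatedUserContext('userId', 'accountId', true);
```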
## Use the plug-in
You can replace the asterisk (`*`) in `data-*` with any name following the [prod
- The name must not contain a colon (U+003A). - The name must not contain capital letters.
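For illustration only (the attribute name is hypothetical), the DOM `dataset` API shows how a compliant name maps to a `data-*` attribute:

```javascript
// A camelCase dataset key maps to a lowercase, hyphenated data-* attribute,
// which satisfies the naming rules above.
const button = document.createElement('button');
button.dataset.sampleId = 'free-trial';   // rendered as data-sample-id="free-trial"
button.textContent = 'Start free trial';
document.body.appendChild(button);
```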
-## What data does the plug-in collect?
-
-The following key properties are captured by default when the plug-in is enabled.
-
-### Custom event properties
-
-| Name | Description | Sample |
-| | |--|
-| Name | The name of the custom event. For more information on how a name gets populated, see [Name column](#name).| About |
-| itemType | Type of event. | customEvent |
-|sdkVersion | Version of Application Insights SDK along with click plug-in.|JavaScript:2_ClickPlugin2|
-
-### Custom dimensions
-
-| Name | Description | Sample |
-| | |--|
-| actionType | Action type that caused the click event. It can be a left or right click. | CL |
-| baseTypeSource | Base Type source of the custom event. | ClickEvent |
-| clickCoordinates | Coordinates where the click event is triggered. | 659X47 |
-| content | Placeholder to store extra `data-*` attributes and values. | [{sample1:value1, sample2:value2}] |
-| pageName | Title of the page where the click event is triggered. | Sample Title |
-| parentId | ID or name of the parent element. For more information on how a parentId is populated, see [parentId key](#parentid-key). | navbarContainer |
-
-### Custom measurements
-
-| Name | Description | Sample |
-| | |--|
-| timeToAction | Time taken in milliseconds for the user to click the element since the initial page load. | 87407 |
-
-## Advanced configuration
+## Add advanced configuration
| Name | Type | Default | Description | | | --| --| - |
appInsights.loadAppInsights();
## Sample app
-[Simple web app with the Click Analytics Autocollection Plug-in enabled](https://go.microsoft.com/fwlink/?linkid=2152871)
+See a [simple web app with the Click Analytics Autocollection Plug-in enabled](https://go.microsoft.com/fwlink/?linkid=2152871) for how to implement custom event properties such as `Name` and `parentid` and custom behavior and content. See the [sample app readme](https://github.com/Azure-Samples/Application-Insights-Click-Plugin-Demo/blob/main/README.md) for information about where to find click data.
## Examples of `parentId` key
export const clickPluginConfigWithParentDataTag = {
In example 2, for the clicked element `<Button>`, the value of `parentId` is `parentid2`. Even though `parentDataTag` is declared, the `data-parentid` definition takes precedence. > [!NOTE] > If the `data-parentid` attribute was defined within the div element with `className="test2"`, the value for `parentId` would still be `parentid2`.-
+
### Example 3 ```javascript
See the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/ap
## Next steps
+- [Confirm data is flowing](./javascript-sdk.md#5-confirm-data-is-flowing).
- See the [documentation on utilizing HEART workbook](usage-heart.md) for expanded product analytics. - See the [GitHub repository](https://github.com/microsoft/ApplicationInsights-JS/tree/master/extensions/applicationinsights-clickanalytics-js) and [npm Package](https://www.npmjs.com/package/@microsoft/applicationinsights-clickanalytics-js) for the Click Analytics Autocollection Plug-in. - Use [Events Analysis in the Usage experience](usage-segmentation.md) to analyze top clicks and slice by available dimensions. - Use the [Telemetry Viewer extension](https://github.com/microsoft/ApplicationInsights-JS/tree/master/tools/chrome-debug-extension) to list out the individual events in the network payload and monitor the internal calls within Application Insights.-- See a [sample app](https://go.microsoft.com/fwlink/?linkid=2152871) for how to implement custom event properties such as Name and parentid and custom behavior and content.-- See the [sample app readme](https://github.com/Azure-Samples/Application-Insights-Click-Plugin-Demo/blob/main/README.md) for where to find click data and [Log Analytics](../logs/log-analytics-tutorial.md#write-a-query) if you aren't familiar with the process of writing a query.
+- See [Log Analytics](../logs/log-analytics-tutorial.md#write-a-query) if you aren't familiar with the process of writing a query.
- Build a [workbook](../visualize/workbooks-overview.md) or [export to Power BI](../logs/log-powerbi.md) to create custom visualizations of click data.--
azure-monitor Javascript Framework Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-framework-extensions.md
Title: Framework extensions for Application Insights JavaScript SDK
+ Title: Enable a framework extension for Application Insights JavaScript SDK
description: Learn how to install and use JavaScript framework extensions for the Application Insights JavaScript SDK. ibiza
-# Framework extensions for Application Insights JavaScript SDK
+# Enable a framework extension for Application Insights JavaScript SDK
In addition to the core SDK, there are also plugins available for specific frameworks, such as the [React plugin](javascript-framework-extensions.md?tabs=react), the [React Native plugin](javascript-framework-extensions.md?tabs=reactnative), and the [Angular plugin](javascript-framework-extensions.md?tabs=angular). These plugins provide extra functionality and integration with the specific framework.
-## [React](#tab/react)
+> [!IMPORTANT]
+> If you haven't already, you need to first [enable Azure Monitor Application Insights Real User Monitoring](./javascript-sdk.md) before you enable a framework extension.
+
+## Prerequisites
+
+### [React](#tab/react)
+
+None.
+
+### [React Native](#tab/reactnative)
+
+You must be using a version >= 2.0.0 of `@microsoft/applicationinsights-web`. This plugin only works in react-native apps. It doesn't work with [apps using the Expo framework](https://docs.expo.io/) or Create React Native App, which is based on the Expo framework.
+
+### [Angular](#tab/angular)
-### React Application Insights JavaScript SDK plug-in
+None.
+++
+## What does the plug-in enable?
+
+### [React](#tab/react)
The React plug-in for the Application Insights JavaScript SDK enables: - Tracking of router changes - React components usage statistics
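For instance, component usage statistics are typically collected by wrapping a component with the plug-in's higher-order component; a hedged sketch follows (the component and import path are hypothetical):

```javascript
// Sketch: collect usage telemetry for one React component with withAITracking.
import React from 'react';
import { withAITracking } from '@microsoft/applicationinsights-react-js';
import { reactPlugin } from './ApplicationInsightsService'; // assumed module exporting your ReactPlugin instance

function ProductList() {
  return <ul><li>Sample item</li></ul>;
}

// The wrapped component reports usage metrics through the React plug-in.
export default withAITracking(reactPlugin, ProductList);
```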
-### Get started
+### [React Native](#tab/reactnative)
+
+The React Native plugin for Application Insights JavaScript SDK collects device information. By default, this plugin automatically collects:
+
+- **Unique Device ID** (Also known as Installation ID.)
+- **Device Model Name** (Such as iPhone XS, Samsung Galaxy Fold, Huawei P30 Pro etc.)
+- **Device Type** (For example, handset, tablet, etc.)
+
+### [Angular](#tab/angular)
+
+The Angular plugin for the Application Insights JavaScript SDK enables:
+
+- Tracking of router changes
+- Tracking uncaught exceptions
+
+> [!WARNING]
+> Angular plugin is NOT ECMAScript 3 (ES3) compatible.
+
+> [!IMPORTANT]
+> When we add support for a new Angular version, our NPM package becomes incompatible with down-level Angular versions. Continue to use older NPM packages until you're ready to upgrade your Angular version.
+++
+## Add a plug-in
+
+To add a plug-in, follow the steps in this section.
-Install the npm package:
+### 1. Install the package
+
+#### [React](#tab/react)
```bash
npm install @microsoft/applicationinsights-react-js
```
-### Basic usage
+#### [React Native](#tab/reactnative)
-Initialize a connection to Application Insights:
+By default, this plugin relies on the [`react-native-device-info` package](https://www.npmjs.com/package/react-native-device-info). You must install and link to this package. Keep the `react-native-device-info` package up to date to collect the latest device names using your app.
+
+Since v3, support for accessing the DeviceInfo has been abstracted into an interface `IDeviceInfoModule` to enable you to use / set your own device info module. This interface uses the same function names and results as `react-native-device-info`.
+
+```zsh
+
+npm install --save @microsoft/applicationinsights-react-native @microsoft/applicationinsights-web
+npm install --save react-native-device-info
+react-native link react-native-device-info
+
+```
+
+#### [Angular](#tab/angular)
+
+```bash
+npm install @microsoft/applicationinsights-angularplugin-js
+```
+++
+### 2. Add the extension to your code
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
+#### [React](#tab/react)
+
+Initialize a connection to Application Insights:
+ > [!TIP] > If you want to add the [Click Analytics plug-in](./javascript-feature-extensions.md), uncomment the lines for Click Analytics and delete `extensions: [reactPlugin],`.
var appInsights = new ApplicationInsights({
appInsights.loadAppInsights(); ```
+> [!TIP]
+> If you're adding the Click Analytics plug-in, see [Use the Click Analytics plug-in](./javascript-feature-extensions.md#use-the-plug-in) to continue with the setup process.
+
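Because the diff above elides most of the initialization body, here's a minimal sketch of the usual React plug-in setup; the `history`-based route tracking is an assumption and not part of the excerpt:

```javascript
// Sketch of a typical React plug-in initialization (assumed pattern; adjust to your router setup).
import { ApplicationInsights } from '@microsoft/applicationinsights-web';
import { ReactPlugin } from '@microsoft/applicationinsights-react-js';
import { createBrowserHistory } from 'history'; // assumption: history-based route tracking

const browserHistory = createBrowserHistory({ basename: '' });
const reactPlugin = new ReactPlugin();
const appInsights = new ApplicationInsights({
  config: {
    connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
    extensions: [reactPlugin],
    extensionConfig: {
      [reactPlugin.identifier]: { history: browserHistory }
    }
  }
});
appInsights.loadAppInsights();
```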
+#### [React Native](#tab/reactnative)
+
+To use this plugin, you need to construct the plugin and add it as an `extension` to your existing Application Insights instance.
+
+> [!TIP]
+> If you want to add the [Click Analytics plug-in](./javascript-feature-extensions.md), uncomment the lines for Click Analytics and delete `extensions: [RNPlugin]`.
+
+```typescript
+import { ApplicationInsights } from '@microsoft/applicationinsights-web';
+import { ReactNativePlugin } from '@microsoft/applicationinsights-react-native';
+
+var RNPlugin = new ReactNativePlugin();
+// Add the Click Analytics plug-in.
+/* var clickPluginInstance = new ClickAnalyticsPlugin();
+var clickPluginConfig = {
+ autoCapture: true
+}; */
+var appInsights = new ApplicationInsights({
+ config: {
+ connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
+ // If you're adding the Click Analytics plug-in, delete the next line.
+ extensions: [RNPlugin]
+ // Add the Click Analytics plug-in.
+ /* extensions: [RNPlugin, clickPluginInstance],
+ extensionConfig: {
+ [clickPluginInstance.identifier]: clickPluginConfig
+ } */
+ }
+});
+appInsights.loadAppInsights();
+
+```
+
+#### Disabling automatic device info collection
+
+```typescript
+import { ApplicationInsights } from '@microsoft/applicationinsights-web';
+
+var RNPlugin = new ReactNativePlugin();
+var appInsights = new ApplicationInsights({
+ config: {
+ instrumentationKey: 'YOUR_INSTRUMENTATION_KEY_GOES_HERE',
+ disableDeviceCollection: true,
+ extensions: [RNPlugin]
+ }
+});
+appInsights.loadAppInsights();
+```
+
+#### Using your own device info collection class
+
+```typescript
+import { ApplicationInsights } from '@microsoft/applicationinsights-web';
+
+// Simple inline constant implementation
+const myDeviceInfoModule = {
+ getModel: () => "deviceModel",
+ getDeviceType: () => "deviceType",
+ // v5 returns a string while latest returns a promise
+ getUniqueId: () => "deviceId", // This "may" also return a Promise<string>
+};
+
+var RNPlugin = new ReactNativePlugin();
+RNPlugin.setDeviceInfoModule(myDeviceInfoModule);
+
+var appInsights = new ApplicationInsights({
+ config: {
+ instrumentationKey: 'YOUR_INSTRUMENTATION_KEY_GOES_HERE',
+ extensions: [RNPlugin]
+ }
+});
+
+appInsights.loadAppInsights();
+```
+
+> [!TIP]
+> If you're adding the Click Analytics plug-in, see [Use the Click Analytics plug-in](./javascript-feature-extensions.md#use-the-plug-in) to continue with the setup process.
+
+#### [Angular](#tab/angular)
+
+Set up an instance of Application Insights in the entry component in your app:
+
+> [!IMPORTANT]
+> When using the ErrorService, there is an implicit dependency on the `@microsoft/applicationinsights-analytics-js` extension. You MUST include either the `@microsoft/applicationinsights-web` package or the `@microsoft/applicationinsights-analytics-js` extension. Otherwise, unhandled errors caught by the error service will not be sent.
+
+> [!TIP]
+> If you want to add the [Click Analytics plug-in](./javascript-feature-extensions.md), uncomment the lines for Click Analytics and delete `extensions: [angularPlugin],`.
+
+```js
+import { Component } from '@angular/core';
+import { ApplicationInsights } from '@microsoft/applicationinsights-web';
+import { AngularPlugin } from '@microsoft/applicationinsights-angularplugin-js';
+import { Router } from '@angular/router';
+
+@Component({
+ selector: 'app-root',
+ templateUrl: './app.component.html',
+ styleUrls: ['./app.component.css']
+})
+export class AppComponent {
+ constructor(
+ private router: Router
+ ){
+ var angularPlugin = new AngularPlugin();
+ // Add the Click Analytics plug-in.
+ /* var clickPluginInstance = new ClickAnalyticsPlugin();
+ var clickPluginConfig = {
+ autoCapture: true
+ }; */
+ const appInsights = new ApplicationInsights({
+ config: {
+ connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
+ // If you're adding the Click Analytics plug-in, delete the next line.
+ extensions: [angularPlugin],
+ // Add the Click Analytics plug-in.
+ // extensions: [angularPlugin, clickPluginInstance],
+ extensionConfig: {
+ [angularPlugin.identifier]: { router: this.router }
+ // Add the Click Analytics plug-in.
+ // [clickPluginInstance.identifier]: clickPluginConfig
+ }
+ }
+ });
+ appInsights.loadAppInsights();
+ }
+}
+```
+
+To track uncaught exceptions, set up ApplicationinsightsAngularpluginErrorService in `app.module.ts`:
+
+> [!IMPORTANT]
+> When using the ErrorService, there is an implicit dependency on the `@microsoft/applicationinsights-analytics-js` extension. You MUST include either the `@microsoft/applicationinsights-web` package or the `@microsoft/applicationinsights-analytics-js` extension. Otherwise, unhandled errors caught by the error service will not be sent.
+
+```js
+import { ApplicationinsightsAngularpluginErrorService } from '@microsoft/applicationinsights-angularplugin-js';
+
+@NgModule({
+ ...
+ providers: [
+ {
+ provide: ErrorHandler,
+ useClass: ApplicationinsightsAngularpluginErrorService
+ }
+ ]
+ ...
+})
+export class AppModule { }
+```
+
+To chain more custom error handlers, create custom error handlers that implement IErrorService:
+
+```javascript
+import { IErrorService } from '@microsoft/applicationinsights-angularplugin-js';
+
+export class CustomErrorHandler implements IErrorService {
+ handleError(error: any) {
+ ...
+ }
+}
+```
+
+Then pass the `errorServices` array through `extensionConfig`:
+
+```javascript
+extensionConfig: {
+  [angularPlugin.identifier]: {
+    router: this.router,
+    errorServices: [new CustomErrorHandler()]
+  }
+}
+```
+
+> [!TIP]
+> If you're adding the Click Analytics plug-in, see [Use the Click Analytics plug-in](./javascript-feature-extensions.md#use-the-plug-in) to continue with the setup process.
+++
+## Add configuration
+
+### [React](#tab/react)
+ ### Configuration | Name | Default | Description |
const App = () => {
The `AppInsightsErrorBoundary` requires two props to be passed to it. They're the `ReactPlugin` instance created for the application and a component to be rendered when an error occurs. When an unhandled error occurs, `trackException` is called with the information provided to the error boundary, and the `onError` component appears.
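A hedged sketch of that usage (the `reactPlugin` import path and child component are hypothetical):

```javascript
// Sketch: render a fallback UI and call trackException when a child component throws.
import React from 'react';
import { AppInsightsErrorBoundary } from '@microsoft/applicationinsights-react-js';
import { reactPlugin } from './ApplicationInsightsService'; // assumed module exporting your ReactPlugin instance

const App = () => (
  <AppInsightsErrorBoundary onError={() => <h1>Something went wrong</h1>} appInsights={reactPlugin}>
    <MyRoutes /> {/* hypothetical application content */}
  </AppInsightsErrorBoundary>
);

export default App;
```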
-### Enable correlation
-
-Correlation generates and sends data that enables distributed tracing and powers the [application map](../app/app-map.md), [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools.
-
-In JavaScript, correlation is turned off by default to minimize the telemetry we send by default. To enable correlation, see the [JavaScript client-side correlation documentation](./javascript.md#enable-distributed-tracing).
-
-#### Route tracking
-
-The React plug-in automatically tracks route changes and collects other React-specific telemetry.
-
-> [!NOTE]
-> `enableAutoRouteTracking` should be set to `false`. If it's set to `true`, then when the route changes, duplicate `PageViews` can be sent.
-
-For `react-router v6` or other scenarios where router history isn't exposed, you can add `enableAutoRouteTracking: true` to your [setup configuration](#basic-usage).
-
-#### PageView
-
-If a custom `PageView` duration isn't provided, `PageView` duration defaults to a value of `0`.
-
-### Sample app
-
-Check out the [Application Insights React demo](https://github.com/microsoft/applicationinsights-react-js/tree/main/sample/applicationinsights-react-sample).
-
-> [!TIP]
-> If you're adding the Click Analytics plug-in, see [Use the Click Analytics plug-in](./javascript-feature-extensions.md#use-the-plug-in) to continue with the setup process.
-
-## [React Native](#tab/reactnative)
-
-### React Native plugin for Application Insights JavaScript SDK
-
-The React Native plugin for Application Insights JavaScript SDK collects device information. By default, this plugin automatically collects:
--- **Unique Device ID** (Also known as Installation ID.)-- **Device Model Name** (Such as iPhone XS, Samsung Galaxy Fold, Huawei P30 Pro etc.)-- **Device Type** (For example, handset, tablet, etc.)-
-### Requirements
-
-You must be using a version >= 2.0.0 of `@microsoft/applicationinsights-web`. This plugin only works in react-native apps. It doesn't work with [apps using the Expo framework](https://docs.expo.io/) or Create React Native App, which is based on the Expo framework.
-
-### Getting started
-
-By default, this plugin relies on the [`react-native-device-info` package](https://www.npmjs.com/package/react-native-device-info). You must install and link to this package. Keep the `react-native-device-info` package up to date to collect the latest device names using your app.
-
-Since v3, support for accessing the DeviceInfo has been abstracted into an interface `IDeviceInfoModule` to enable you to use / set your own device info module. This interface uses the same function names and result `react-native-device-info`.
-
-```zsh
-
-npm install --save @microsoft/applicationinsights-react-native @microsoft/applicationinsights-web
-npm install --save react-native-device-info
-react-native link react-native-device-info
-
-```
-
-### Initializing the plugin
-
-To use this plugin, you need to construct the plugin and add it as an `extension` to your existing Application Insights instance.
--
-> [!TIP]
-> If you want to add the [Click Analytics plug-in](./javascript-feature-extensions.md), uncomment the lines for Click Analytics and delete `extensions: [RNPlugin]`.
-
-```typescript
-import { ApplicationInsights } from '@microsoft/applicationinsights-web';
-import { ReactNativePlugin } from '@microsoft/applicationinsights-react-native';
-
-var RNPlugin = new ReactNativePlugin();
-// Add the Click Analytics plug-in.
-/* var clickPluginInstance = new ClickAnalyticsPlugin();
-var clickPluginConfig = {
- autoCapture: true
-}; */
-var appInsights = new ApplicationInsights({
- config: {
- connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
- // If you're adding the Click Analytics plug-in, delete the next line.
- extensions: [RNPlugin]
- // Add the Click Analytics plug-in.
- /* extensions: [RNPlugin, clickPluginInstance],
- extensionConfig: {
- [clickPluginInstance.identifier]: clickPluginConfig
- } */
- }
-});
-appInsights.loadAppInsights();
-
-```
-
-#### Disabling automatic device info collection
-
-```typescript
-import { ApplicationInsights } from '@microsoft/applicationinsights-web';
-
-var RNPlugin = new ReactNativePlugin();
-var appInsights = new ApplicationInsights({
- config: {
- instrumentationKey: 'YOUR_INSTRUMENTATION_KEY_GOES_HERE',
- disableDeviceCollection: true,
- extensions: [RNPlugin]
- }
-});
-appInsights.loadAppInsights();
-```
-
-#### Using your own device info collection class
-
-```typescript
-import { ApplicationInsights } from '@microsoft/applicationinsights-web';
-
-// Simple inline constant implementation
-const myDeviceInfoModule = {
- getModel: () => "deviceModel",
- getDeviceType: () => "deviceType",
- // v5 returns a string while latest returns a promise
- getUniqueId: () => "deviceId", // This "may" also return a Promise<string>
-};
-
-var RNPlugin = new ReactNativePlugin();
-RNPlugin.setDeviceInfoModule(myDeviceInfoModule);
-
-var appInsights = new ApplicationInsights({
- config: {
- instrumentationKey: 'YOUR_INSTRUMENTATION_KEY_GOES_HERE',
- extensions: [RNPlugin]
- }
-});
-
-appInsights.loadAppInsights();
-```
+### [React Native](#tab/reactnative)
### IDeviceInfoModule
export interface IDeviceInfoModule {
If events are getting "blocked" because the `Promise` returned via `getUniqueId` is never resolved / rejected, you can call `setDeviceId()` on the plugin to "unblock" this waiting state. There is also an automatic timeout configured via `uniqueIdPromiseTimeout` (defaults to 5 seconds), which will internally call `setDeviceId()` with any previously configured value.
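As a hedged one-line sketch of that call (the ID value is a placeholder, and the single string argument is an assumption):

```javascript
// Unblock pending events by supplying a device ID directly if getUniqueId never settles.
RNPlugin.setDeviceId('fallback-device-id');
```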
-### Enable Correlation
-
-Correlation generates and sends data that enables distributed tracing and powers the [application map](../app/app-map.md), [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools.
-
-JavaScript correlation is turned off by default in order to minimize the telemetry we send by default. To enable correlation, reference [JavaScript client-side correlation documentation](./javascript.md#enable-distributed-tracing).
-
-#### PageView
-
-If a custom `PageView` duration isn't provided, `PageView` duration defaults to a value of 0.
-
-> [!TIP]
-> If you're adding the Click Analytics plug-in, see [Use the Click Analytics plug-in](./javascript-feature-extensions.md#use-the-plug-in) to continue with the setup process.
-
-
-## [Angular](#tab/angular)
-
-## Angular plugin for Application Insights JavaScript SDK
-
-The Angular plugin for the Application Insights JavaScript SDK, enables:
--- Tracking of router changes-- Tracking uncaught exceptions-
-> [!WARNING]
-> Angular plugin is NOT ECMAScript 3 (ES3) compatible.
-
-> [!IMPORTANT]
-> When we add support for a new Angular version, our NPM package becomes incompatible with down-level Angular versions. Continue to use older NPM packages until you're ready to upgrade your Angular version.
-
-### Getting started
-
-Install npm package:
-
-```bash
-npm install @microsoft/applicationinsights-angularplugin-js
-```
-
-### Basic usage
-
-Set up an instance of Application Insights in the entry component in your app:
---
-> [!IMPORTANT]
-> When using the ErrorService, there is an implicit dependency on the `@microsoft/applicationinsights-analytics-js` extension. you MUST include either the `'@microsoft/applicationinsights-web'` or include the `@microsoft/applicationinsights-analytics-js` extension. Otherwise, unhandled errors caught by the error service will not be sent.
-
-> [!TIP]
-> If you want to add the [Click Analytics plug-in](./javascript-feature-extensions.md), uncomment the lines for Click Analytics and delete `extensions: [angularPlugin],`.
-
-```js
-import { Component } from '@angular/core';
-import { ApplicationInsights } from '@microsoft/applicationinsights-web';
-import { AngularPlugin } from '@microsoft/applicationinsights-angularplugin-js';
-import { Router } from '@angular/router';
-
-@Component({
- selector: 'app-root',
- templateUrl: './app.component.html',
- styleUrls: ['./app.component.css']
-})
-export class AppComponent {
- constructor(
- private router: Router
- ){
- var angularPlugin = new AngularPlugin();
- // Add the Click Analytics plug-in.
- /* var clickPluginInstance = new ClickAnalyticsPlugin();
- var clickPluginConfig = {
- autoCapture: true
- }; */
- const appInsights = new ApplicationInsights({
- config: {
- connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
- // If you're adding the Click Analytics plug-in, delete the next line.
- extensions: [angularPlugin],
- // Add the Click Analytics plug-in.
- // extensions: [angularPlugin, clickPluginInstance],
- extensionConfig: {
- [angularPlugin.identifier]: { router: this.router }
- // Add the Click Analytics plug-in.
- // [clickPluginInstance.identifier]: clickPluginConfig
- }
- }
- });
- appInsights.loadAppInsights();
- }
-}
-```
-
-To track uncaught exceptions, set up ApplicationinsightsAngularpluginErrorService in `app.module.ts`:
-
-> [!IMPORTANT]
-> When using the ErrorService, there is an implicit dependency on the `@microsoft/applicationinsights-analytics-js` extension. you MUST include either the `'@microsoft/applicationinsights-web'` or include the `@microsoft/applicationinsights-analytics-js` extension. Otherwise, unhandled errors caught by the error service will not be sent.
-
-```js
-import { ApplicationinsightsAngularpluginErrorService } from '@microsoft/applicationinsights-angularplugin-js';
-
-@NgModule({
- ...
- providers: [
- {
- provide: ErrorHandler,
- useClass: ApplicationinsightsAngularpluginErrorService
- }
- ]
- ...
-})
-export class AppModule { }
-```
-
-To chain more custom error handlers, create custom error handlers that implement IErrorService:
-
-```javascript
-import { IErrorService } from '@microsoft/applicationinsights-angularplugin-js';
-
-export class CustomErrorHandler implements IErrorService {
- handleError(error: any) {
- ...
- }
-}
-```
-
-And pass errorServices array through extensionConfig:
+### [Angular](#tab/angular)
-```javascript
-extensionConfig: {
- [angularPlugin.identifier]: {
- router: this.router,
- error
- }
- }
-```
+None.
-### Enable Correlation
+
-Correlation generates and sends data that enables distributed tracing and powers the [application map](../app/app-map.md), [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools.
+## Sample app
-JavaScript correlation is turned off by default in order to minimize the telemetry we send by default. To enable correlation, reference [JavaScript client-side correlation documentation](./javascript.md#enable-distributed-tracing).
+### [React](#tab/react)
-#### Route tracking
+Check out the [Application Insights React demo](https://github.com/microsoft/applicationinsights-react-js/tree/main/sample/applicationinsights-react-sample).
-The Angular Plugin automatically tracks route changes and collects other Angular specific telemetry.
+### [React Native](#tab/reactnative)
-> [!NOTE]
-> `enableAutoRouteTracking` should be set to `false` if it set to true then when the route changes duplicate PageViews may be sent.
+Currently unavailable.
-#### PageView
+### [Angular](#tab/angular)
-If a custom `PageView` duration isn't provided, `PageView` duration defaults to a value of 0.
-
-> [!TIP]
-> If you're adding the Click Analytics plug-in, see [Use the Click Analytics plug-in](./javascript-feature-extensions.md#use-the-plug-in) to continue with the setup process.
+Check out the [Application Insights Angular demo](https://github.com/microsoft/applicationinsights-angularplugin-js/tree/main/sample/applicationinsights-angularplugin-sample).
## Next steps -- To learn more about the JavaScript SDK, see the [Application Insights JavaScript SDK documentation](javascript.md).-- To learn about the Kusto Query Language and querying data in Log Analytics, see the [Log query overview](../../azure-monitor/logs/log-query-overview.md).
+- [Confirm data is flowing](javascript-sdk.md#5-confirm-data-is-flowing).
azure-monitor Javascript Sdk Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk-configuration.md
+
+ Title: Microsoft Azure Monitor Application Insights JavaScript SDK configuration
+description: Microsoft Azure Monitor Application Insights JavaScript SDK configuration.
+ Last updated : 02/28/2023
+ms.devlang: javascript
++++
+# Microsoft Azure Monitor Application Insights JavaScript SDK configuration
+
+The Azure Application Insights JavaScript SDK provides configuration for tracking, monitoring, and debugging your web applications.
+
+> [!div class="checklist"]
+> - [SDK configuration](#sdk-configuration)
+> - [Cookie configuration and management](#cookies)
+> - [Source map un-minify support](#source-map)
+> - [Tree shaking optimized code](#tree-shaking)
+
+## SDK configuration
+
+These configuration fields are optional and default to false unless otherwise stated.
+
+| Name | Type | Default | Description |
+||||-|
+| accountId | string | null | An optional account ID, if your app groups users into accounts. No spaces, commas, semicolons, equals, or vertical bars |
+| sessionRenewalMs | numeric | 1800000 | A session is logged if the user is inactive for this amount of time in milliseconds. Default is 30 minutes |
+| sessionExpirationMs | numeric | 86400000 | A session is logged if it has continued for this amount of time in milliseconds. Default is 24 hours |
+| maxBatchSizeInBytes | numeric | 10000 | Max size of telemetry batch. If a batch exceeds this limit, it's immediately sent and a new batch is started |
+| maxBatchInterval | numeric | 15000 | How long to batch telemetry for before sending (milliseconds) |
+| disableExceptionTracking | boolean | false | If true, exceptions aren't autocollected. Default is false. |
+| disableTelemetry | boolean | false | If true, telemetry isn't collected or sent. Default is false. |
+| enableDebug | boolean | false | If true, **internal** debugging data is thrown as an exception **instead** of being logged, regardless of SDK logging settings. Default is false. <br>***Note:*** Enabling this setting results in dropped telemetry whenever an internal error occurs. It can be useful for quickly identifying issues with your configuration or usage of the SDK. If you don't want to lose telemetry while debugging, consider using `loggingLevelConsole` or `loggingLevelTelemetry` instead of `enableDebug`. |
+| loggingLevelConsole | numeric | 0 | Logs **internal** Application Insights errors to console. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) |
+| loggingLevelTelemetry | numeric | 1 | Sends **internal** Application Insights errors as telemetry. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) |
+| diagnosticLogInterval | numeric | 10000 | (internal) Polling interval (in ms) for internal logging queue |
+| samplingPercentage | numeric | 100 | Percentage of events that is sent. Default is 100, meaning all events are sent. Set it if you wish to preserve your data cap for large-scale applications. |
+| autoTrackPageVisitTime | boolean | false | If true, on a pageview, the _previous_ instrumented page's view time is tracked and sent as telemetry and a new timer is started for the current pageview. It's sent as a custom metric named `PageVisitTime` in `milliseconds` and is calculated via the Date [now()](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Date/now) function (if available) and falls back to (new Date()).[getTime()](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Date/getTime) if now() is unavailable (IE8 or less). Default is false. |
+| disableAjaxTracking | boolean | false | If true, Ajax calls aren't autocollected. Default is false. |
+| disableFetchTracking | boolean | false | The default setting for `disableFetchTracking` is `false`, meaning it's enabled. However, in versions prior to 2.8.10, it was disabled by default. When set to `true`, Fetch requests aren't automatically collected. The default setting changed from `true` to `false` in version 2.8.0. |
+| excludeRequestFromAutoTrackingPatterns | string[] \| RegExp[] | undefined | Provide a way to exclude specific routes from automatic tracking for XMLHttpRequest or Fetch requests. If defined, auto tracking is turned off for any Ajax or fetch request whose URL matches one of the regex patterns. Default is undefined. |
+| addRequestContext | (requestContext: IRequestionContext) => {[key: string]: any} | undefined | Provide a way to enrich dependency logs with context at the beginning of an API call. Default is undefined. You need to check if `xhr` exists if you configure `xhr`-related context. You need to check if `fetch request` and `fetch response` exist if you configure `fetch`-related context. Otherwise you may not get the data you need. |
+| overridePageViewDuration | boolean | false | If true, default behavior of trackPageView is changed to record end of page view duration interval when trackPageView is called. If false and no custom duration is provided to trackPageView, the page view performance is calculated using the navigation timing API. Default is false. |
+| maxAjaxCallsPerView | numeric | 500 | Default 500 - controls how many Ajax calls are monitored per page view. Set to -1 to monitor all (unlimited) Ajax calls on the page. |
+| disableDataLossAnalysis | boolean | true | If false, internal telemetry sender buffers are checked at startup for items not yet sent. |
+| disableCorrelationHeaders | boolean | false | If false, the SDK adds two headers ('Request-Id' and 'Request-Context') to all dependency requests to correlate them with corresponding requests on the server side. Default is false. |
+| correlationHeaderExcludedDomains | string[] | undefined | Disable correlation headers for specific domains |
+| correlationHeaderExcludePatterns | regex[] | undefined | Disable correlation headers using regular expressions |
+| correlationHeaderDomains | string[] | undefined | Enable correlation headers for specific domains |
+| disableFlushOnBeforeUnload | boolean | false | Default false. If true, flush method isn't called when onBeforeUnload event triggers |
+| enableSessionStorageBuffer | boolean | true | Default true. If true, the buffer with all unsent telemetry is stored in session storage. The buffer is restored on page load |
+| cookieCfg | [ICookieCfgConfig](#cookies)<br>[Optional]<br>(Since 2.6.0) | undefined | Defaults to cookie usage enabled see [ICookieCfgConfig](#cookies) settings for full defaults. |
+| disableCookiesUsage | alias for [`cookieCfg.enabled`](#cookies)<br>[Optional] | false | Default false. A boolean that indicates whether to disable the use of cookies by the SDK. If true, the SDK doesn't store or read any data from cookies.<br>(Since v2.6.0) If `cookieCfg.enabled` is defined it takes precedence. Cookie usage can be re-enabled after initialization via the core.getCookieMgr().setEnabled(true). |
+| cookieDomain | alias for [`cookieCfg.domain`](#cookies)<br>[Optional] | null | Custom cookie domain. It's helpful if you want to share Application Insights cookies across subdomains.<br>(Since v2.6.0) If `cookieCfg.domain` is defined it takes precedence over this value. |
+| cookiePath | alias for [`cookieCfg.path`](#cookies)<br>[Optional]<br>(Since 2.6.0) | null | Custom cookie path. It's helpful if you want to share Application Insights cookies behind an application gateway.<br>If `cookieCfg.path` is defined, it takes precedence. |
+| isRetryDisabled | boolean | false | Default false. If false, retry on 206 (partial success), 408 (timeout), 429 (too many requests), 500 (internal server error), 503 (service unavailable), and 0 (offline, only if detected) |
+| isStorageUseDisabled | boolean | false | If true, the SDK doesn't store or read any data from local and session storage. Default is false. |
+| isBeaconApiDisabled | boolean | true | If false, the SDK sends all telemetry using the [Beacon API](https://www.w3.org/TR/beacon) |
+| disableXhr | boolean | false | Don't use XMLHttpRequest or XDomainRequest (for IE < 9); by default, attempt to use fetch() or sendBeacon instead. If no other transport is available, XMLHttpRequest is used. |
+| onunloadDisableBeacon | boolean | false | Default false. When the tab is closed, the SDK sends all remaining telemetry using the [Beacon API](https://www.w3.org/TR/beacon). |
+| onunloadDisableFetch | boolean | false | If fetch keepalive is supported, don't use it for sending events during unload; it may still fall back to fetch() without keepalive. |
+| sdkExtension | string | null | Sets the sdk extension name. Only alphabetic characters are allowed. The extension name is added as a prefix to the 'ai.internal.sdkVersion' tag (for example, 'ext_javascript:2.0.0'). Default is null. |
+| isBrowserLinkTrackingEnabled | boolean | false | Default is false. If true, the SDK tracks all [Browser Link](/aspnet/core/client-side/using-browserlink) requests. |
+| appId | string | null | AppId is used for the correlation between AJAX dependencies happening on the client-side with the server-side requests. When Beacon API is enabled, it can't be used automatically, but can be set manually in the configuration. Default is null |
+| enableCorsCorrelation | boolean | false | If true, the SDK adds two headers ('Request-Id' and 'Request-Context') to all CORS requests to correlate outgoing AJAX dependencies with corresponding requests on the server side. Default is false |
+| namePrefix | string | undefined | An optional value used as a name postfix for the localStorage and session cookie names.
+| sessionCookiePostfix | string | undefined | An optional value used as a name postfix for the session cookie name. If undefined, namePrefix is used as the name postfix for the session cookie name.
+| userCookiePostfix | string | undefined | An optional value used as a name postfix for the user cookie name. If undefined, no postfix is added to the user cookie name.
+| enableAutoRouteTracking | boolean | false | Automatically track route changes in Single Page Applications (SPA). If true, each route change sends a new Pageview to Application Insights. Hash route changes (`example.com/foo#bar`) are also recorded as new page views.
+| enableRequestHeaderTracking | boolean | false | If true, AJAX & Fetch request headers are tracked. Default is false. If ignoreHeaders isn't configured, Authorization and X-API-Key headers aren't logged.
+| enableResponseHeaderTracking | boolean | false | If true, AJAX & Fetch response headers are tracked. Default is false. If ignoreHeaders isn't configured, the WWW-Authenticate header isn't logged.
+| ignoreHeaders | string[] | ["Authorization", "X-API-Key", "WWW-Authenticate"] | AJAX & Fetch request and response headers to be ignored in log data. To override or discard the default, add an array with all headers to be excluded or an empty array to the configuration.
+| enableAjaxErrorStatusText | boolean | false | Default false. If true, include response error data text boolean in dependency event on failed AJAX requests. |
+| enableAjaxPerfTracking | boolean | false | Default false. Flag to enable looking up and including extra browser window.performance timings in the reported Ajax (XHR and fetch) reported metrics.
+| maxAjaxPerfLookupAttempts | numeric | 3 | Defaults to 3. The maximum number of times to look for the window.performance timings (if available). Retries are needed because not all browsers populate window.performance before reporting the end of the XHR request. For fetch requests, the timing is added after the request completes.
+| ajaxPerfLookupDelay | numeric | 25 | Defaults to 25 ms. The amount of time to wait before reattempting to find the window.performance timings for an Ajax request. Time is in milliseconds and is passed directly to setTimeout().
+| distributedTracingMode | numeric or `DistributedTracingModes` | `DistributedTracingModes.AI_AND_W3C` | Sets the distributed tracing mode. If AI_AND_W3C mode or W3C mode is set, W3C trace context headers (traceparent/tracestate) are generated and included in all outgoing requests. AI_AND_W3C is provided for back-compatibility with any legacy Application Insights instrumented services.
+| enableUnhandledPromiseRejectionTracking | boolean | false | If true, unhandled promise rejections are autocollected as a JavaScript error. When disableExceptionTracking is true (don't track exceptions), the config value is ignored and unhandled promise rejections aren't reported.
+| disableInstrumentationKeyValidation | boolean | false | If true, instrumentation key validation check is bypassed. Default value is false.
+| enablePerfMgr | boolean | false | [Optional] When enabled (true) it creates local perfEvents for code that has been instrumented to emit perfEvents (via the doPerf() helper). It can be used to identify performance issues within the SDK based on your usage or optionally within your own instrumented code.
+| perfEvtsSendAll | boolean | false | [Optional] When _enablePerfMgr_ is enabled and the [IPerfManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfManager.ts) fires a [INotificationManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/INotificationManager.ts).perfEvent() this flag determines whether an event is fired (and sent to all listeners) for all events (true) or only for 'parent' events (false &lt;default&gt;).<br />A parent [IPerfEvent](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfEvent.ts) is an event where no other IPerfEvent is still running at the point of the event being created and its _parent_ property isn't null or undefined. Since v2.5.7
+| createPerfMgr | (core: IAppInsightsCore, notification
+| idLength | numeric | 22 | [Optional] Identifies the default length used to generate new random session and user IDs. Defaults to 22; the previous default value was 5 (v2.5.8 or earlier). If you need to keep the previous maximum length, set the value to 5.
+| customHeaders | `[{header: string, value: string}]` | undefined | [Optional] The ability for the user to provide extra headers when using a custom endpoint. customHeaders aren't added at browser shutdown when the beacon sender is used, and adding custom headers isn't supported on IE9 or earlier.
+| convertUndefined | `any` | undefined | [Optional] Provide user an option to convert undefined field to user defined value.
+| eventsLimitInMem | number | 10000 | [Optional] The number of events that can be kept in memory before the SDK starts to drop events when not using Session Storage (the default).
+| disableIkeyDeprecationMessage | boolean | true | [Optional] Disable instrumentation Key deprecation error message. If true, error messages are NOT sent.
+
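A minimal sketch showing a few of these fields passed at initialization (the values are illustrative only):

```javascript
// Pass optional configuration fields alongside the connection string at initialization.
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({
  config: {
    connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
    samplingPercentage: 50,          // send half of all events
    enableAutoRouteTracking: true,   // track SPA route changes as page views
    disableFetchTracking: false,     // keep Fetch dependency collection on
    maxBatchInterval: 15000          // batch telemetry for up to 15 seconds
  }
});
appInsights.loadAppInsights();
```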
+## Cookies
+
+The Azure Application Insights JavaScript SDK provides instance-based cookie management that allows you to control the use of cookies.
+
+You can control cookies by enabling or disabling them, setting custom domains and paths, and customizing the functions for managing cookies.
+
+### Cookie configuration
+
+ICookieMgrConfig is a cookie configuration for instance-based cookie management added in 2.6.0. The options provided allow you to enable or disable the use of cookies by the SDK. You can also set custom cookie domains and paths and customize the functions for fetching, setting, and deleting cookies.
+
+The ICookieMgrConfig options are defined in the following table.
+
+| Name | Type | Default | Description |
+||||-|
+| enabled | boolean | true | The current instance of the SDK uses this boolean to indicate whether the use of cookies is enabled. If false, the instance of the SDK initialized by this configuration doesn't store or read any data from cookies. |
+| domain | string | null | Custom cookie domain. It's helpful if you want to share Application Insights cookies across subdomains. If not provided uses the value from root `cookieDomain` value. |
+| path | string | / | Specifies the path to use for the cookie, if not provided it uses any value from the root `cookiePath` value. |
+| ignoreCookies | string[] | undefined | Specify the cookie name(s) to be ignored, it causes any matching cookie name to never be read or written. They may still be explicitly purged or deleted. You don't need to repeat the name in the `blockedCookies` configuration. (since v2.8.8)
+| blockedCookies | string[] | undefined | Specify the cookie name(s) to never write. It prevents creating or updating any cookie name, but they can still be read unless also included in the ignoreCookies. They may still be purged or deleted explicitly. If not provided, it defaults to the same list in ignoreCookies. (Since v2.8.8)
+| getCookie | `(name: string) => string` | null | Function to fetch the named cookie value, if not provided it uses the internal cookie parsing / caching. |
+| setCookie | `(name: string, value: string) => void` | null | Function to set the named cookie with the specified value, only called when adding or updating a cookie. |
+| delCookie | `(name: string, value: string) => void` | null | Function to delete the named cookie with the specified value, separated from setCookie to avoid the need to parse the value to determine whether the cookie is being added or removed. If not provided it uses the internal cookie parsing / caching. |
+
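A hedged sketch of passing `cookieCfg` at initialization (the domain and cookie name are placeholders):

```javascript
// Instance-based cookie configuration using the fields described above.
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({
  config: {
    connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
    cookieCfg: {
      enabled: true,
      domain: 'example.com',           // placeholder: share cookies across subdomains
      path: '/',
      ignoreCookies: ['sample-cookie'] // placeholder: never read or write this cookie
    }
  }
});
appInsights.loadAppInsights();
```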
+### Cookie management
+
+Starting from version 2.6.0, the Azure Application Insights JavaScript SDK provides instance-based cookie management that can be disabled and re-enabled after initialization.
+
+If you disabled cookies during initialization using the `disableCookiesUsage` or `cookieCfg.enabled` configurations, you can re-enable them using the `setEnabled` function of the ICookieMgr object.
+
+The instance-based cookie management replaces the previous CoreUtils global functions of `disableCookies()`, `setCookie()`, `getCookie()`, and `deleteCookie()`.
+
+To take advantage of the tree-shaking enhancements introduced in version 2.6.0, we recommend that you no longer use the global functions.
+
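For example, using the functions named above:

```javascript
// Re-enable cookie usage on an SDK instance that was initialized with cookies disabled.
appInsights.getCookieMgr().setEnabled(true);

// ...and check the current state.
const cookiesEnabled = appInsights.getCookieMgr().isEnabled();
```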
+## Source map
+
+Source map support helps you debug minified JavaScript code with the ability to unminify the minified callstack of your exception telemetry.
+
+> [!div class="checklist"]
+> - Compatible with all current integrations on the **Exception Details** panel
+> - Supports all current and future JavaScript SDKs, including Node.JS, without the need for an SDK upgrade
+
+To view the unminified callstack, select an Exception Telemetry item in the Azure portal, find the source maps that match the call stack, and drag and drop the source maps onto the call stack in the Azure portal. The source map must have the same name as the source file of a stack frame, but with a `map` extension.
++
+## Tree shaking
+
+Tree shaking eliminates unused code from the final JavaScript bundle.
+
+To take advantage of tree shaking, import only the necessary components of the SDK into your code. By doing so, unused code isn't included in the final bundle, reducing its size and improving performance.
+
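As a hedged sketch, importing individual helpers (names taken from the replacement table below) rather than the deprecated static classes lets bundlers drop the rest:

```javascript
// Import only the helpers you need so unused SDK code can be tree-shaken away.
import { isString, arrForEach } from '@microsoft/applicationinsights-core-js';

arrForEach(['one', 2, 'three'], (item) => {
  if (isString(item)) {
    console.log(item); // logs only the string entries
  }
});
```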
+### Tree shaking enhancements and recommendations
+
+In version 2.6.0, we deprecated and removed the internal usage of these static helper classes to improve support for tree-shaking algorithms. It lets npm packages safely drop unused code.
+
+- `CoreUtils`
+- `EventHelper`
+- `Util`
+- `UrlHelper`
+- `DateTimeUtils`
+- `ConnectionStringParser`
+
+ The functions are now exported as top-level roots from the modules, making it easier to refactor your code for better tree-shaking.
+
+The static classes were changed to const objects that reference the new exported functions, and future changes are planned to further refactor the references.
+
+### Tree shaking deprecated functions and replacements
+
+| Existing | Replacement |
+|-|-|
+| **CoreUtils** | **@microsoft/applicationinsights-core-js** |
+| CoreUtils._canUseCookies | None. Don't use it, because it causes all of `CoreUtils` to be included in your final code.<br> Refactor your cookie handling to use `appInsights.getCookieMgr().setEnabled(true/false)` to set the value and `appInsights.getCookieMgr().isEnabled()` to check the value. |
+| CoreUtils.isTypeof | isTypeof |
+| CoreUtils.isUndefined | isUndefined |
+| CoreUtils.isNullOrUndefined | isNullOrUndefined |
+| CoreUtils.hasOwnProperty | hasOwnProperty |
+| CoreUtils.isFunction | isFunction |
+| CoreUtils.isObject | isObject |
+| CoreUtils.isDate | isDate |
+| CoreUtils.isArray | isArray |
+| CoreUtils.isError | isError |
+| CoreUtils.isString | isString |
+| CoreUtils.isNumber | isNumber |
+| CoreUtils.isBoolean | isBoolean |
+| CoreUtils.toISOString | toISOString or getISOString |
+| CoreUtils.arrForEach | arrForEach |
+| CoreUtils.arrIndexOf | arrIndexOf |
+| CoreUtils.arrMap | arrMap |
+| CoreUtils.arrReduce | arrReduce |
+| CoreUtils.strTrim | strTrim |
+| CoreUtils.objCreate | objCreateFn |
+| CoreUtils.objKeys | objKeys |
+| CoreUtils.objDefineAccessors | objDefineAccessors |
+| CoreUtils.addEventHandler | addEventHandler |
+| CoreUtils.dateNow | dateNow |
+| CoreUtils.isIE | isIE |
+| CoreUtils.disableCookies | disableCookies<br>Referencing either causes CoreUtils to be referenced for backward compatibility.<br> Refactor your cookie handling to use the `appInsights.getCookieMgr().setEnabled(false)` |
+| CoreUtils.newGuid | newGuid |
+| CoreUtils.perfNow | perfNow |
+| CoreUtils.newId | newId |
+| CoreUtils.randomValue | randomValue |
+| CoreUtils.random32 | random32 |
+| CoreUtils.mwcRandomSeed | mwcRandomSeed |
+| CoreUtils.mwcRandom32 | mwcRandom32 |
+| CoreUtils.generateW3CId | generateW3CId |
+| **EventHelper** | **@microsoft/applicationinsights-core-js** |
+| EventHelper.Attach | attachEvent |
+| EventHelper.AttachEvent | attachEvent |
+| EventHelper.Detach | detachEvent |
+| EventHelper.DetachEvent | detachEvent |
+| **Util** | **@microsoft/applicationinsights-common-js** |
+| Util.NotSpecified | strNotSpecified |
+| Util.createDomEvent | createDomEvent |
+| Util.disableStorage | utlDisableStorage |
+| Util.isInternalApplicationInsightsEndpoint | isInternalApplicationInsightsEndpoint |
+| Util.canUseLocalStorage | utlCanUseLocalStorage |
+| Util.getStorage | utlGetLocalStorage |
+| Util.setStorage | utlSetLocalStorage |
+| Util.removeStorage | utlRemoveStorage |
+| Util.canUseSessionStorage | utlCanUseSessionStorage |
+| Util.getSessionStorageKeys | utlGetSessionStorageKeys |
+| Util.getSessionStorage | utlGetSessionStorage |
+| Util.setSessionStorage | utlSetSessionStorage |
+| Util.removeSessionStorage | utlRemoveSessionStorage |
+| Util.disableCookies | disableCookies<br>Referencing either causes CoreUtils to be referenced for backward compatibility.<br> Refactor your cookie handling to use the `appInsights.getCookieMgr().setEnabled(false)` |
+| Util.canUseCookies | canUseCookies<br>Referencing either causes CoreUtils to be referenced for backward compatibility.<br>Refactor your cookie handling to use the `appInsights.getCookieMgr().isEnabled()` |
+| Util.disallowsSameSiteNone | uaDisallowsSameSiteNone |
+| Util.setCookie | coreSetCookie<br>Referencing causes CoreUtils to be referenced for backward compatibility.<br>Refactor your cookie handling to use the `appInsights.getCookieMgr().set(name: string, value: string)` |
+| Util.stringToBoolOrDefault | stringToBoolOrDefault |
+| Util.getCookie | coreGetCookie<br>Referencing causes CoreUtils to be referenced for backward compatibility.<br>Refactor your cookie handling to use the `appInsights.getCookieMgr().get(name: string)` |
+| Util.deleteCookie | coreDeleteCookie<br>Referencing causes CoreUtils to be referenced for backward compatibility.<br>Refactor your cookie handling to use the `appInsights.getCookieMgr().del(name: string, path?: string)` |
+| Util.trim | strTrim |
+| Util.newId | newId |
+| Util.random32 | No replacement. Refactor your code to use the core `random32(true)`. |
+| Util.generateW3CId | generateW3CId |
+| Util.isArray | isArray |
+| Util.isError | isError |
+| Util.isDate | isDate |
+| Util.toISOStringForIE8 | toISOString |
+| Util.getIEVersion | getIEVersion |
+| Util.msToTimeSpan | msToTimeSpan |
+| Util.isCrossOriginError | isCrossOriginError |
+| Util.dump | dumpObj |
+| Util.getExceptionName | getExceptionName |
+| Util.addEventHandler | attachEvent |
+| Util.IsBeaconApiSupported | isBeaconApiSupported |
+| Util.getExtension | getExtensionByName
+| **UrlHelper** | **@microsoft/applicationinsights-common-js** |
+| UrlHelper.parseUrl | urlParseUrl |
+| UrlHelper.getAbsoluteUrl | urlGetAbsoluteUrl |
+| UrlHelper.getPathName | urlGetPathName |
+| UrlHelper.getCompeteUrl | urlGetCompleteUrl |
+| UrlHelper.parseHost | urlParseHost |
+| UrlHelper.parseFullHost | urlParseFullHost
+| **DateTimeUtils** | **@microsoft/applicationinsights-common-js** |
+| DateTimeUtils.Now | dateTimeUtilsNow |
+| DateTimeUtils.GetDuration | dateTimeUtilsDuration |
+| **ConnectionStringParser** | **@microsoft/applicationinsights-common-js** |
+| ConnectionStringParser.parse | parseConnectionString |
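+
+As a hedged migration sketch based on the table above (assuming `appInsights` is the initialized SDK instance and the cookie name is illustrative), calls to the deprecated `Util` cookie helpers map onto the instance-based cookie manager:
+
+```javascript
+// Previously: Util.setCookie(...), Util.getCookie(...), Util.deleteCookie(...)
+const cookieMgr = appInsights.getCookieMgr();
+
+cookieMgr.set("sample_cookie", "sample_value"); // create or update a cookie
+const value = cookieMgr.get("sample_cookie");   // read it back
+cookieMgr.del("sample_cookie");                 // delete it
+```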
+
+## Troubleshooting
+
+See the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/app-insights/javascript-sdk-troubleshooting).
+
+## Next steps
+
+* [Track usage](usage-overview.md)
+* [Custom events and metrics](api-custom-events-metrics.md)
+* [Build-measure-learn](usage-overview.md)
azure-monitor Javascript Sdk Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk-upgrade.md
Upgrading to the new version of the Application Insights JavaScript SDK can prov
If you're using the current application insights PRODUCTION SDK (1.0.20) and want to see if the new SDK works in runtime, update the URL depending on your current SDK loading scenario. -- Download via CDN scenario: Update the SDK Loader Script that you currently use to point to the following URL:
+- Download via CDN scenario: Update the JavaScript (Web) SDK Loader Script that you currently use to point to the following URL:
``` "https://js.monitor.azure.com/scripts/b/ai.2.min.js" ```
azure-monitor Javascript Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk.md
The Microsoft Azure Monitor Application Insights JavaScript SDK allows you to mo
Follow the steps in this section to instrument your application with the Application Insights JavaScript SDK. > [!TIP]
-> Good news! We're making it even easier to enable JavaScript. Check out where [SDK Loader Script injection by configuration is available](./codeless-overview.md#sdk-loader-script-injection-by-configuration)!
+> Good news! We're making it even easier to enable JavaScript. Check out where [JavaScript (Web) SDK Loader Script injection by configuration is available](./codeless-overview.md#javascript-web-sdk-loader-script-injection-by-configuration)!
> [!NOTE]
-> If you have a React, React Native, or Angular application, you can [optionally add these plug-ins after you follow the steps to get started](#5-optional-advanced-sdk-configuration).
+> If you have a React, React Native, or Angular application, you can [optionally add these plug-ins after you follow the steps to get started](#4-optional-add-advanced-sdk-configuration).
### 1. Add the JavaScript code
Two methods are available to add the code to enable Application Insights via the
| Method | When would I use this method? | |:-|:|
-| SDK Loader Script | For most customers, we recommend the SDK Loader Script because you never have to update the SDK and you get the latest updates automatically. Also, you have control over which pages you add the Application Insights JavaScript SDK to. |
+| JavaScript (Web) SDK Loader Script | For most customers, we recommend the JavaScript (Web) SDK Loader Script because you never have to update the SDK and you get the latest updates automatically. Also, you have control over which pages you add the Application Insights JavaScript SDK to. |
| npm package | You want to bring the SDK into your code and enable IntelliSense. This option is only needed for developers who require more custom events and configuration. |
-#### [SDK Loader Script](#tab/sdkloaderscript)
+#### [JavaScript (Web) SDK Loader Script](#tab/javascriptwebsdkloaderscript)
-1. Paste the SDK Loader Script at the top of each page for which you want to enable Application Insights.
+1. Paste the JavaScript (Web) SDK Loader Script at the top of each page for which you want to enable Application Insights.
> [!NOTE] > Preferably, you should add it as the first script in your <head> section so that it can monitor any potential issues with all of your dependencies.
Two methods are available to add the code to enable Application Insights via the
</script> ```
-1. (Optional) Add or update optional [SDK Loader Script configuration](#sdk-loader-script-configuration), depending on if you need to optimize the loading of your web page or resolve loading errors.
+1. (Optional) Add or update optional [JavaScript (Web) SDK Loader Script configuration](#javascript-web-sdk-loader-script-configuration), depending on if you need to optimize the loading of your web page or resolve loading errors.
- :::image type="content" source="media/javascript-sdk/sdk-loader-script-configuration.png" alt-text="Screenshot of the SDK Loader Script. The parameters for configuring the SDK Loader Script are highlighted." lightbox="media/javascript-sdk/sdk-loader-script-configuration.png":::
+ :::image type="content" source="media/javascript-sdk/sdk-loader-script-configuration.png" alt-text="Screenshot of the JavaScript (Web) SDK Loader Script. The parameters for configuring the JavaScript (Web) SDK Loader Script are highlighted." lightbox="media/javascript-sdk/sdk-loader-script-configuration.png":::
-#### SDK Loader Script configuration
+#### JavaScript (Web) SDK Loader Script configuration
| Name | Type | Required? | Description |||--| | src | string | Required | The full URL for where to load the SDK from. This value is used for the "src" attribute of a dynamically added &lt;script /&gt; tag. You can use the public CDN location or your own privately hosted one.
- | name | string | Optional | The global name for the initialized SDK. Use this setting if you need to initialize two different SDKs at the same time.<br><br>The default value is appInsights, so ```window.appInsights``` is a reference to the initialized instance.<br><br> Note: If you assign a name value or if a previous instance has been assigned to the global name appInsightsSDK, the SDK initialization code requires it to be in the global namespace as `window.appInsightsSDK=<name value>` to ensure the correct SDK Loader Script skeleton, and proxy methods are initialized and updated.
- | ld | number in ms | Optional | Defines the load delay to wait before attempting to load the SDK. Use this setting when the HTML page is failing to load because the SDK Loader Script is loading at the wrong time.<br><br>The default value is 0ms after timeout. If you use a negative value, the script tag is immediately added to the <head> region of the page and blocks the page load event until the script is loaded or fails.
- | useXhr | boolean | Optional | This setting is used only for reporting SDK load failures. For example, this setting is useful when the SDK Loader Script is preventing the HTML page from loading, causing fetch() to be unavailable.<br><br>Reporting first attempts to use fetch() if available and then fallback to XHR. Set this setting to `true` to bypass the fetch check. This setting is only required if your application is being used in an environment where fetch would fail to send the failure events such as if the SDK Loader Script isn't loading successfully.
+ | name | string | Optional | The global name for the initialized SDK. Use this setting if you need to initialize two different SDKs at the same time.<br><br>The default value is appInsights, so ```window.appInsights``` is a reference to the initialized instance.<br><br> Note: If you assign a name value or if a previous instance has been assigned to the global name appInsightsSDK, the SDK initialization code requires it to be in the global namespace as `window.appInsightsSDK=<name value>` to ensure the correct JavaScript (Web) SDK Loader Script skeleton, and proxy methods are initialized and updated.
+ | ld | number in ms | Optional | Defines the load delay to wait before attempting to load the SDK. Use this setting when the HTML page is failing to load because the JavaScript (Web) SDK Loader Script is loading at the wrong time.<br><br>The default value is 0ms after timeout. If you use a negative value, the script tag is immediately added to the <head> region of the page and blocks the page load event until the script is loaded or fails.
+ | useXhr | boolean | Optional | This setting is used only for reporting SDK load failures. For example, this setting is useful when the JavaScript (Web) SDK Loader Script is preventing the HTML page from loading, causing fetch() to be unavailable.<br><br>Reporting first attempts to use fetch() if available and then fallback to XHR. Set this setting to `true` to bypass the fetch check. This setting is only required if your application is being used in an environment where fetch would fail to send the failure events such as if the JavaScript (Web) SDK Loader Script isn't loading successfully.
| crossOrigin | string | Optional | By including this setting, the script tag added to download the SDK includes the crossOrigin attribute with this string value. Use this setting when you need to provide support for CORS. When not defined (the default), no crossOrigin attribute is added. Recommended values are not defined (the default), "", or "anonymous". For all valid values, see the [cross origin HTML attribute](https://developer.mozilla.org/docs/Web/HTML/Attributes/crossorigin) documentation.
- | onInit | function(aiSdk) { ... } | Optional | This callback function is called after the main SDK script has been successfully loaded and initialized from the CDN (based on the src value). This callback function is useful when you need to insert a telemetry initializer. It's passed one argument, which is a reference to the SDK instance that's being called for and is also called before the first initial page view. If the SDK has already been loaded and initialized, this callback is still called. NOTE: During the processing of the sdk.queue array, this callback is called. You CANNOT add any more items to the queue because they're ignored and dropped. (Added as part of SDK Loader Script version 5--the sv:"5" value within the script). |
+ | onInit | function(aiSdk) { ... } | Optional | This callback function is called after the main SDK script has been successfully loaded and initialized from the CDN (based on the src value). This callback function is useful when you need to insert a telemetry initializer. It's passed one argument, which is a reference to the SDK instance that's being called for and is also called before the first initial page view. If the SDK has already been loaded and initialized, this callback is still called. NOTE: During the processing of the sdk.queue array, this callback is called. You CANNOT add any more items to the queue because they're ignored and dropped. (Added as part of JavaScript (Web) SDK Loader Script version 5--the sv:"5" value within the script). |
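+
+Putting several of these options together, the configuration object passed to the loader might look like the following sketch. The values are illustrative, and `cfg` carries the SDK configuration (such as the connection string) that's applied to the initialized instance.
+
+```javascript
+// Illustrative loader configuration; replace the connection string with your own.
+const loaderConfig = {
+  src: "https://js.monitor.azure.com/scripts/b/ai.2.min.js",
+  name: "appInsights",      // global name of the initialized instance
+  ld: 0,                    // load delay in ms; a negative value blocks the page load event
+  useXhr: true,             // bypass the fetch() check when reporting SDK load failures
+  crossOrigin: "anonymous", // adds crossorigin="anonymous" to the injected script tag
+  onInit: (sdk) => {
+    // Called after the SDK loads; a good place to add a telemetry initializer.
+  },
+  cfg: {
+    connectionString: "YOUR_CONNECTION_STRING"
+  }
+};
+```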
#### [npm package](#tab/npmpackage)
To paste the connection string in your environment, follow these steps:
### 3. (Optional) Add SDK configuration
-The optional [SDK configuration](./javascript-sdk-advanced.md#sdk-configuration) is passed to the Application Insights JavaScript SDK during initialization.
+The optional [SDK configuration](./javascript-sdk-configuration.md#sdk-configuration) is passed to the Application Insights JavaScript SDK during initialization.
To add SDK configuration, add each configuration option directly under `connectionString`. For example: :::image type="content" source="media/javascript-sdk/example-sdk-configuration.png" alt-text="Screenshot of JavaScript code with SDK configuration options added and highlighted." lightbox="media/javascript-sdk/example-sdk-configuration.png":::
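For the npm setup, a minimal sketch looks like the following. The connection string is a placeholder, and the option names other than `connectionString` are examples of optional settings rather than required ones.

```javascript
import { ApplicationInsights } from "@microsoft/applicationinsights-web";

const appInsights = new ApplicationInsights({
  config: {
    connectionString: "YOUR_CONNECTION_STRING",
    // Optional SDK configuration goes directly under connectionString:
    disableCookiesUsage: false,
    enableAutoRouteTracking: true // example option, useful for single-page applications
  }
});
appInsights.loadAppInsights();
appInsights.trackPageView();
```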
-### 4. Confirm data is flowing
+### 4. (Optional) Add advanced SDK configuration
+
+If you want to use the extra features provided by plugins for specific frameworks and optionally enable the Click Analytics plug-in, see:
+
+- [React plugin](javascript-framework-extensions.md?tabs=react)
+- [React native plugin](javascript-framework-extensions.md?tabs=reactnative)
+- [Angular plugin](javascript-framework-extensions.md?tabs=reactnative)
+
+> [!TIP]
+> We collect page views by default. But if you also want to collect clicks by default, consider adding the Click Analytics Auto-Collection plug-in. If you're adding a framework extension, you have the option to add Click Analytics along with it. If you're not adding a framework extension, [add the Click Analytics plug-in](./javascript-feature-extensions.md) on its own.
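+
+The following is a hedged sketch of enabling the Click Analytics Auto-Collection plug-in with the npm setup; the package name, the `autoCapture` option, and the connection string are assumptions drawn from the plug-in's published package rather than from this article.
+
+```javascript
+import { ApplicationInsights } from "@microsoft/applicationinsights-web";
+import { ClickAnalyticsPlugin } from "@microsoft/applicationinsights-clickanalytics-js";
+
+const clickPlugin = new ClickAnalyticsPlugin();
+
+const appInsights = new ApplicationInsights({
+  config: {
+    connectionString: "YOUR_CONNECTION_STRING", // placeholder
+    extensions: [clickPlugin],
+    extensionConfig: {
+      [clickPlugin.identifier]: { autoCapture: true } // collect click events automatically
+    }
+  }
+});
+appInsights.loadAppInsights();
+```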
+
+### 5. Confirm data is flowing
1. Go to your Application Insights resource that you've enabled the SDK for. 1. In the Application Insights resource menu on the left, under **Investigate**, select the **Transaction search** pane. 1. Open the **Event types** dropdown menu and select **Select all** to clear the checkboxes in the menu.
-1. From the **Event types** dropdown menu, select **Page View**.
+1. From the **Event types** dropdown menu, select:
+
+ - **Page View** for Azure Monitor Application Insights Real User Monitoring
+ - **Custom Event** for the Click Analytics Auto-Collection plug-in.
It might take a few minutes for data to show up in the portal.
To add SDK configuration, add each configuration option directly under `connecti
If you can't run the application or you aren't getting data as expected, see the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/app-insights/javascript-sdk-troubleshooting).
-### 5. (Optional) Advanced SDK configuration
-
-If you want to use the extra features provided by plugins for specific frameworks, see:
--- [React plugin](javascript-framework-extensions.md?tabs=react)-- [React native plugin](javascript-framework-extensions.md?tabs=reactnative)-- [Angular plugin](javascript-framework-extensions.md?tabs=reactnative)-
-> [!TIP]
-> We collect page views by default. But if you want to also collect clicks by default, consider adding the [Click Analytics plug-in](javascript-feature-extensions.md).
- ## Support - If you're having trouble with enabling Application Insights, see the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/app-insights/javascript-sdk-troubleshooting).
If you want to use the extra features provided by plugins for specific framework
## Next steps
-* [Track usage](usage-overview.md)
+* [Explore Application Insights usage experiences](usage-overview.md)
* [Track page views](api-custom-events-metrics.md#page-views)
-* [Custom events and metrics](api-custom-events-metrics.md)
-* [JavaScript telemetry initializers](api-filtering-sampling.md#javascript-telemetry-initializers)
-* [Build-measure-learn](usage-overview.md)
-* [JavaScript SDK advanced topics](javascript-sdk-advanced.md)
+* [Track custom events and metrics](api-custom-events-metrics.md)
+* [Insert a JavaScript telemetry initializer](api-filtering-sampling.md#javascript-telemetry-initializers)
+* [Add JavaScript SDK configuration](javascript-sdk-configuration.md)
* See the detailed [release notes](https://github.com/microsoft/ApplicationInsights-JS/releases) on GitHub for updates and bug fixes.
+* [Query data in Log Analytics](../../azure-monitor/logs/log-query-overview.md).
azure-monitor Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md
appInsights.start();
### Automatic web Instrumentation[Preview]
- Automatic web Instrumentation can be enabled for node server via SDK Loader Script injection by configuration.
+ Automatic web instrumentation can be enabled for Node.js servers via JavaScript (Web) SDK Loader Script injection by configuration.
```javascript let appInsights = require("applicationinsights");
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md
For more information, see [Connection string configuration](./java-standalone-co
JavaScript doesn't support the use of environment variables. You have two options: -- To use the SDK Loader Script, see [SDK Loader Script](./javascript-sdk.md?tabs=sdkloaderscript#get-started).
+- To use the JavaScript (Web) SDK Loader Script, see [JavaScript (Web) SDK Loader Script](./javascript-sdk.md?tabs=javascriptwebsdkloaderscript#get-started).
- Manual setup: ```javascript import { ApplicationInsights } from '@microsoft/applicationinsights-web'
azure-monitor Telemetry Channels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/telemetry-channels.md
Last updated 05/14/2019 ms.devlang: csharp -+ # Telemetry channels in Application Insights
azure-monitor Tutorial Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-asp-net-core.md
ms.devlang: csharp Last updated 04/24/2023-+ # Enable Application Insights for ASP.NET Core applications
azure-monitor Tutorial Asp Net Custom Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-asp-net-custom-metrics.md
Last updated 08/22/2022 ms.devlang: csharp -+ # Capture Application Insights custom metrics with .NET and .NET Core
azure-monitor Usage Heart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-heart.md
You only have to interact with the main workbook, **HEART Analytics - All Sectio
To validate that data is flowing as expected to light up the metrics accurately, select the **Development Requirements** tab. > [!IMPORTANT]
-> Unless you [set the authenticated user context](./javascript-feature-extensions.md#set-the-authenticated-user-context), you must select **Anonymous Users** from the **ConversionScope** dropdown to see telemetry data.
+> Unless you [set the authenticated user context](./javascript-feature-extensions.md#3-optional-set-the-authenticated-user-context), you must select **Anonymous Users** from the **ConversionScope** dropdown to see telemetry data.
:::image type="content" source="media/usage-overview/development-requirements-1.png" alt-text="Screenshot that shows the Development Requirements tab of the HEART Analytics - All Sections workbook.":::
azure-monitor Usage Segmentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-segmentation.md
Three of the **Usage** panes use the same tool to slice and dice telemetry from
* **Sessions tool**: How many sessions of user activity have included certain pages and features of your app? A session is reset after half an hour of user inactivity, or after 24 hours of continuous use. * **Events tool**: How often are certain pages and features of your app used? A page view is counted when a browser loads a page from your app, provided you've [instrumented it](./javascript.md).
- A custom event represents one occurrence of something happening in your app. It's often a user interaction like a button selection or the completion of a task. You insert code in your app to [generate custom events](./api-custom-events-metrics.md#trackevent) or use the [Click Analytics](javascript-feature-extensions.md#feature-extensions-for-the-application-insights-javascript-sdk-click-analytics) extension.
+ A custom event represents one occurrence of something happening in your app. It's often a user interaction like a button selection or the completion of a task. You insert code in your app to [generate custom events](./api-custom-events-metrics.md#trackevent) or use the [Click Analytics](javascript-feature-extensions.md) extension.
> [!NOTE] > For information on an alternatives to using [anonymous IDs](./data-model-complete.md#anonymous-user-id) and ensuring an accurate count, see the documentation for [authenticated IDs](./data-model-complete.md#authenticated-user-id).
azure-monitor Work Item Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/work-item-integration.md
Title: Work Item Integration - Application Insights
description: Learn how to create work items in GitHub or Azure DevOps with Application Insights data embedded in them. Last updated 06/27/2021-+ # Work Item Integration
azure-monitor Container Insights Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-authentication.md
Last updated 06/13/2023
-# Authentication for Azure Monitor - Container Insights
+# Authentication for Container Insights
Container Insights now defaults to managed identity authentication. This secure and simplified authentication model has a monitoring agent that uses the cluster's managed identity to send data to Azure Monitor. It replaces the existing legacy certificate-based local authentication and removes the requirement of adding a Monitoring Metrics Publisher role to the cluster. ## How to enable
-Click on the relevant tab for instructions to enable Managed identity authentication on existing clusters.
+Click on the relevant tab for instructions to enable Managed identity authentication on your clusters.
## [Azure portal](#tab/portal-azure-monitor)
-No action is needed when creating a cluster from the Portal. However, it isn't possible to switch to Managed Identity authentication from the Azure portal. Customers must use command line tools to migrate. See other tabs for migration instructions and templates.
+When creating a new cluster from the Azure portal: On the **Integrations** tab, first check the box for *Enable Container Logs*, then check the box for *Use managed identity*.
+
+For existing clusters, you can switch to managed identity authentication from the **Monitor settings** panel: navigate to your AKS cluster, scroll through the menu on the left to the **Monitoring** section, and select **Insights**. On the **Insights** tab, select the **Monitor settings** option and check the box for *Use managed identity*.
+
+If you don't see the *Use managed identity* option, you're using an SPN cluster. In that case, you must use command-line tools to migrate. See the other tabs for migration instructions and templates.
## [Azure CLI](#tab/cli)
azure-monitor Container Insights Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-onboard.md
If you have a Kubernetes cluster with Windows nodes, review and configure the ne
## Authentication
-Container insights defaults to managed identity authentication. This secure and simplified authentication model has a monitoring agent that uses the cluster's managed identity to send data to Azure Monitor. It replaces the existing legacy certificate-based local authentication and removes the requirement of adding a *Monitoring Metrics Publisher* role to the cluster.
+Container insights defaults to managed identity authentication. This secure and simplified authentication model has a monitoring agent that uses the cluster's managed identity to send data to Azure Monitor. It replaces the existing legacy certificate-based local authentication and removes the requirement of adding a *Monitoring Metrics Publisher* role to the cluster. For more information, see [Authentication for Container Insights](container-insights-authentication.md).
## Agent
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
Essentials|[Use private endpoints for Managed Prometheus and Azure Monitor works
Essentials|[Private Link for data ingestion for Managed Prometheus and Azure Monitor workspace](essentials/private-link-data-ingestion.md)|New article: Private Link for data ingestion for Managed Prometheus and Azure Monitor workspace| Essentials|[Collect Prometheus metrics from an Arc-enabled Kubernetes cluster (preview)](essentials/prometheus-metrics-from-arc-enabled-cluster.md)|New article: Collect Prometheus metrics from an Arc-enabled Kubernetes cluster (preview)| Essentials|[How to migrate from the metrics API to the getBatch API](essentials/migrate-to-batch-api.md)|Migrate from the metrics API to the getBatch API|
-Essentials|[Azure Active Directory authorization proxy](essentials/prometheus-authorization-proxy.md)|Aad auth proxy|
+Essentials|[Azure Active Directory authorization proxy](essentials/prometheus-authorization-proxy.md)|Microsoft Azure Active Directory (Azure AD) auth proxy|
Essentials|[Integrate KEDA with your Azure Kubernetes Service cluster](essentials/integrate-keda.md)|New Article: Integrate KEDA with AKS and Prometheus| Essentials|[General Availability: Azure Monitor managed service for Prometheus](https://techcommunity.microsoft.com/t5/azure-observability-blog/general-availability-azure-monitor-managed-service-for/ba-p/3817973)|General Availability: Azure Monitor managed service for Prometheus | Insights|[Monitor and analyze runtime behavior with Code Optimizations (Preview)](insights/code-optimizations.md)|New doc for public preview release of Code Optimizations feature.|
Logs|[Set daily cap on Log Analytics workspace](logs/daily-cap.md)|Starting Sept
|Subservice| Article | Description | |||| Agents|[Azure Monitor Agent Performance Benchmark](agents/azure-monitor-agent-performance.md)|Added performance benchmark data for the scenario of using Azure Monitor Agent to forward data to a gateway.|
-Agents|[Troubleshoot issues with the Log Analytics agent for Windows](agents/agent-windows-troubleshoot.md)|Log Analytics will no longer accept connections from MMA versions that use old root CAs (MMA versions prior to the Winter 2020 release for Log Analytics agent, and prior to SCOM 2019 UR3 for SCOM). |
+Agents|[Troubleshoot issues with the Log Analytics agent for Windows](agents/agent-windows-troubleshoot.md)|Log Analytics will no longer accept connections from MMA versions that use old root CAs (MMA versions prior to the Winter 2020 release for Log Analytics agent, and prior to Microsoft System Center Operations Manager 2019 UR3 for Operations Manager). |
Agents|[Azure Monitor Agent overview](agents/agents-overview.md)|Log Analytics agent supports Windows Server 2022. | Alerts|[Common alert schema](alerts/alerts-common-schema.md)|Updated alert payload common schema to include custom properties.| Alerts|[Create and manage action groups in the Azure portal](alerts/action-groups.md)|Clarified use of basic auth in webhook.|
Containers|[Manage the Container insights agent](containers/container-insights-m
Essentials|[Azure Monitor Metrics overview](essentials/data-platform-metrics.md)|New Batch Metrics API that allows multiple resource requests and reducing throttling found in the non-batch version. | General|[Cost optimization in Azure Monitor](best-practices-cost.md)|Rewritten to match organization of Well Architected Framework service guides| General|[Best practices for Azure Monitor Logs](best-practices-logs.md)|New article with consolidated list of best practices for Logs organized by WAF pillar.|
-General|[Migrate from System Center Operations Manager (SCOM) to Azure Monitor](azure-monitor-operations-manager.md)|Migrate from SCOM to Azure Monitor|
+General|[Migrate from Operations Manager to Azure Monitor](azure-monitor-operations-manager.md)|Migrate from Operations Manager to Azure Monitor|
Logs|[Application Insights API Access with Microsoft Azure Active Directory (Azure AD) Authentication](app/app-insights-azure-ad-api.md)|New article that explains how to authenticate and access the Azure Monitor Application Insights APIs using Azure AD.| Logs|[Tutorial: Replace custom fields in Log Analytics workspace with KQL-based custom columns](logs/custom-fields-migrate.md)|Guidance for migrate legacy custom fields to KQL-based custom columns using transformations.| Logs|[Monitor Log Analytics workspace health](logs/log-analytics-workspace-health.md)|View Log Analytics workspace health metrics, including query success metrics, directly from the Log Analytics workspace screen in the Azure portal.|
Alerts|[Connect ServiceNow to Azure Monitor](alerts/itsmc-secure-webhook-connect
Application-Insights|[Application Insights SDK support guidance](app/sdk-support-guidance.md)|Release notes are now available for each SDK.| Application-Insights|[What is distributed tracing and telemetry correlation?](app/distributed-tracing-telemetry-correlation.md)|Merged our documents related to distributed tracing and telemetry correlation.| Application-Insights|[Application Insights availability tests](app/availability-overview.md)|Separated and called out the two Classic Tests, which are older versions of availability tests.|
-Application-Insights|[Microsoft Azure Monitor Application Insights JavaScript SDK advanced topics](app/javascript-sdk-advanced.md)|JavaScript SDK advanced topics now include npm setup, cookie configuration and management, source map un-minify support, and tree shaking optimized code.|
+Application-Insights|[Microsoft Azure Monitor Application Insights JavaScript SDK configuration](app/javascript-sdk-configuration.md)|JavaScript SDK configuration now includes npm setup, cookie configuration and management, source map un-minify support, and tree shaking optimized code.|
Application-Insights|[Microsoft Azure Monitor Application Insights JavaScript SDK](app/javascript-sdk.md)|Our introductory article to the JavaScript SDK now provides only the fast and easy code-snippet method of getting started.| Application-Insights|[Geolocation and IP address handling](app/ip-collection.md)|Updated code samples for .NET 6/7.| Application-Insights|[Application Insights logging with .NET](app/ilogger.md)|Updated code samples for .NET 6/7.|
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
Configuring UDRs on the source VM subnets with the address prefix of delegated s
> [!NOTE] > To access an Azure NetApp Files volume from an on-premises network via a VNet gateway (ExpressRoute or VPN) and firewall, configure the route table assigned to the VNet gateway to include the `/32` IPv4 address of the Azure NetApp Files volume listed and point to the firewall as the next hop. Using an aggregate address space that includes the Azure NetApp Files volume IP address will not forward the Azure NetApp Files traffic to the firewall.
+>[!NOTE]
+>If you want to configure a UDR in the VM VNet to control the routing of packets destined for a VNet-peered Azure NetApp Files standard volume, the UDR prefix must be more specific than or equal to the delegated subnet size of the Azure NetApp Files volume. If the UDR prefix is less specific than the delegated subnet size, it isn't effective.
+ ## Azure native environments The following diagram illustrates an Azure-native environment:
azure-resource-manager Bicep Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-cli.md
Title: Bicep CLI commands and overview
description: Describes the commands that you can use in the Bicep CLI. These commands include building Azure Resource Manager templates from Bicep. Previously updated : 04/18/2023 Last updated : 06/15/2023 # Bicep CLI commands
-This article describes the commands you can use in the Bicep CLI. You must have the [Bicep CLI installed](./install.md) to run the commands.
+This article describes the commands you can use in the Bicep CLI. You can run these commands either through Azure CLI or by invoking the Bicep CLI directly. Each method requires a distinct installation process. For more information, see [Install Azure CLI](./install.md#azure-cli) and [Install Azure PowerShell](./install.md#azure-powershell).
-You can either run the Bicep CLI commands through Azure CLI or by calling Bicep directly. This article shows how to run the commands in Azure CLI. When running through Azure CLI, you start the commands with `az`. If you're not using Azure CLI, run the commands without `az` at the start of the command. For example, `az bicep build` becomes `bicep build`.
+This article shows how to run the commands in Azure CLI. When running through Azure CLI, you start the commands with `az`. If you're not using Azure CLI, run the commands without `az` at the start of the command. For example, `az bicep build` becomes `bicep build`, and `az bicep version` becomes `bicep --version`.
+
+> [!NOTE]
+> The commands related to the Bicep parameters files are exclusively supported within the Bicep CLI and are not currently available in Azure CLI. These commands include: `build-params`, `decompile-params`, and `generate-params`.
## build
When you get this error, either run the `build` command without the `--no-restor
To use the `--no-restore` switch, you must have Bicep CLI version **0.4.1008 or later**.
+## build-params
+
+The `build-params` command builds a _.bicepparam_ file into a JSON parameters file.
+
+```azurecli
+bicep build-params params.bicepparam
+```
+
+This command converts a _params.bicepparam_ parameters file into a _params.json_ JSON parameters file.
+ ## decompile The `decompile` command converts ARM template JSON to a Bicep file.
The command creates a file named _main.bicep_ in the same directory as _main.jso
For more information about using this command, see [Decompiling ARM template JSON to Bicep](decompile.md).
+## decompile-params
+
+The `decompile-params` command decompiles a JSON parameters file to a _.bicepparam_ parameters file.
+
+```azurecli
+bicep decompile-params azuredeploy.parameters.json --bicep-file ./dir/main.bicep
+```
+
+This command decompiles an _azuredeploy.parameters.json_ parameters file into an _azuredeploy.parameters.bicepparam_ file. `--bicep-file` specifies the path to the Bicep file (relative to the _.bicepparam_ file) that's referenced in the `using` declaration.
+ ## generate-params
-The `generate-params` command builds *.parameters.json* file from the given bicep file, updates if there is an existing parameters.json file.
+The `generate-params` command builds a parameters file from the given Bicep file, or updates an existing parameters file.
```azurecli
-az bicep generate-params --file main.bicep
+bicep generate-params main.bicep --output-format bicepparam --include-params all
+```
+
+The command creates a Bicep parameters file named _main.bicepparam_. The parameter file contains all parameters in the Bicep file, whether configured with default values or not.
+
+```azurecli
+bicep generate-params --file main.bicep --outfile main.parameters.json
``` The command creates a parameter file named _main.parameters.json_. The parameter file only contains the parameters without default values configured in the Bicep file.
To use the restore command, you must have Bicep CLI version **0.4.1008 or later*
To manually restore the external modules for a file, use:
-```powershell
-bicep restore <bicep-file> [--force]
+```azurecli
+az bicep restore <bicep-file> [--force]
``` The Bicep file you provide is the file you wish to deploy. It must contain a module that links to a registry. For example, you can restore the following file:
Bicep CLI version 0.4.1008 (223b8d227a)
To call this command directly through the Bicep CLI, use:
-```powershell
+```Bicep CLI
bicep --version ```
azure-resource-manager Bicep Functions Parameters File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-parameters-file.md
+
+ Title: Bicep functions - parameters file
+description: Describes the functions used in the Bicep parameters files.
++ Last updated : 06/05/2023++
+# Parameters file function for Bicep
+
+Bicep provides a function called `readEnvironmentVariable()` that allows you to retrieve values from environment variables. It also offers the flexibility to set a default value if the environment variable doesn't exist. This function can only be used in `.bicepparam` files. For more information, see [Bicep parameters file](./parameter-files.md).
+
+## readEnvironmentVariable()
+
+`readEnvironmentVariable(variableName, [defaultValue])`
+
+Returns the value of the environment variable, or returns the default value if the environment variable doesn't exist. Variable loading occurs during compilation, not at runtime.
+
+Namespace: [sys](bicep-functions.md#namespaces-for-functions).
+
+### Parameters
+
+| Parameter | Required | Type | Description |
+|: |: |: |: |
+| variableName | Yes | string | The name of the variable. |
+| defaultValue | No | string | A default string value to be used if the environment variable does not exist. |
+
+### Return value
+
+The string value of the environment variable or a default value.
+
+### Examples
+
+The following examples show how to retrieve the values of environment variables.
+
+```bicep
+use './main.bicep'
+
+param adminPassword = readEnvironmentVariable('admin_password')
+param boolfromEnvironmentVariables = bool(readEnvironmentVariable('boolVariableName','false'))
+```
+
+## Next steps
+
+For more information about Bicep parameters file, see [Parameters file](./parameter-files.md).
azure-resource-manager Bicep Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions.md
Title: Bicep functions
description: Describes the functions to use in a Bicep file to retrieve values, work with strings and numerics, and retrieve deployment information. Previously updated : 05/11/2023 Last updated : 06/05/2023 # Bicep functions
The following functions are available for working with lambda expressions. All o
* [reduce](bicep-functions-lambda.md#reduce) * [sort](bicep-functions-lambda.md#sort) - ## Logical functions The following function is available for working with logical conditions. This function is in the `sys` namespace.
The following functions are available for working with objects. All of these fun
* [length](./bicep-functions-object.md#length) * [union](./bicep-functions-object.md#union)
+## Parameters file functions
+
+The [readEnvironmentVariable function](./bicep-functions-parameters-file.md) is available in Bicep to read environment variable values. This function is in the `sys` namespace.
+ ## Resource functions The following functions are available for getting resource values. Most of these functions are in the `az` namespace. The list functions and the getSecret function are called directly on the resource type, so they don't have a namespace qualifier.
azure-resource-manager Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-cli.md
Title: Deploy resources with Azure CLI and Bicep files | Microsoft Docs description: Use Azure Resource Manager and Azure CLI to deploy resources to Azure. The resources are defined in a Bicep file.-- Previously updated : 07/08/2022 Last updated : 06/13/2023
If you're deploying to a resource group that doesn't exist, create the resource
az group create --name ExampleGroup --location "Central US" ```
-To deploy a local Bicep file, use the `--template-file` parameter in the deployment command. The following example also shows how to set a parameter value.
+To deploy a local Bicep file, use the `--template-file` switch in the deployment command. The following example also shows how to set a parameter value.
```azurecli-interactive az deployment group create \
Currently, Azure CLI doesn't support deploying remote Bicep files. You can use [
## Parameters
-To pass parameter values, you can use either inline parameters or a parameter file.
+To pass parameter values, you can use either inline parameters or a parameters file.
### Inline parameters
az deployment group create \
However, if you're using Azure CLI with Windows Command Prompt (CMD) or PowerShell, set the variable to a JSON string. Escape the quotation marks: `$params = '{ \"prefix\": {\"value\":\"start\"}, \"suffix\": {\"value\":\"end\"} }'`.
-### Parameter files
+Parameters are evaluated sequentially, so if a value is assigned multiple times, only the last assignment is used. To ensure proper parameter assignment, provide your parameters file first and then selectively override specific parameters by using the _KEY=VALUE_ syntax. Note that if you're supplying a `.bicepparam` parameters file, you can use the `--parameters` argument only once.
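+
+For example, the following sketch (the resource names and the `prefix` parameter are illustrative) supplies a JSON parameters file and then overrides a single parameter inline:
+
+```azurecli-interactive
+az deployment group create \
+  --name ExampleDeployment \
+  --resource-group ExampleGroup \
+  --template-file storage.bicep \
+  --parameters storage.parameters.json \
+  --parameters prefix=start
+```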
-Rather than passing parameters as inline values in your script, you may find it easier to use a JSON file that contains the parameter values. The parameter file must be a local file. External parameter files aren't supported with Azure CLI. Bicep file uses JSON parameter files.
+### Parameters files
-For more information about the parameter file, see [Create Resource Manager parameter file](./parameter-files.md).
+Rather than passing parameters as inline values in your script, you may find it easier to use a `.bicepparam` file or a JSON file that contains the parameter values. The parameters file must be a local file. External parameters files aren't supported with Azure CLI.
-To pass a local parameter file, specify the path and file name. The following example shows a parameter file named _storage.parameters.json_. The file is in the same directory where the command is run.
+For more information about the parameters file, see [Create Resource Manager parameters file](./parameter-files.md).
+
+To pass a local Bicep parameters file, specify the path and file name. The following example shows a parameters file named _storage.bicepparam_. The file is in the same directory where the command is run.
+
+```azurecli-interactive
+az deployment group create \
+ --name ExampleDeployment \
+ --resource-group ExampleGroup \
+ --template-file storage.bicep \
+ --parameters storage.bicepparam
+```
+
+The following example shows a parameters file named _storage.parameters.json_. The file is in the same directory where the command is run.
```azurecli-interactive az deployment group create \
azure-resource-manager Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-powershell.md
Title: Deploy resources with PowerShell and Bicep description: Use Azure Resource Manager and Azure PowerShell to deploy resources to Azure. The resources are defined in a Bicep file.-- Previously updated : 08/05/2022 Last updated : 06/05/2023 # Deploy resources with Bicep and Azure PowerShell
If you're deploying to a resource group that doesn't exist, create the resource
New-AzResourceGroup -Name ExampleGroup -Location "Central US" ```
-To deploy a local Bicep file, use the `-TemplateFile` parameter in the deployment command.
+To deploy a local Bicep file, use the `-TemplateFile` switch in the deployment command.
```azurepowershell New-AzResourceGroupDeployment `
Currently, Azure PowerShell doesn't support deploying remote Bicep files. Use [B
## Parameters
-To pass parameter values, you can use either inline parameters or a parameter file.
+To pass parameter values, you can use either inline parameters or a parameters file.
### Inline parameters
New-AzResourceGroupDeployment -ResourceGroupName testgroup `
-exampleArray $subnetArray ```
-### Parameter files
+### Parameters files
-Rather than passing parameters as inline values in your script, you may find it easier to use a JSON file that contains the parameter values. The parameter file can be a local file or an external file with an accessible URI. Bicep file uses JSON parameter files.
+Rather than passing parameters as inline values in your script, you may find it easier to use a `.bicepparam` file or a JSON file that contains the parameter values. The parameters file can be a local file or an external file with an accessible URI.
-For more information about the parameter file, see [Create Resource Manager parameter file](./parameter-files.md).
+For more information about the parameters file, see [Create Resource Manager parameters file](./parameter-files.md).
-To pass a local parameter file, use the `TemplateParameterFile` parameter:
+To pass a local parameters file, use the `TemplateParameterFile` parameter with a `.bicepparam` file:
+
+```powershell
+New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup `
+ -TemplateFile c:\BicepFiles\storage.bicep `
+ -TemplateParameterFile c:\BicepFiles\storage.bicepparam
+```
+
+To pass a local parameters file, use the `TemplateParameterFile` parameter with a JSON parameters file:
```powershell New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup `
New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName Example
-TemplateParameterFile c:\BicepFiles\storage.parameters.json ```
-To pass an external parameter file, use the `TemplateParameterUri` parameter:
+To pass an external parameters file, use the `TemplateParameterUri` parameter:
```powershell New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup `
New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName Example
-TemplateParameterUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.parameters.json ```
+The `TemplateParameterUri` parameter doesn't support `.bicepparam` files; it only supports JSON parameters files.
+ ## Preview changes Before deploying your Bicep file, you can preview the changes the Bicep file will make to your environment. Use the [what-if operation](./deploy-what-if.md) to verify that the Bicep file makes the changes that you expect. What-if also validates the Bicep file for errors.
azure-resource-manager Key Vault Parameter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/key-vault-parameter.md
Title: Key Vault secret with Bicep description: Shows how to pass a secret from a key vault as a parameter during Bicep deployment.-- Previously updated : 06/18/2021 Last updated : 06/15/2023 # Use Azure Key Vault to pass secure parameter value during Bicep deployment
-Instead of putting a secure value (like a password) directly in your Bicep file or parameter file, you can retrieve the value from an [Azure Key Vault](../../key-vault/general/overview.md) during a deployment. When a [module](./modules.md) expects a `string` parameter with `secure:true` modifier, you can use the [getSecret function](bicep-functions-resource.md#getsecret) to obtain a key vault secret. The value is never exposed because you only reference its key vault ID.
+Instead of putting a secure value (like a password) directly in your Bicep file or parameters file, you can retrieve the value from an [Azure Key Vault](../../key-vault/general/overview.md) during a deployment. When a [module](./modules.md) expects a `string` parameter with `secure:true` modifier, you can use the [getSecret function](bicep-functions-resource.md#getsecret) to obtain a key vault secret. The value is never exposed because you only reference its key vault ID.
> [!IMPORTANT]
-> This article focuses on how to pass a sensitive value as a template parameter. When the secret is passed as a parameter, the key vault can exist in a different subscription than the resource group you're deploying to.
+> This article focuses on how to pass a sensitive value as a template parameter. When the secret is passed as a parameter, the key vault can exist in a different subscription than the resource group you're deploying to.
> > This article doesn't cover how to set a virtual machine property to a certificate's URL in a key vault. For a quickstart template of that scenario, see [Install a certificate from Azure Key Vault on a Virtual Machine](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/vm-winrm-keyvault-windows).
module sql './sql.bicep' = {
} ```
-## Reference secrets in parameter file
+## Reference secrets in parameters file
-If you don't want to use a module, you can reference the key vault directly in the parameter file. The following image shows how the parameter file references the secret and passes that value to the Bicep file.
+If you don't want to use a module, you can reference the key vault directly in the parameters file. The following image shows how the parameters file references the secret and passes that value to the Bicep file.
![Resource Manager key vault integration diagram](./media/key-vault-parameter/statickeyvault.png)
+> [!NOTE]
+> Currently, you can only reference a key vault in JSON parameters files. You can't reference a key vault in a Bicep parameters file.
+ The following Bicep file deploys a SQL server that includes an administrator password. The password parameter is set to a secure string. But the Bicep doesn't specify where that value comes from. ```bicep
resource sqlServer 'Microsoft.Sql/servers@2020-11-01-preview' = {
-Now, create a parameter file for the preceding Bicep file. In the parameter file, specify a parameter that matches the name of the parameter in the Bicep file. For the parameter value, reference the secret from the key vault. You reference the secret by passing the resource identifier of the key vault and the name of the secret:
+Now, create a parameters file for the preceding Bicep file. In the parameters file, specify a parameter that matches the name of the parameter in the Bicep file. For the parameter value, reference the secret from the key vault. You reference the secret by passing the resource identifier of the key vault and the name of the secret:
-In the following parameter file, the key vault secret must already exist, and you provide a static value for its resource ID.
+In the following parameters file, the key vault secret must already exist, and you provide a static value for its resource ID.
```json {
If you need to use a version of the secret other than the current version, inclu
"secretVersion": "cd91b2b7e10e492ebb870a6ee0591b68" ```
-Deploy the template and pass in the parameter file:
+Deploy the template and pass in the parameters file:
# [Azure CLI](#tab/azure-cli)
az group create --name SqlGroup --location westus2
az deployment group create \ --resource-group SqlGroup \ --template-file <Bicep-file> \
- --parameters <parameter-file>
+ --parameters <parameters-file>
``` # [PowerShell](#tab/azure-powershell)
New-AzResourceGroup -Name $resourceGroupName -Location $location
New-AzResourceGroupDeployment ` -ResourceGroupName $resourceGroupName ` -TemplateFile <Bicep-file> `
- -TemplateParameterFile <parameter-file>
+ -TemplateParameterFile <parameters-file>
```
azure-resource-manager Parameter Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/parameter-files.md
Title: Create parameter file for Bicep
-description: Create parameter file for passing in values during deployment of a Bicep file
--
+ Title: Create parameters files for Bicep deployment
+description: Create parameters file for passing in values during deployment of a Bicep file
Previously updated : 11/14/2022 Last updated : 06/05/2023
-# Create Bicep parameter file
+# Create parameters files for Bicep deployment
-Rather than passing parameters as inline values in your script, you can use a JSON file that contains the parameter values. This article shows how to create a parameter file that you use with a Bicep file.
+Rather than passing parameters as inline values in your script, you can use a Bicep parameters file with the `.bicepparam` file extension or a JSON parameters file that contains the parameter values. This article shows how to create parameters files.
-## Parameter file
+A single Bicep file can have multiple Bicep parameters files associated with it. However, each Bicep parameters file is intended for one particular Bicep file. This relationship is established using the `using` statement within the Bicep parameters file. For more information, see [Bicep parameters file](#parameters-file).
-A parameter file uses the following format:
+You can compile Bicep parameters files into JSON parameters files to deploy with a Bicep file.
+
+## Parameters file
+
+A parameters file uses the following format:
+
+# [Bicep parameters file](#tab/Bicep)
+
+```bicep
+using '<path>/<file-name>.bicep'
+
+param <first-parameter-name> = <first-value>
+param <second-parameter-name> = <second-value>
+```
+
+You can use expressions with the default value. For example:
+
+```bicep
+using 'storageaccount.bicep'
+
+param storageName = toLower('MyStorageAccount')
+param intValue = 2 + 2
+```
+
+You can reference environment variables as parameter values. For example:
+
+```bicep
+using './main.bicep'
+
+param intFromEnvironmentVariables = int(readEnvironmentVariable('intEnvVariableName'))
+```
+
+# [JSON parameters file](#tab/JSON)
```json {
A parameter file uses the following format:
} ```
-It's worth noting that the parameter file saves parameter values as plain text. For security reasons, this approach is not recommended for sensitive values such as passwords. If you must pass a parameter with a sensitive value, keep the value in a key vault. Instead of adding the sensitive value to your parameter file, use the [getSecret function](bicep-functions-resource.md#getsecret) to retrieve it. For more information, see [Use Azure Key Vault to pass secure parameter value during Bicep deployment](key-vault-parameter.md).
++
+It's worth noting that the parameters file saves parameter values as plain text. For security reasons, this approach isn't recommended for sensitive values such as passwords. If you must pass a parameter with a sensitive value, keep the value in a key vault. Instead of adding the sensitive value to your parameters file, use the [getSecret function](bicep-functions-resource.md#getsecret) to retrieve it. For more information, see [Use Azure Key Vault to pass secure parameter value during Bicep deployment](key-vault-parameter.md).
+
+## Parameter type formats
+
+The following example shows the formats of different parameter types: string, integer, boolean, array, and object.
+
+# [Bicep parameters file](#tab/Bicep)
+
+```bicep
+using './main.bicep'
+
+param exampleString = 'test string'
+param exampleInt = 2 + 2
+param exampleBool = true
+param exampleArray = [
+ 'value 1'
+ 'value 2'
+]
+param exampleObject = {
+ property1: 'value 1'
+ property2: 'value 2'
+}
+```
+
+Use Bicep syntax to declare [objects](./data-types.md#objects) and [arrays](./data-types.md#arrays).
+
+# [JSON parameters file](#tab/JSON)
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "exampleString": {
+ "value": "test string"
+ },
+ "exampleInt": {
+ "value": 4
+ },
+ "exampleBool": {
+ "value": true
+ },
+ "exampleArray": {
+ "value": [
+ "value 1",
+ "value 2"
+ ]
+ },
+ "exampleObject": {
+ "value": {
+ "property1": "value1",
+ "property2": "value2"
+ }
+ }
+ }
+}
+```
+++
+## File name
+
+# [Bicep parameters file](#tab/Bicep)
+
+A Bicep parameters file has the `.bicepparam` file extension.
+
+To deploy to different environments, you create more than one parameters file. When you name the parameters files, identify their use, such as development and production. For example, use _main.dev.bicepparam_ and _main.prod.bicepparam_ to deploy resources.
+
+# [JSON parameters file](#tab/JSON)
+
+The general naming convention for the parameters file is to include _parameters_ in the Bicep file name. For example, if your Bicep file is named _azuredeploy.bicep_, your parameters file is named _azuredeploy.parameters.json_. This naming convention helps you see the connection between the Bicep file and the parameters.
+
+To deploy to different environments, you create more than one parameters file. When you name the parameters files, identify their use such as development and production. For example, use _azuredeploy.parameters-dev.json_ and _azuredeploy.parameters-prod.json_ to deploy resources.
++ ## Define parameter values
-To determine how to define the parameter names and values, open your Bicep file. Look at the parameters section of the Bicep file. The following examples show the parameters from a Bicep file.
+To determine how to define the parameter names and values, open your Bicep file. Look at the parameters section of the Bicep file. The following examples show the parameters from a Bicep file called `main.bicep`.
```bicep @maxLength(11)
param storagePrefix string
param storageAccountType string = 'Standard_LRS' ```
-In the parameter file, the first detail to notice is the name of each parameter. The parameter names in your parameter file must match the parameter names in your Bicep file.
+In the parameters file, the first detail to notice is the name of each parameter. The parameter names in your parameters file must match the parameter names in your Bicep file.
+
+# [Bicep parameters file](#tab/Bicep)
+
+```bicep
+using 'main.bicep'
+
+param storagePrefix
+param storageAccountType
+```
+
+The `using` statement ties the Bicep parameters file to a Bicep file.
+
+After you type the keyword `param` in Visual Studio Code, it prompts you with the available parameters and their descriptions from the linked Bicep file:
++
+When hovering over a param name, you can see the parameter data type and description.
++
+# [JSON parameters file](#tab/JSON)
```json {
In the parameter file, the first detail to notice is the name of each parameter.
} ```
-Notice the parameter type. The parameter types in your parameter file must use the same types as your Bicep file. In this example, both parameter types are strings.
++
+Notice the parameter type. The parameter types in your parameters file must use the same types as your Bicep file. In this example, both parameter types are strings.
+
+# [Bicep parameters file](#tab/Bicep)
+
+```bicep
+using 'main.bicep'
+
+param storagePrefix = ''
+param storageAccountType = ''
+```
+
+# [JSON parameters file](#tab/JSON)
```json {
Notice the parameter type. The parameter types in your parameter file must use t
} ```
-Check the Bicep file for parameters with a default value. If a parameter has a default value, you can provide a value in the parameter file but it's not required. The parameter file value overrides the Bicep file's default value.
++
+Check the Bicep file for parameters with a default value. If a parameter has a default value, you can provide a value in the parameters file, but it's not required. The parameters file value overrides the Bicep file's default value.
+
+# [Bicep parameters file](#tab/Bicep)
+
+```bicep
+using 'main.bicep'
+
+param storagePrefix = '' // This value must be provided.
+param storageAccountType = '' // This value is optional. Bicep uses the default value if not provided.
+```
+
+# [JSON parameters file](#tab/JSON)
```json {
Check the Bicep file for parameters with a default value. If a parameter has a d
``` > [!NOTE]
-> For inline comments, you can use either // or /* ... */. In Visual Studio Code, save the parameter files with the **JSONC** file type, otherwise you will get an error message saying "Comments not permitted in JSON".
+> For inline comments, you can use either // or /* ... */. In Visual Studio Code, save the parameters files with the **JSONC** file type, otherwise you will get an error message saying "Comments not permitted in JSON".
++ Check the Bicep file's allowed values and any restrictions, such as maximum length. Those values specify the range of values you can provide for a parameter. In this example, `storagePrefix` can have a maximum of 11 characters and `storageAccountType` must specify an allowed value.
+# [Bicep parameters file](#tab/Bicep)
+
+```bicep
+using 'main.bicep'
+
+param storagePrefix = 'storage'
+param storageAccountType = 'Standard_ZRS'
+```
+
+# [JSON parameters file](#tab/JSON)
+ ```json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
Check the Bicep's allowed values and any restrictions such as maximum length. Th
``` > [!NOTE]
-> Your parameter file can only contain values for parameters that are defined in the Bicep file. If your parameter file contains extra parameters that don't match the Bicep file's parameters, you receive an error.
+> Your parameters file can only contain values for parameters that are defined in the Bicep file. If your parameters file contains extra parameters that don't match the Bicep file's parameters, you receive an error.
-## Parameter type formats
+
-The following example shows the formats of different parameter types: string, integer, boolean, array, and object.
+## Generate parameters file
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "exampleString": {
- "value": "test string"
- },
- "exampleInt": {
- "value": 4
- },
- "exampleBool": {
- "value": true
- },
- "exampleArray": {
- "value": [
- "value 1",
- "value 2"
- ]
- },
- "exampleObject": {
- "value": {
- "property1": "value1",
- "property2": "value2"
- }
- }
- }
-}
-```
+To generate a parameters file, you have two options: either through Visual Studio Code or by using the Bicep CLI. Both methods let you derive the parameters file from a Bicep file. From Visual Studio Code, see [Generate parameters file](./visual-studio-code.md#generate-parameters-file). From the Bicep CLI, see [Generate parameters file](./bicep-cli.md#generate-params).
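For example, with a recent Azure CLI that includes the `az bicep generate-params` command, a sketch like the following derives a parameters file from a Bicep file named `main.bicep` (the file name is illustrative):

```azurecli
# Generate a parameters file (main.parameters.json by default) from main.bicep
az bicep generate-params --file main.bicep
```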
+
+## Build Bicep parameters file
+
+With the Bicep CLI, you can build a Bicep parameters file into a JSON parameters file. For more information, see [Build parameters file](./bicep-cli.md#build-params).
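As a rough sketch, assuming a recent Azure CLI that includes the `az bicep build-params` command and a Bicep parameters file named `main.bicepparam`:

```azurecli
# Compile main.bicepparam into a JSON parameters file (main.parameters.json)
az bicep build-params --file main.bicepparam
```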
-## Deploy Bicep file with parameter file
+## Deploy Bicep file with parameters file
-From Azure CLI, pass a local parameter file using `@` and the parameter file name. For example, `@storage.parameters.json`.
+From Azure CLI, pass a local parameters file using `@` and the parameters file name. For example, `storage.bicepparam` or `@storage.parameters.json`.
```azurecli az deployment group create \ --name ExampleDeployment \ --resource-group ExampleGroup \ --template-file storage.bicep \
- --parameters @storage.parameters.json
+ --parameters @storage.bicepparam
``` For more information, see [Deploy resources with Bicep and Azure CLI](./deploy-cli.md#parameters). To deploy _.bicep_ files you need Azure CLI version 2.20 or higher.
-From Azure PowerShell, pass a local parameter file using the `TemplateParameterFile` parameter.
+From Azure PowerShell, pass a local parameters file using the `TemplateParameterFile` parameter.
```azurepowershell New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup ` -TemplateFile C:\MyTemplates\storage.bicep `
- -TemplateParameterFile C:\MyTemplates\storage.parameters.json
+ -TemplateParameterFile C:\MyTemplates\storage.bicepparam
``` For more information, see [Deploy resources with Bicep and Azure PowerShell](./deploy-powershell.md#parameters). To deploy _.bicep_ files you need Azure PowerShell version 5.6.0 or higher.
-## File name
-
-The general naming convention for the parameter file is to include _parameters_ in the Bicep file name. For example, if your Bicep file is named _azuredeploy.bicep_, your parameter file is named _azuredeploy.parameters.json_. This naming convention helps you see the connection between the Bicep file and the parameters.
-
-To deploy to different environments, you create more than one parameter file. When you name the parameter files, identify their use such as development and production. For example, use _azuredeploy.parameters-dev.json_ and _azuredeploy.parameters-prod.json_ to deploy resources.
- ## Parameter precedence
-You can use inline parameters and a local parameter file in the same deployment operation. For example, you can specify some values in the local parameter file and add other values inline during deployment. If you provide values for a parameter in both the local parameter file and inline, the inline value takes precedence.
+You can use inline parameters and a local parameters file in the same deployment operation. For example, you can specify some values in the local parameters file and add other values inline during deployment. If you provide values for a parameter in both the local parameters file and inline, the inline value takes precedence.
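For example, in the following sketch the inline `storagePrefix` value overrides any `storagePrefix` value supplied by the local parameters file (the file, group, and parameter names are illustrative):

```azurecli
az deployment group create \
  --resource-group ExampleGroup \
  --template-file storage.bicep \
  --parameters @storage.parameters.json \
  --parameters storagePrefix=contoso
```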
-It's possible to use an external parameter file, by providing the URI to the file. When you use an external parameter file, you can't pass other values either inline or from a local file. All inline parameters are ignored. Provide all parameter values in the external file.
+It's possible to use an external parameters file, by providing the URI to the file. When you use an external parameters file, you can't pass other values either inline or from a local file. All inline parameters are ignored. Provide all parameter values in the external file.
## Parameter name conflicts
azure-resource-manager Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/visual-studio-code.md
Title: Create Bicep files by using Visual Studio Code
description: Describes how to create Bicep files by using Visual Studio Code Previously updated : 05/12/2023 Last updated : 06/05/2023 # Create Bicep files by using Visual Studio Code
You can deploy Bicep files directly from Visual Studio Code. Select **Deploy Bic
### Generate parameters file
-This command creates a parameter file in the same folder as the Bicep file. The new parameter file name is `<bicep-file-name>.parameters.json`.
+This command creates a parameter file in the same folder as the Bicep file. You can choose to create a Bicep parameter file or a JSON parameter file. The new Bicep parameter file name is `<bicep-file-name>.bicepparam`, while the new JSON parameter file name is `<bicep-file-name>.parameters.json`.
### Import Kubernetes manifest (Preview)
azure-resource-manager Request Just In Time Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/request-just-in-time-access.md
# Enable and request just-in-time access for Azure Managed Applications
-Consumers of your managed application may be reluctant to grant you permanent access to the managed resource group. As a publisher of a managed application, you might prefer that consumers know exactly when you need to access the managed resources. To give consumers greater control over granting access to managed resources, Azure Managed Applications provides a feature called just-in-time (JIT) access. This feature is currently in preview.
-
+Consumers of your managed application may be reluctant to grant you permanent access to the managed resource group. As a publisher of a managed application, you might prefer that consumers know exactly when you need to access the managed resources. To give consumers greater control over granting access to managed resources, Azure Managed Applications provides a feature called just-in-time (JIT) access.
JIT access enables you to request elevated access to a managed application's resources for troubleshooting or maintenance. You always have read-only access to the resources, but for a specific time period you can have greater access. The work flow for granting access is:
The principal ID of the account requesting JIT access must be explicitly include
## Next steps
-To learn about approving requests for JIT access, see [Approve just-in-time access in Azure Managed Applications](approve-just-in-time-access.md).
+To learn about approving requests for JIT access, see [Approve just-in-time access in Azure Managed Applications](approve-just-in-time-access.md).
backup Backup Azure Enhanced Soft Delete About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-enhanced-soft-delete-about.md
Title: Overview of enhanced soft delete for Azure Backup (preview)
description: This article gives an overview of enhanced soft delete for Azure Backup. Previously updated : 05/15/2023 Last updated : 06/16/2023
The key benefits of enhanced soft delete are:
## Supported regions - Enhanced soft delete is available in all Azure public regions.-- Soft delete of recovery points is currently in preview in West Central US, North Europe, and Australia East. Support in other regions will be added shortly.
+- Soft delete of recovery points is currently in preview in West Central US, Australia East, North Europe, South Central US, Australia Central, Australia Central 2, Canada East, India Central, India South, Japan West, Japan East, Korea Central, Korea South, France South, France Central, Sweden Central, Sweden South, West Europe, UK South, Australia South East, Brazil South, Brazil South East, Canada Central, UK West.
+ ## Supported scenarios - Enhanced soft delete is supported for Recovery Services vaults and Backup vaults. Also, it's supported for new and existing vaults.
This feature helps to retain these recovery points for an additional duration, a
>[!Note] >- Soft delete of recovery points is not supported for log recovery points in SQL and SAP HANA workloads.
->- Thisfeature is currently available in selected Azure regions only. [Learn more](#supported-scenarios).
+>- This feature is currently available in selected Azure regions only. [Learn more](#supported-scenarios).
## Pricing
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md
No. You can access your virtual machine from the Azure portal using your browser
### <a name="native-client"></a>Can I connect to my VM using a native client?
-Yes. You can connect to a VM from your local computer using a native client. See [Connect to a VM using a native client](connect-native-client-windows.md).
+Yes. You can connect to a VM from your local computer using a native client. See [Connect to a VM using a native client](native-client.md).
### <a name="agent"></a>Do I need an agent running in the Azure virtual machine?
bastion Connect Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-ip-address.md
Before you begin these steps, verify that you have the following environment set
## Connect to VM - native client
-You can connect to VMs using a specified IP address with native client via SSH, RDP, or tunnelling. Note that this feature does not support Azure Active Directory authentication or custom port and protocol at the moment. To learn more about configuring native client support, see [Connect to a VM - native client](connect-native-client-windows.md). Use the following commands as examples:
+You can connect to VMs using a specified IP address with native client via SSH, RDP, or tunnelling. Note that this feature does not support Azure Active Directory authentication or custom port and protocol at the moment. To learn more about configuring native client support, see [Configure Bastion native client support](native-client.md). Use the following commands as examples:
**RDP:**
bastion Connect Native Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-native-client-windows.md
- Title: 'Connect to a VM using a native client and Azure Bastion'-
-description: Learn how to connect to a VM from a Windows computer by using Bastion and a native client.
--- Previously updated : 05/18/2023---
-# Connect to a VM using a native client
-
-This article helps you configure your Bastion deployment, and then connect to a VM in the VNet using the native client (SSH or RDP) on your local computer. The native client feature lets you connect to your target VMs via Bastion using Azure CLI, and expands your sign-in options to include local SSH key pair and Azure Active Directory (Azure AD). Additionally with this feature, you can now also upload or download files, depending on the connection type and client.
--
-Your capabilities on the VM when connecting via native client are dependent on what is enabled on the native client. Controlling access to features such as file transfer via Bastion isn't supported.
-
-> [!NOTE]
-> This configuration requires the Standard SKU tier for Azure Bastion.
-
-After you deploy this feature, there are two different sets of connection instructions.
-
-* [Connect to a VM from the native client on a Windows local computer](#connect). This lets you do the following:
-
- * Connect using SSH or RDP.
- * [Upload and download files](vm-upload-download-native.md#rdp) over RDP.
- * If you want to connect using SSH and need to upload files to your target VM, use the **az network bastion tunnel** command instead.
-
-* [Connect to a VM using the **az network bastion tunnel** command](#connect-tunnel). This lets you do the following:
-
- * Use native clients on *non*-Windows local computers (example: a Linux PC).
- * Use the native client of your choice. (This includes the Windows native client.)
- * Connect using SSH or RDP. (The bastion tunnel doesn't relay web servers or hosts.)
- * Set up concurrent VM sessions with Bastion.
- * [Upload files](vm-upload-download-native.md#tunnel-command) to your target VM from your local computer. File download from the target VM to the local client is currently not supported for this command.
-
-**Limitations**
-
-* Signing in using an SSH private key stored in Azure Key Vault isn't supported with this feature. Before signing in to your Linux VM using an SSH key pair, download your private key to a file on your local machine.
-* This feature isn't supported on Cloud Shell.
-
-## <a name="prereq"></a>Prerequisites
-
-Before you begin, verify that you have the following prerequisites:
-
-* The latest version of the CLI commands (version 2.32 or later) is installed. For information about installing the CLI commands, see [Install the Azure CLI](/cli/azure/install-azure-cli) and [Get Started with Azure CLI](/cli/azure/get-started-with-azure-cli).
-* An Azure virtual network.
-* A virtual machine in the virtual network.
-* The VM's Resource ID. The Resource ID can be easily located in the Azure portal. Go to the Overview page for your VM and select the *JSON View* link to open the Resource JSON. Copy the Resource ID at the top of the page to your clipboard to use later when connecting to your VM.
-* If you plan to sign in to your virtual machine using your Azure AD credentials, make sure your virtual machine is set up using one of the following methods:
- * [Enable Azure AD sign-in for a Windows VM](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md) or [Linux VM](../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md).
- * [Configure your Windows VM to be Azure AD-joined](../active-directory/devices/concept-azure-ad-join.md).
- * [Configure your Windows VM to be hybrid Azure AD-joined](../active-directory/devices/concept-azure-ad-join-hybrid.md).
-
-## <a name="secure "></a>Secure your native client connection
-
-If you want to further secure your native client connection, you can limit port access by only providing access to port 22/3389. To restrict port access, you must deploy the following NSG rules on your AzureBastionSubnet to allow access to select ports and deny access from any other ports.
--
-## <a name="configure"></a>Configure the native client support feature
-
-You can configure this feature by either modifying an existing Bastion deployment, or you can deploy Bastion with the feature configuration already specified.
-
-### To modify an existing Bastion deployment
-
-If you've already deployed Bastion to your VNet, modify the following configuration settings:
-
-1. Navigate to the **Configuration** page for your Bastion resource. Verify that the SKU Tier is **Standard**. If it isn't, select **Standard**.
-1. Select the box for **Native Client Support**, then apply your changes.
-
- :::image type="content" source="./media/connect-native-client-windows/update-host.png" alt-text="Screenshot that shows settings for updating an existing host with Native Client Support box selected." lightbox="./media/connect-native-client-windows/update-host.png":::
-
-### To deploy Bastion with the native client feature
-
-If you haven't already deployed Bastion to your VNet, you can deploy with the native client feature specified by deploying Bastion using manual settings. For steps, see [Tutorial - Deploy Bastion with manual settings](tutorial-create-host-portal.md#createhost). When you deploy Bastion, specify the following settings:
-
-1. On the **Basics** tab, for **Instance Details -> Tier** select **Standard**. Native client support requires the Standard SKU.
-
- :::image type="content" source="./media/connect-native-client-windows/standard.png" alt-text="Settings for a new bastion host with Standard SKU selected." lightbox="./media/connect-native-client-windows/standard.png":::
-1. Before you create the bastion host, go to the **Advanced** tab and check the box for **Native Client Support**, along with the checkboxes for any other additional features that you want to deploy.
-
- :::image type="content" source="./media/connect-native-client-windows/new-host.png" alt-text="Screenshot that shows settings for a new bastion host with Native Client Support box selected." lightbox="./media/connect-native-client-windows/new-host.png":::
-
-1. Click **Review + create** to validate, then click **Create** to deploy your Bastion host.
-
-## <a name="verify"></a>Verify roles and ports
-
-Verify that the following roles and ports are configured in order to connect to the VM.
-
-### Required roles
-
-* Reader role on the virtual machine.
-* Reader role on the NIC with private IP of the virtual machine.
-* Reader role on the Azure Bastion resource.
-* Virtual Machine Administrator Login or Virtual Machine User Login role, if you're using the Azure AD sign-in method. You only need to do this if you're enabling Azure AD login using the processes outlined in one of these articles:
-
- * [Azure Windows VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md)
- * [Azure Linux VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md)
-
-### Ports
-
-To connect to a Linux VM using native client support, you must have the following ports open on your Linux VM:
-
-* Inbound port: SSH (22) *or*
-* Inbound port: Custom value (you'll then need to specify this custom port when you connect to the VM via Azure Bastion)
-
-To connect to a Windows VM using native client support, you must have the following ports open on your Windows VM:
-
-* Inbound port: RDP (3389) *or*
-* Inbound port: Custom value (you'll then need to specify this custom port when you connect to the VM via Azure Bastion)
-
-To learn about how to best configure NSGs with Azure Bastion, see [Working with NSG access and Azure Bastion](bastion-nsg.md).
-
-## <a name="connect"></a>Connect to VM - Windows native client
-
-This section helps you connect to your virtual machine from the native client on a local Windows computer. If you want to upload and download files after connecting, you must use an RDP connection. For more information about file transfers, see [Upload or download files](vm-upload-download-native.md).
-
-Use the example that corresponds to the type of target VM to which you want to connect.
-
-* [Windows VM](#connect-windows)
-* [Linux VM](#connect-linux)
-
-### <a name="connect-windows"></a>Connect to a Windows VM
-
-1. Sign in to your Azure account. If you have more than one subscription, select the subscription containing your Bastion resource.
-
- ```azurecli
- az login
- az account list
- az account set --subscription "<subscription ID>"
- ```
-
-1. Sign in to your target Windows VM using one of the following example options. If you want to specify a custom port value, you should also include the field **--resource-port** in the sign-in command.
-
- **RDP:**
-
- To connect via RDP, use the following command. You'll then be prompted to input your credentials. You can use either a local username and password, or your Azure AD credentials. For more information, see [Azure Windows VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md).
-
- ```azurecli
- az network bastion rdp --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>"
- ```
-
- > [!IMPORTANT]
- > Remote connection to VMs that are joined to Azure AD is allowed only from Windows 10 or later PCs that are Azure AD registered (starting with Windows 10 20H1), Azure AD joined, or hybrid Azure AD joined to the *same* directory as the VM.
-
- **SSH:**
-
- The extension can be installed by running, ```az extension add --name ssh```. To sign in using an SSH key pair, use the following example.
-
- ```azurecli
- az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>"
- ```
-
- Once you sign in to your target VM, the native client on your computer opens up with your VM session; **MSTSC** for RDP sessions, and **SSH CLI extension (az ssh)** for SSH sessions.
-
-### <a name="connect-linux"></a>Connect to a Linux VM
-
-1. Sign in to your Azure account. If you have more than one subscription, select the subscription containing your Bastion resource.
-
- ```azurecli
- az login
- az account list
- az account set --subscription "<subscription ID>"
- ```
-
-1. Sign in to your target Linux VM using one of the following example options. If you want to specify a custom port value, you should also include the field **--resource-port** in the sign-in command.
-
- **Azure AD:**
-
- If you're signing in to an Azure AD login-enabled VM, use the following command. For more information, see [Azure Linux VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md).
-
- ```azurecli
- az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId or VMSSInstanceResourceId>" --auth-type "AAD"
- ```
-
- **SSH:**
-
- The extension can be installed by running, ```az extension add --name ssh```. To sign in using an SSH key pair, use the following example.
-
- ```azurecli
- az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId or VMSSInstanceResourceId>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>"
- ```
-
- **Username/password:**
-
- If you're signing in using a local username and password, use the following command. You'll then be prompted for the password for the target VM.
-
- ```azurecli
- az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId or VMSSInstanceResourceId>" --auth-type "password" --username "<Username>"
- ```
-
- 1. Once you sign in to your target VM, the native client on your computer opens up with your VM session; **MSTSC** for RDP sessions, and **SSH CLI extension (az ssh)** for SSH sessions.
-
-## <a name="connect-tunnel"></a>Connect to VM - other native clients
-
-This section helps you connect to your virtual machine from native clients on *non*-Windows local computers (example: a Linux PC) using the **az network bastion tunnel** command. You can also connect using this method from a Windows computer. This is helpful when you require an SSH connection and want to upload files to your VM. The bastion tunnel supports RDP/SSH connection, but doesn't relay web servers or hosts.
-
-This connection supports file upload from the local computer to the target VM. For more information, see [Upload files](vm-upload-download-native.md).
-
-1. Sign in to your Azure account. If you have more than one subscription, select the subscription containing your Bastion resource.
-
- ```azurecli
- az login
- az account list
- az account set --subscription "<subscription ID>"
- ```
-
-1. Open the tunnel to your target VM using the following command.
-
- ```azurecli
- az network bastion tunnel --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId or VMSSInstanceResourceId>" --resource-port "<TargetVMPort>" --port "<LocalMachinePort>"
- ```
-
-1. Connect to your target VM using SSH or RDP, the native client of your choice, and the local machine port you specified in Step 2.
-
- For example, you can use the following command if you have the OpenSSH client installed on your local computer:
-
- ```azurecli
- ssh <username>@127.0.0.1 -p <LocalMachinePort>
- ```
-
-## <a name="connect-IP"></a>Connect to VM - IP Address
-
-This section helps you connect to your on-premises, non-Azure, and Azure virtual machines via Azure Bastion using a specified private IP address from native client. You can replace `--target-resource-id` with `--target-ip-address` in any of the above commands with the specified IP address to connect to your VM.
-
-> [!Note]
-> This feature does not support support Azure AD authentication or custom port and protocol at the moment. For more information on IP-based connection, see [Connect to a VM - IP address](connect-ip-address.md).
-
-Use the following commands as examples:
--
- **RDP:**
-
- ```azurecli
- az network bastion rdp --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>
- ```
-
- **SSH:**
-
- ```azurecli
- az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-addres "<VMIPAddress>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>"
- ```
-
- **Tunnel:**
-
- ```azurecli
- az network bastion tunnel --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>" --resource-port "<TargetVMPort>" --port "<LocalMachinePort>"
- ```
--
-## Next steps
-
-[Upload or download files](vm-upload-download-native.md)
bastion Connect Vm Native Client Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-vm-native-client-linux.md
+
+ Title: 'Connect to a VM using Bastion - Linux native client'
+
+description: Learn how to connect to a VM from a Linux computer by using Bastion and a native client.
+++ Last updated : 06/12/2023+++
+# Connect to a VM using Bastion and a Linux native client
+
+This article helps you connect to a VM in the VNet using the native client (SSH or RDP) on your local computer using the **az network bastion tunnel** command. The native client feature lets you connect to your target VMs via Bastion using Azure CLI, and expands your sign-in options to include local SSH key pair and Azure Active Directory (Azure AD). For more information and steps to configure Bastion for native client connections, see [Configure Bastion for native client connections](native-client.md). Connections via native client require the Bastion Standard SKU.
++
+After you've configured Bastion for native client support, you can connect to a VM using the **az network bastion tunnel** command. When you use this command, you can do the following:
+
+ * Use native clients on *non*-Windows local computers (example: a Linux computer).
+ * Use the native client of your choice. (This includes the Windows native client.)
+ * Connect using SSH or RDP. (The bastion tunnel doesn't relay web servers or hosts.)
+ * Set up concurrent VM sessions with Bastion.
+ * [Upload files](vm-upload-download-native.md#tunnel-command) to your target VM from your local computer. File download from the target VM to the local client is currently not supported for this command.
+
+Limitations:
+
+* Signing in using an SSH private key stored in Azure Key Vault isn't supported with this feature. Before signing in to your Linux VM using an SSH key pair, download your private key to a file on your local machine.
+* This feature isn't supported on Cloud Shell.
+
+## <a name="prereq"></a>Prerequisites
++
+## <a name="verify"></a>Verify roles and ports
+
+Verify that the following roles and ports are configured in order to connect to the VM.
++
+## <a name="connect-tunnel"></a>Connect to a VM
+
+This section helps you connect to your virtual machine from native clients on *non*-Windows local computers (example: Linux) using the **az network bastion tunnel** command. You can also connect using this method from a Windows computer. This is helpful when you require an SSH connection and want to upload files to your VM. The bastion tunnel supports RDP/SSH connection, but doesn't relay web servers or hosts.
+
+This connection supports file upload from the local computer to the target VM. For more information, see [Upload files](vm-upload-download-native.md).
++
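As a minimal sketch of this flow, you open the tunnel to the target VM and then point your local SSH client at the forwarded local port (the port values here are examples):

```azurecli
az network bastion tunnel --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --resource-port 22 --port 50022

# In a second terminal, connect through the tunnel with your local SSH client
ssh <username>@127.0.0.1 -p 50022
```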
+## <a name="connect-IP"></a>Connect to VM via IP Address
++
+Use the following command as an example:
+
+ **Tunnel:**
+
+ ```azurecli
+ az network bastion tunnel --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>" --resource-port "<TargetVMPort>" --port "<LocalMachinePort>"
+ ```
+
+## Next steps
+
+[Upload or download files](vm-upload-download-native.md)
bastion Connect Vm Native Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-vm-native-client-windows.md
+
+ Title: 'Connect to a VM using Bastion - Windows native client'
+
+description: Learn how to connect to a VM from a Windows computer by using Bastion and a native client.
+++ Last updated : 06/12/2023+++
+# Connect to a VM using Bastion and the Windows native client
+
+This article helps you connect to a VM in the VNet using the native client (SSH or RDP) on your local Windows computer. The native client feature lets you connect to your target VMs via Bastion using Azure CLI, and expands your sign-in options to include local SSH key pair and Azure Active Directory (Azure AD). For more information and steps to configure Bastion for native client connections, see [Configure Bastion for native client connections](native-client.md). Connections via native client require the Bastion Standard SKU.
++
+After you've configured Bastion for native client support, you can connect to a VM using the native Windows client. This lets you do the following:
+
+ * Connect using SSH or RDP.
+ * [Upload and download files](vm-upload-download-native.md#rdp) over RDP.
+ * If you want to connect using SSH and need to upload files to your target VM, you can use the instructions for the [az network bastion tunnel](connect-vm-native-client-linux.md) command instead.
+
+Limitations:
+
+* Signing in using an SSH private key stored in Azure Key Vault isn't supported with this feature. Before signing in to your Linux VM using an SSH key pair, download your private key to a file on your local machine.
+* This feature isn't supported on Cloud Shell.
+
+## <a name="prereq"></a>Prerequisites
++
+## <a name="verify"></a>Verify roles and ports
+
+Verify that the following roles and ports are configured in order to connect to the VM.
++
+## <a name="connect-windows"></a>Connect to a Windows VM
+
+1. Sign in to your Azure account. If you have more than one subscription, select the subscription containing your Bastion resource.
+
+ ```azurecli
+ az login
+ az account list
+ az account set --subscription "<subscription ID>"
+ ```
+
+1. Sign in to your target Windows VM using one of the following example options. If you want to specify a custom port value, you should also include the field **--resource-port** in the sign-in command.
+
+ **RDP:**
+
+ To connect via RDP, use the following command. You'll then be prompted to input your credentials. You can use either a local username and password, or your Azure AD credentials. For more information, see [Azure Windows VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md).
+
+ ```azurecli
+ az network bastion rdp --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>"
+ ```
+
+ > [!IMPORTANT]
+ > Remote connection to VMs that are joined to Azure AD is allowed only from Windows 10 or later PCs that are Azure AD registered (starting with Windows 10 20H1), Azure AD joined, or hybrid Azure AD joined to the *same* directory as the VM.
+
+ **SSH:**
+
+ The extension can be installed by running ```az extension add --name ssh```. To sign in using an SSH key pair, use the following example.
+
+ ```azurecli
+ az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>"
+ ```
+
+ Once you sign in to your target VM, the native client on your computer opens up with your VM session; **MSTSC** for RDP sessions, and **SSH CLI extension (az ssh)** for SSH sessions.
+
+## <a name="connect-linux"></a>Connect to a Linux VM
+
+1. Sign in to your Azure account. If you have more than one subscription, select the subscription containing your Bastion resource.
+
+ ```azurecli
+ az login
+ az account list
+ az account set --subscription "<subscription ID>"
+ ```
+
+1. Sign in to your target Linux VM using one of the following example options. If you want to specify a custom port value, you should also include the field **--resource-port** in the sign-in command.
+
+ **Azure AD:**
+
+ If you're signing in to an Azure AD login-enabled VM, use the following command. For more information, see [Azure Linux VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md).
+
+ ```azurecli
+ az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId or VMSSInstanceResourceId>" --auth-type "AAD"
+ ```
+
+ **SSH:**
+
+ The extension can be installed by running ```az extension add --name ssh```. To sign in using an SSH key pair, use the following example.
+
+ ```azurecli
+ az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId or VMSSInstanceResourceId>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>"
+ ```
+
+ **Username/password:**
+
+ If you're signing in using a local username and password, use the following command. You'll then be prompted for the password for the target VM.
+
+ ```azurecli
+ az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId or VMSSInstanceResourceId>" --auth-type "password" --username "<Username>"
+ ```
+
+ 1. Once you sign in to your target VM, the native client on your computer opens up with your VM session; **MSTSC** for RDP sessions, and **SSH CLI extension (az ssh)** for SSH sessions.
+
+## <a name="connect-IP"></a>Connect to VM via IP Address
++
+Use the following commands as examples:
+
+ **RDP:**
+
+ ```azurecli
+ az network bastion rdp --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>"
+ ```
+
+ **SSH:**
+
+ ```azurecli
+ az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>"
+ ```
+
+## Next steps
+
+[Upload or download files](vm-upload-download-native.md)
bastion Native Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/native-client.md
+
+ Title: 'Configure Bastion for native client connections'
+
+description: Learn how to configure Bastion for native client connections.
+++ Last updated : 06/12/2023+++
+# Configure Bastion for native client connections
+
+This article helps you configure your Bastion deployment to accept connections from the native client (SSH or RDP) on your local computer to VMs located in the VNet. The native client feature lets you connect to your target VMs via Bastion using Azure CLI, and expands your sign-in options to include local SSH key pair and Azure Active Directory (Azure AD). Additionally, you can also upload or download files, depending on the connection type and client.
++
+* Your capabilities on the VM when connecting via native client are dependent on what is enabled on the native client.
+* You can configure this feature by either modifying an existing Bastion deployment, or you can deploy Bastion with the feature configuration already specified.
+
+> [!IMPORTANT]
+> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+>
+
+## Deploy Bastion with the native client feature
+
+If you haven't already deployed Bastion to your VNet, you can deploy with the native client feature specified by deploying Bastion using manual settings. For steps, see [Tutorial - Deploy Bastion with manual settings](tutorial-create-host-portal.md#createhost). When you deploy Bastion, specify the following settings:
+
+1. On the **Basics** tab, for **Instance Details -> Tier** select **Standard**. Native client support requires the Standard SKU.
+
+ :::image type="content" source="./media/native-client/standard.png" alt-text="Settings for a new bastion host with Standard SKU selected." lightbox="./media/native-client/standard.png":::
+1. Before you create the bastion host, go to the **Advanced** tab and check the box for **Native Client Support**, along with the checkboxes for any other features that you want to deploy.
+
+ :::image type="content" source="./media/native-client/new-host.png" alt-text="Screenshot that shows settings for a new bastion host with Native Client Support box selected." lightbox="./media/native-client/new-host.png":::
+
+1. Select **Review + create** to validate, then select **Create** to deploy your Bastion host.
+
+## Modify an existing Bastion deployment
+
+If you've already deployed Bastion to your VNet, modify the following configuration settings:
+
+1. Navigate to the **Configuration** page for your Bastion resource. Verify that the SKU Tier is **Standard**. If it isn't, select **Standard**.
+1. Select the box for **Native Client Support**, then apply your changes.
+
+ :::image type="content" source="./media/native-client/update-host.png" alt-text="Screenshot that shows settings for updating an existing host with Native Client Support box selected." lightbox="./media/native-client/update-host.png":::
+
+## <a name="secure "></a>Secure your native client connection
+
+If you want to further secure your native client connection, you can limit port access by only providing access to port 22/3389. To restrict port access, you must deploy the following NSG rules on your AzureBastionSubnet to allow access to select ports and deny access from any other ports.
++
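Purely as an illustrative Azure CLI sketch (not the exact rule set; the rule names, priorities, and NSG name are placeholders), allowing outbound traffic to the virtual network only on ports 22 and 3389 might look like this:

```azurecli
# Allow outbound SSH/RDP from the Bastion subnet to the virtual network
az network nsg rule create --resource-group "<ResourceGroupName>" --nsg-name "<AzureBastionSubnetNsgName>" \
  --name AllowSshRdpToVnet --priority 100 --direction Outbound --access Allow --protocol Tcp \
  --destination-address-prefixes VirtualNetwork --destination-port-ranges 22 3389

# Deny outbound traffic to the virtual network on all other ports
az network nsg rule create --resource-group "<ResourceGroupName>" --nsg-name "<AzureBastionSubnetNsgName>" \
  --name DenyOtherVnetPorts --priority 110 --direction Outbound --access Deny --protocol "*" \
  --destination-address-prefixes VirtualNetwork --destination-port-ranges "*"
```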
+## Connecting to VMs
+
+After you deploy this feature, there are different connection instructions, depending on the host computer you're connecting from.
+
+* [Connect from the native client on a Windows computer](connect-vm-native-client-windows.md). This lets you do the following:
+
+ * Connect using SSH or RDP.
+ * [Upload and download files](vm-upload-download-native.md#rdp) over RDP.
+ * If you want to connect using SSH and need to upload files to your target VM, you can use the instructions for the [az network bastion tunnel](connect-vm-native-client-linux.md) command instead.
+
+* [Connect using the **az network bastion tunnel** command](connect-vm-native-client-linux.md). This lets you do the following:
+
+ * Use native clients on *non*-Windows local computers (example: a Linux PC).
+ * Use the native client of your choice. (This includes the Windows native client.)
+ * Connect using SSH or RDP. (The bastion tunnel doesn't relay web servers or hosts.)
+ * Set up concurrent VM sessions with Bastion.
+ * [Upload files](vm-upload-download-native.md#tunnel-command) to your target VM from your local computer. File download from the target VM to the local client is currently not supported for this command.
+
+### Limitations
+
+* Signing in using an SSH private key stored in Azure Key Vault isn't supported with this feature. Before signing in to a Linux VM using an SSH key pair, download your private key to a file on your local machine.
+* Connecting using a native client isn't supported on Cloud Shell.
+
+## Next steps
+
+* [Connect from a Windows native client](connect-vm-native-client-windows.md)
+* [Connect using the az network bastion tunnel command](connect-vm-native-client-linux.md)
+* [Upload or download files](vm-upload-download-native.md)
bastion Vm Upload Download Native https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/vm-upload-download-native.md
# File transfer using a native client
-Azure Bastion offers support for file transfer between your target VM and local computer using Bastion and a native RDP or native SSH client. To learn more about native client support, refer to [Connect to a VM using the native client](connect-native-client-windows.md). While it may be possible to use third-party clients and tools to upload or download files, this article focuses on working with supported native clients.
+Azure Bastion offers support for file transfer between your target VM and local computer using Bastion and a native RDP or native SSH client. To learn more about native client support, refer to [Configure Bastion native client support](native-client.md). While it may be possible to use third-party clients and tools to upload or download files, this article focuses on working with supported native clients.
* File transfers are supported using the native client only. You can't upload or download files using PowerShell or via the Azure portal. * To both [upload and download files](#rdp), you must use the Windows native client and RDP.
Azure Bastion offers support for file transfer between your target VM and local
## <a name="rdp"></a>Upload and download files - RDP
-The steps in this section apply when connecting to a target VM from a Windows local computer using the native Windows client and RDP. The **az network bastion rdp** command uses the native client MSTSC. Once connected to the target VM, you can upload and download files using **right-click**, then **Copy** and **Paste**. To learn more about this command and how to connect, see [Connect to a VM using a native client](connect-native-client-windows.md).
+The steps in this section apply when connecting to a target VM from a Windows local computer using the native Windows client and RDP. The **az network bastion rdp** command uses the native client MSTSC. Once connected to the target VM, you can upload and download files using **right-click**, then **Copy** and **Paste**. To learn more about this command and how to connect, see [Connect from a Windows native client](connect-vm-native-client-windows.md).
> [!NOTE] > File transfer over SSH is not supported using this method. Instead, use the [az network bastion tunnel command](#tunnel-command) to upload files over SSH.
The steps in this section apply when connecting to a target VM from a Windows lo
## <a name="tunnel-command"></a>Upload files - SSH and RDP The steps in this section apply to native clients other than Windows, as well as Windows native clients that want to connect over SSH to upload files.
-This section helps you upload files from your local computer to your target VM over SSH or RDP using the **az network bastion tunnel** command. To learn more about the tunnel command and how to connect, see [Connect to a VM using a native client](connect-native-client-windows.md).
+This section helps you upload files from your local computer to your target VM over SSH or RDP using the **az network bastion tunnel** command. To learn more about the tunnel command and how to connect, see [Connect from a Linux native client](connect-vm-native-client-linux.md).
> [!NOTE] > This command can be used to upload files from your local computer to the target VM. File download is not supported.
communication-services Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-automation.md
Title: Call Automation overview
description: Learn about Azure Communication Services Call Automation. - -++ Last updated 09/06/2022
Azure Communication Services (ACS) Call Automation provides developers the ability to build server-based, intelligent call workflows, and call recording for voice and Public Switched Telephone Network (PSTN) channels. The SDKs, available for .NET and Java, use an action-event model to help you build personalized customer interactions. Your communication applications can listen to real-time call events and perform control plane actions (like answer, transfer, play audio, start recording, etc.) to steer and control calls based on your business logic. > [!NOTE]
-> Call Automation currently doesn't interoperate with Microsoft Teams. Actions like making or redirecting a call to a Teams user or adding them to a call using Call Automation aren't supported.
+> Call Automation currently doesn't interoperate with Microsoft Teams. Actions like making or redirecting a call to a Teams user or adding them to a call using Call Automation aren't supported.
+> Call Automation currently doesn't support [Rooms](../rooms/room-concept.md) calls.
## Common use cases
The Call Automation events are sent to the web hook callback URI specified when
| CallTransferAccepted | Your application's call leg has been transferred to another endpoint | | CallTransferFailed | The transfer of your application's call leg failed | | AddParticipantSucceeded | Your application added a participant |
-|AddParticipantFailed | Your application was unable to add a participant |
-| ParticipantUpdated | The status of a participant changed while your application's call leg was connected to a call |
+| AddParticipantFailed | Your application was unable to add a participant |
+| RemoveParticipantSucceeded | Your application has successfully removed a participant from the call. |
+| RemoveParticipantFailed | Your application was unable to remove a participant from the call. |
+| ParticipantsUpdated | The status of a participant changed while your application's call leg was connected to a call |
| PlayCompleted | Your application successfully played the audio file provided | | PlayFailed | Your application failed to play audio |
-| PlayCanceled | Your application canceled the play operation |
+| PlayCanceled | The requested play action has been canceled. |
| RecognizeCompleted | Recognition of user input was successfully completed |
+| RecognizeCanceled | The requested recognize action has been canceled. |
| RecognizeFailed | Recognition of user input was unsuccessful <br/>*to learn more about recognize action events view our how-to guide for [gathering user input](../../how-tos/call-automation/recognize-action.md)*|
-| RecognizeCanceled | Your application canceled the request to recognize user input |
-
+| RecordingStateChanged | Status of recording action has changed from active to inactive or vice versa. |
To understand which events are published for different actions, refer to [this guide](../../how-tos/call-automation/actions-for-call-control.md) that provides code samples as well as sequence diagrams for various call control flows. To learn how to secure the callback event delivery, refer to [this guide](../../how-tos/call-automation/secure-webhook-endpoint.md).
-## Known issues
-
-1. Using the incorrect IdentifierType for endpoints for `Transfer` requests (like using CommunicationUserIdentifier to specify a phone number) returns a 500 error instead of a 400 error code. Solution: Use the correct type, CommunicationUserIdentifier for Communication Users and PhoneNumberIdentifier for phone numbers.
-2. Taking a pre-call action like Answer/Reject on the original call after redirected it gives a 200 success instead of failing on 'call not found'.
-3. Transferring a call with more than two participants is currently not supported.
-4. After transferring a call, you may receive two `CallDisconnected` events and will need to handle this behavior by ignoring the duplicate.
- ## Next steps > [!div class="nextstepaction"]
communication-services Incoming Call Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/incoming-call-notification.md
Below is an example of an advanced filter on an Event Grid subscription watching
Since the `IncomingCall` notification doesn't have a specific destination other than the Event Grid subscription you've created, you're free to associate any particular number to any endpoint in Azure Communication Services. For example, if you acquired a PSTN phone number of `+14255551212` and want to assign it to a user with an identity of `375f0e2f-e8db-4449-9bf7-2054b02e42b4` in your application, you'll maintain a mapping of that number to the identity. When an `IncomingCall` notification is sent matching the phone number in the **to** field, you'll invoke the `Redirect` API and supply the identity of the user. In other words, you maintain the number assignment within your application and route or answer calls at runtime.
+## Best practices
+1. Event Grid requires you to prove ownership of your webhook endpoint before it starts delivering events to that endpoint. This requirement prevents a malicious user from flooding your endpoint with events. If you're having trouble receiving events, make sure the configured webhook is verified by handling `SubscriptionValidationEvent`. For more information, see this [guide](../../../event-grid/webhook-event-delivery.md).
+2. When your application receives an incoming call event but doesn't respond to Event Grid with a 200 OK in time, Event Grid uses exponential backoff retry to send the event again. However, an incoming call only rings for 30 seconds, and acting on a call after that doesn't work. To avoid retries for expired or stale calls, we recommend setting the retry policy to a maximum of 2 event delivery attempts and an event time to live of 1 minute. These settings can be found under the Additional Features tab of the event subscription (see the CLI sketch after this list). Learn more about retries [here](../../../event-grid/delivery-and-retry.md).
+
+3. We recommend that you enable logging for your Event Grid resource to monitor events that failed to deliver. Navigate to the system topic under the Events tab of your Communication resource and enable logging from the Diagnostic settings. Failure logs can be found in the 'AegDeliveryFailureLogs' table.
+
+    ```kusto
+    AegDeliveryFailureLogs
+    | where Message has "incomingCall"
+    | limit 10
+    ```
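For reference, the retry policy from the second recommendation can also be applied when you create the event subscription from the command line. The following is only a sketch with placeholder names and resource IDs (it assumes a webhook endpoint and an Azure Communication Services resource as the event source):

```bash
# Create the IncomingCall subscription with a tight retry policy:
# at most 2 delivery attempts and a 1-minute time to live per event.
az eventgrid event-subscription create \
  --name "incoming-call-webhook" \
  --source-resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Communication/communicationServices/<acs-resource-name>" \
  --endpoint "https://<your-app>/api/incomingCall" \
  --included-event-types Microsoft.Communication.IncomingCall \
  --max-delivery-attempts 2 \
  --event-ttl 1
```

These two settings correspond to **Max Event Delivery Attempts** and **Event Time to Live** in the portal.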
## Next steps
-- [Build a Call Automation application](../../quickstarts/call-automation/callflows-for-customer-interactions.md) to simulate a customer interaction.
-- [Redirect an inbound PSTN call](../../quickstarts/call-automation/redirect-inbound-telephony-calls.md) to your resource.
+- Try out the quickstart to [place an outbound call](../../quickstarts/call-automation/quickstart-make-an-outbound-call.md).
communication-services Enable User Engagement Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/enable-user-engagement-tracking.md
Configuring email engagement enables insights into your customers' engagement with emails to help build customer relationships. Only emails sent from Azure Communication Services verified Email Domains that are enabled for user engagement analysis get the engagement tracking metrics.

> [!IMPORTANT]
-> By enabling this feature, you are acknowledging that you are enabling open/click tracking and giving consent to collect your customers' email activity
+> By enabling this feature, you are acknowledging that you are enabling open/click tracking and giving consent to collect your customers' email activity.
In this quick start, you'll learn about how to enable user engagement tracking for verified domain in Azure Communication Services.
In this quick start, you'll learn about how to enable user engagement tracking f
6. Click turn on to enable engagement tracking.
-**Your email domain is now ready to send emails with user engagement tracking.**
+**Your email domain is now ready to send emails with user engagement tracking. Be aware that user engagement tracking applies only to HTML content and doesn't function if you submit the payload in plain text.**
You can now subscribe to Email User Engagement operational logs - provides information related to 'open' and 'click' user engagement metrics for messages sent from the Email service.
communication-services Number Lookup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/telephony/number-lookup.md
cd NumberLookupQuickstart
dotnet build ```
+### Connect to dev package feed
+The private preview version of the SDK is published to a dev package feed. You can add the dev feed using the [NuGet CLI](https://docs.microsoft.com/nuget/reference/nuget-exe-cli-reference), which will add it to the NuGet.Config file.
+
+```console
+nuget sources add -Name "Azure SDK for .NET Dev Feed" -Source "https://pkgs.dev.azure.com/azure-sdk/public/_packaging/azure-sdk-for-net/nuget/v3/index.json"
+```
+
+More detailed information and other options for connecting to the dev feed can be found in the [contributing guide](https://github.com/Azure/azure-sdk-for-net/blob/main/CONTRIBUTING.md#nuget-package-dev-feed).
+
### Install the package
While still in the application directory, install the Azure Communication Services PhoneNumbers client library for .NET package by using the following command.
In this quickstart you learned how to:
> [Number Lookup Concept](../../concepts/numbers/number-lookup-concept.md) > [!div class="nextstepaction"]
-> [Number Lookup SDK](../../concepts/numbers/number-lookup-sdk.md)
+> [Number Lookup SDK](../../concepts/numbers/number-lookup-sdk.md)
communication-services End Of Call Survey Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/end-of-call-survey-tutorial.md
This tutorial shows you how to use the Azure Communication Services End of Call
- An active Communication Services resource. [Create a Communication Services resource](../quickstarts/create-communication-resource.md). Survey results are tied to single Communication Services resources. - An active Log Analytics Workspace, also known as Azure Monitor Logs. See [End of Call Survey Logs](../concepts/analytics/logs/end-of-call-survey-logs.md).-- To conduct a survey with custom questions using free form text, you will need an [App Insight resource](../../azure-monitor/app/create-workspace-resource.md#create-a-workspace-based-resource).
+- To conduct a survey with custom questions using free form text, you need an [App Insight resource](../../azure-monitor/app/create-workspace-resource.md#create-a-workspace-based-resource).
> [!IMPORTANT]
Screenshare. However, each API value can be customized from a minimum of
## Custom questions In addition to using the End of Call Survey API you can create your own survey questions and incorporate them with the End of Call Survey results. Below you'll find steps to incorporate your own customer questions into a survey and query the results of the End of Call Survey API and your own survey questions. - [Create App Insight resource](../../azure-monitor/app/create-workspace-resource.md#create-a-workspace-based-resource).-- Embed Azure AppInsights into your application [Click here to know more about App Insight initialization using plain JavaScript](../../azure-monitor/app/javascript-sdk.md). Alternatively, you can use NPM to get the App Insights dependences. [Click here to know more about App Insight initialization using NPM](../../azure-monitor/app/javascript-sdk-advanced.md).
+- Embed Azure Application Insights into your application. [Learn more about App Insights initialization using plain JavaScript](../../azure-monitor/app/javascript-sdk.md). Alternatively, you can use NPM to get the App Insights dependencies. [Learn more about App Insights initialization using NPM](../../azure-monitor/app/javascript-sdk-configuration.md).
- Build a UI in your application that serves custom questions to the user and gathers their input. Let's assume that your application gathered responses as a string in the `improvementSuggestion` variable.
- Submit survey results to ACS and send the user response using App Insights:
In addition to using the End of Call Survey API you can create your own survey q
}); appInsights.flush(); ```
-User responses that were sent using AppInsights will be available under your App Insights workspace. You can use [Workbooks](../../update-center/workbooks.md) to query between multiple resources, correlate call ratings and custom survey data. Steps to correlate the call ratings and custom survey data:
+User responses that were sent using AppInsights are available under your App Insights workspace. You can use [Workbooks](../../update-center/workbooks.md) to query between multiple resources, correlate call ratings and custom survey data. Steps to correlate the call ratings and custom survey data:
- Create new [Workbooks](../../update-center/workbooks.md) (Your ACS Resource -> Monitoring -> Workbooks -> New) and query Call Survey data from your ACS resource.
- Add new query (+Add -> Add query)
- Make sure `Data source` is `Logs` and `Resource type` is `Communication`
- You can rename the query (Advanced Settings -> Step name [example: call-survey])
-- Please be aware that it could require a maximum of **2 hours** before the survey data becomes visible in the Azure portal.. Query the call rating data-
+- Be aware that it could take up to **2 hours** before the survey data becomes visible in the Azure portal. Query the call rating data:
```KQL ACSCallSurvey | where TimeGenerated > now(-24h)
confidential-computing Confidential Vm Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-vm-overview.md
With Secure Boot, trusted publishers must sign OS boot components (including the
Azure confidential VMs use both the OS disk and a small encrypted virtual machine guest state (VMGS) disk of several megabytes. The VMGS disk contains the security state of the VM's components. Some components include the vTPM and UEFI bootloader. The small VMGS disk might incur a monthly storage cost.
-From July 2022, encrypted OS disks will incur higher costs. This change is because encrypted OS disks use more space, and compression isn't possible. For more information, see [the pricing guide for managed disks](https://azure.microsoft.com/pricing/details/managed-disks/).
+From July 2022, encrypted OS disks will incur higher costs. For more information, see [the pricing guide for managed disks](https://azure.microsoft.com/pricing/details/managed-disks/).
## Attestation and TPM
confidential-computing Skr Flow Confidential Containers Azure Container Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/skr-flow-confidential-containers-azure-container-instance.md
Secure Key Release (SKR) flow with Azure Key Vault (AKV) with confidential conta
An [open sourced GitHub project "confidential side-cars"](https://github.com/microsoft/confidential-sidecar-containers) details how to build this container and what parameters/environment variables are required for you to prepare and run this side-car container. The current side car implementation provides various HTTP REST APIs that your primary application container can use to fetch the key from AKV. The integration through Microsoft Azure Attestation(MAA) is already built in. The preparation steps to run the side-car SKR container can be found in details [here](https://github.com/microsoft/confidential-sidecar-containers/tree/main/examples/skr).
-Your main application container application can call the side-car WEB API end points as defined in the example blow. Side-cars runs within the same container group and is a local endpoint to your application container. Full details of the API can be found [here](https://github.com/microsoft/confidential-sidecar-containers/blob/main/cmd/skr/README.md)
+Your main application container can call the side-car web API endpoints as defined in the example below. The side-car runs within the same container group and is a local endpoint to your application container. Full details of the API can be found [here](https://github.com/microsoft/confidential-sidecar-containers/blob/main/cmd/skr/README.md).
The `key/release` POST method expects a JSON of the following format:
connectors Connectors Create Api Servicebus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-servicebus.md
ms.suite: integration Previously updated : 03/23/2023 Last updated : 06/13/2023 tags: connectors
The Service Bus connector has different versions, based on [logic app workflow t
For more information about managed identities, review [Authenticate access to Azure resources with managed identities in Azure Logic Apps](../logic-apps/create-managed-service-identity.md).
+* By default, the Service Bus built-in connector operations are stateless. To run these operations in stateful mode, see [Enable stateful mode for stateless built-in connectors](../connectors/enable-stateful-affinity-built-in-connectors.md).
+ ## Considerations for Azure Service Bus operations ### Infinite loops
The steps to add and use a Service Bus trigger differ based on whether you want
#### Built-in connector trigger
+The built-in Service Bus connector is a stateless connector, by default. To run this connector's operations in stateful mode, see [Enable stateful mode for stateless built-in connectors](enable-stateful-affinity-built-in-connectors.md).
+ 1. In the [Azure portal](https://portal.azure.com), and open your blank logic app workflow in the designer. 1. On the designer, select **Choose an operation**.
The steps to add and use a Service Bus action differ based on whether you want t
#### Built-in connector action
+The built-in Service Bus connector is a stateless connector, by default. To run this connector's operations in stateful mode, see [Enable stateful mode for stateless built-in connectors](enable-stateful-affinity-built-in-connectors.md).
+ 1. In the [Azure portal](https://portal.azure.com), and open your logic app workflow in the designer. 1. Under the trigger or action where you want to add the action, select the plus sign (**+**), and then select **Add an action**.
connectors Enable Stateful Affinity Built In Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/enable-stateful-affinity-built-in-connectors.md
+
+ Title: Enable stateful mode for stateless built-in connectors
+description: Enable stateless built-in connectors to run in stateful mode for Standard workflows in Azure Logic Apps.
+
+ms.suite: integration
++ Last updated : 06/13/2023++
+# Enable stateful mode for stateless built-in connectors in Azure Logic Apps
++
+In Standard logic app workflows, the following built-in, service provider-based connectors are stateless, by default:
+
+- Azure Service Bus
+- SAP
+
+To run these connector operations in stateful mode, you must enable this capability. This how-to guide shows how to enable stateful mode for these connectors.
+
+## Prerequisites
+
+- An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- The Standard logic app resource where you plan to create the workflow that uses the stateful mode-enabled connector operations. If you don't have this resource, [create your Standard logic app resource now](../logic-apps/create-single-tenant-workflows-azure-portal.md).
+
+- An Azure virtual network with a subnet to integrate with your logic app. If you don't have these items, see the following documentation:
+
+ - [Quickstart: Create a virtual network with the Azure portal](../virtual-network/quick-create-portal.md)
+ - [Add, change, or delete a virtual network subnet](../virtual-network/virtual-network-manage-subnet.md?tabs=azure-portal)
+
+## Enable stateful mode in the Azure portal
+
+1. In the [Azure portal](https://portal.azure.com), open the Standard logic app resource where you want to enable stateful mode for these connector operations.
+
+1. Enable virtual network integration for your logic app and add your logic app to the previously created subnet:
+
+   1. On your logic app resource menu, under **Settings**, select **Networking**.
+
+ 1. In the **Outbound Traffic** section, select **VNET integration** > **Add VNet**.
+
+ 1. On the **Add VNet Integration** pane that opens, select your Azure subscription and your virtual network.
+
+ 1. Under **Subnet**, select **Select existing**. From the **Subnet** list, select the subnet where you want to add your logic app.
+
+ 1. When you're done, select **OK**.
+
+ On the **Networking** page, the **VNet integration** option now appears set to **On**, for example:
+
+ :::image type="content" source="media/enable-stateful-affinity-built-in-connectors/enable-virtual-network-integration.png" alt-text="Screenshot shows Azure portal, Standard logic app resource, Networking page, VNet integration set to On.":::
+
+ For general information about enabling virtual network integration with your app, see [Enable virtual network integration in Azure App Service](../app-service/configure-vnet-integration-enable.md).
+
+1. Next, update your logic app's underlying website configuration (**<*logic-app-name*>.azurewebsites.net**) by using either of the following tools:
+
+## Update website configuration for logic app
+
+After you enable virtual network integration for your logic app, you must update your logic app's underlying website configuration (**<*logic-app-name*>.azurewebsites.net**) by using one of the following methods:
+
+- [Azure Resource Management API](#azure-resource-management-api) (bearer token required)
+- [Azure PowerShell](#azure-powershell) (bearer token *not* required)
+
+### Azure Resource Management API
+
+To complete this task with the [Azure Resource Management API - Update By Id](/rest/api/resources/resources/update-by-id), review the following requirements, syntax, and parameter values.
+
+#### Requirements
+
+OAuth authorization and the bearer token are required. To get the bearer token, follow these steps:
+
+1. While you're signed in to the Azure portal, open your web browser's developer tools (F12).
+
+1. Get the token by sending any management request, for example, by saving a workflow in your Standard logic app, and copy the bearer token from the request's **Authorization** header.
+
+#### Syntax
+
+Updates a resource by using the specified resource ID:
+
+`PATCH https://management.azure.com/{resourceId}?api-version=2021-04-01`
+
+#### Parameter values
+
+| Element | Value |
+||--|
+| HTTP request method | **PATCH** |
+| <*resourceId*> | **subscriptions/{yourSubscriptionID}/resourcegroups/{yourResourceGroup}/providers/Microsoft.Web/sites/{websiteName}/config/web** |
+| <*yourSubscriptionId*> | The ID for your Azure subscription |
+| <*yourResourceGroup*> | The resource group that contains your logic app resource |
+| <*websiteName*> | The name for your logic app resource, which is **mystandardlogicapp** in this example |
+| HTTP request body | **{"properties": {"vnetPrivatePortsCount": "2"}}** |
+
+#### Example
+
+`https://management.azure.com/subscriptions/XXxXxxXX-xXXx-XxxX-xXXX-XXXXxXxXxxXX/resourcegroups/My-Standard-RG/providers/Microsoft.Web/sites/mystandardlogicapp/config/web?api-version=2021-02-01`
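As a concrete illustration of the request above, the following sketch uses curl with a placeholder bearer token in the `TOKEN` variable and the same placeholder IDs as the example URL:

```bash
# PATCH the site config to set vnetPrivatePortsCount; TOKEN holds the bearer token
# captured from the portal's network trace as described in the requirements.
curl -X PATCH \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"properties": {"vnetPrivatePortsCount": "2"}}' \
  "https://management.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group>/providers/Microsoft.Web/sites/<logic-app-name>/config/web?api-version=2021-04-01"
```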
+
+### Azure PowerShell
+
+To complete this task with Azure PowerShell, review the following requirements, syntax, and values. This method doesn't require that you manually get the bearer token.
+
+#### Syntax
+
+```powershell
+Set-AzContext -Subscription {yourSubscriptionID}
+$webConfig = Get-AzResource -ResourceId {resourceId}
+$webConfig.Properties.vnetPrivatePortsCount = 2
+$webConfig | Set-AzResource -ResourceId {resourceId}
+```
+
+For more information, see the following documentation:
+
+- [Set-AzContext](/powershell/module/az.accounts/set-azcontext)
+- [Get-AzResource](/powershell/module/az.resources/get-azresource)
+- [Set-AzResource](/powershell/module/az.resources/set-azresource)
+
+#### Parameter values
+
+| Element | Value |
+||--|
+| <*yourSubscriptionID*> | The ID for your Azure subscription |
+| <*resourceId*> | **subscriptions/{yourSubscriptionID}/resourcegroups/{yourResourceGroup}/providers/Microsoft.Web/sites/{websiteName}/config/web** |
+| <*yourResourceGroup*> | The resource group that contains your logic app resource |
+| <*websiteName*> | The name for your logic app resource, which is **mystandardlogicapp** in this example |
+
+#### Example
+
+`https://management.azure.com/subscriptions/XXxXxxXX-xXXx-XxxX-xXXX-XXXXxXxXxxXX/resourcegroups/My-Standard-RG/providers/Microsoft.Web/sites/mystandardlogicapp/config/web?api-version=2021-02-01`
+
+#### Troubleshoot errors
+
+##### Error: Reserved instance count is invalid
+
+If you get an error that says **Reserved instance count is invalid**, use the following workaround:
+
+```powershell
+$webConfig.Properties.preWarmedInstanceCount = $webConfig.Properties.reservedInstanceCount
+$webConfig.Properties.reservedInstanceCount = $null
+$webConfig | Set-AzResource -ResourceId {resourceId}
+```
+
+Error example:
+
+```powershell
+Set-AzResource :
+{
+ "Code":"BadRequest",
+ "Message":"siteConfig.ReservedInstanceCount is invalid. Please use the new property siteConfig.PreWarmedInstanceCount.",
+ "Target": null,
+ "Details":
+ [
+ {
+ "Message":"siteConfig.ReservedInstanceCount is invalid. Please use the new property siteConfig.PreWarmedInstanceCount."
+ },
+ {
+ "Code":"BadRequest"
+ },
+ {
+ "ErrorEntity":
+ {
+ "ExtendedCode":"51021",
+ "MessageTemplate":"{0} is invalid. {1}",
+ "Parameters":
+ [
+ "siteConfig.ReservedInstanceCount", "Please use the new property siteConfig.PreWarmedInstanceCount."
+ ],
+ "Code":"BadRequest",
+ "Message":"siteConfig.ReservedInstanceCount is invalid. Please use the new property siteConfig.PreWarmedInstanceCount."
+ }
+ }
+ ],
+ "Innererror": null
+}
+```
+
+## Prevent context loss during resource scale-in events
+
+Resource scale-in events might cause the loss of context for built-in connectors with stateful mode enabled. To prevent this loss, fix the number of instances available for your logic app resource so that no scale-in events can happen. Use the following portal steps, or see the CLI sketch after them.
+
+1. On your logic app resource menu, under **Settings**, select **Scale out**.
+
+1. Under **App Scale Out**, set **Enforce Scale Out Limit** to **Yes**, which shows the **Maximum Scale Out Limit**.
+
+1. On the **Scale out** page, under **App Scale out**, set the number for **Always Ready Instances** to the same number as **Maximum Scale Out Limit** and **Maximum Burst**, which appears under **Plan Scale Out**, for example:
+
+ :::image type="content" source="media/enable-stateful-affinity-built-in-connectors/scale-in-settings.png" alt-text="Screenshot shows Azure portal, Standard logic app resource, Scale out page, and Always Ready Instances number set to match Maximum Scale Out Limit and Maximum Burst.":::
+
+1. When you're done, on the **Scale out** toolbar, select **Save**.
+
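If you prefer scripting over the portal, the same limits can likely be pinned through the site configuration resource. The following is only a sketch: it assumes that the portal's **Maximum Scale Out Limit** and **Always Ready Instances** settings map to the `functionAppScaleLimit` and `minimumElasticInstanceCount` site config properties, and it uses placeholder names:

```bash
# Pin the scale-out limit and the always-ready instance count to the same value (3 here)
# so that no scale-in events occur below that floor.
az resource update \
  --ids "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Web/sites/<logic-app-name>/config/web" \
  --set properties.functionAppScaleLimit=3 properties.minimumElasticInstanceCount=3
```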
+## Next steps
+
+- [Connect to Azure Service Bus](connectors-create-api-servicebus.md)
+- [Connect to SAP](../logic-apps/logic-apps-using-sap-connector.md)
container-apps Tutorial Ci Cd Runners Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-ci-cd-runners-jobs.md
Refer to [jobs preview limitations](jobs.md#jobs-preview-restrictions) for a lis
1. To sign in to Azure from the CLI, run the following command and follow the prompts to complete the authentication process.
+ # [Bash](#tab/bash)
```bash az login ```
+ # [PowerShell](#tab/powershell)
+ ```powershell
+ az login
+ ```
+
+
+ 1. Ensure you're running the latest version of the CLI via the `upgrade` command.
+ # [Bash](#tab/bash)
```bash az upgrade ```
+ # [PowerShell](#tab/powershell)
+ ```powershell
+ az upgrade
+ ```
+
+
+ 1. Install the latest version of the Azure Container Apps CLI extension.
+ # [Bash](#tab/bash)
```bash az extension add --name containerapp --upgrade ```
+ # [PowerShell](#tab/powershell)
+ ```powershell
+ az extension add --name containerapp --upgrade
+ ```
+
+
+ 1. Register the `Microsoft.App` and `Microsoft.OperationalInsights` namespaces if you haven't already registered them in your Azure subscription.
+ # [Bash](#tab/bash)
```bash az provider register --namespace Microsoft.App az provider register --namespace Microsoft.OperationalInsights ```
+ # [PowerShell](#tab/powershell)
+ ```powershell
+ az provider register --namespace Microsoft.App
+ az provider register --namespace Microsoft.OperationalInsights
+ ```
+
+
+ 1. Define the environment variables that are used throughout this article. ::: zone pivot="container-apps-jobs-self-hosted-ci-cd-github-actions"
+ # [Bash](#tab/bash)
```bash RESOURCE_GROUP="jobs-sample" LOCATION="northcentralus"
Refer to [jobs preview limitations](jobs.md#jobs-preview-restrictions) for a lis
JOB_NAME="github-actions-runner-job" ```
+ # [PowerShell](#tab/powershell)
+ ```powershell
+ $RESOURCE_GROUP="jobs-sample"
+ $LOCATION="northcentralus"
+ $ENVIRONMENT="env-jobs-sample"
+ $JOB_NAME="github-actions-runner-job"
+ ```
+
+
+ ::: zone-end ::: zone pivot="container-apps-jobs-self-hosted-ci-cd-azure-pipelines"
+ # [Bash](#tab/bash)
```bash RESOURCE_GROUP="jobs-sample" LOCATION="northcentralus"
Refer to [jobs preview limitations](jobs.md#jobs-preview-restrictions) for a lis
PLACEHOLDER_JOB_NAME="placeholder-agent-job" ```
+ # [PowerShell](#tab/powershell)
+ ```powershell
+ $RESOURCE_GROUP="jobs-sample"
+ $LOCATION="northcentralus"
+ $ENVIRONMENT="env-jobs-sample"
+ $JOB_NAME="azure-pipelines-agent-job"
+ $PLACEHOLDER_JOB_NAME="placeholder-agent-job"
+ ```
+
+
+ ::: zone-end ## Create a Container Apps environment The Azure Container Apps environment acts as a secure boundary around container apps and jobs so they can share the same network and communicate with each other.
+> [!NOTE]
+> To create a Container Apps environment that's integrated with an existing virtual network, see [Provide a virtual network to an internal Azure Container Apps environment](vnet-custom-internal.md?tabs=bash).
+ 1. Create a resource group using the following command.
+ # [Bash](#tab/bash)
```bash az group create \ --name "$RESOURCE_GROUP" \ --location "$LOCATION" ```
+ # [PowerShell](#tab/powershell)
+ ```powershell
+ az group create `
+ --name "$RESOURCE_GROUP" `
+ --location "$LOCATION"
+ ```
+
+
+ 1. Create the Container Apps environment using the following command.
+ # [Bash](#tab/bash)
```bash az containerapp env create \ --name "$ENVIRONMENT" \
The Azure Container Apps environment acts as a secure boundary around container
--location "$LOCATION" ```
+ # [PowerShell](#tab/powershell)
+ ```powershell
+ az containerapp env create `
+ --name "$ENVIRONMENT" `
+ --resource-group "$RESOURCE_GROUP" `
+ --location "$LOCATION"
+ ```
+
+
+ ::: zone pivot="container-apps-jobs-self-hosted-ci-cd-github-actions" ## Create a GitHub repository for running a workflow
To run a self-hosted runner, you need to create a personal access token (PAT) in
1. Define variables that are used to configure the runner and scale rule later.
+ # [Bash](#tab/bash)
```bash GITHUB_PAT="<GITHUB_PAT>" REPO_OWNER="<REPO_OWNER>" REPO_NAME="<REPO_NAME>" ```
+ # [PowerShell](#tab/powershell)
+ ```powershell
+ $GITHUB_PAT="<GITHUB_PAT>"
+ $REPO_OWNER="<REPO_OWNER>"
+ $REPO_NAME="<REPO_NAME>"
+ ```
+
+
+ Replace the placeholders with the following values: | Placeholder | Value |
To create a self-hosted runner, you need to build a container image that execute
1. Define a name for your container image and registry.
+ # [Bash](#tab/bash)
```bash CONTAINER_IMAGE_NAME="github-actions-runner:1.0" CONTAINER_REGISTRY_NAME="<CONTAINER_REGISTRY_NAME>" ```
+ # [PowerShell](#tab/powershell)
+ ```powershell
+ $CONTAINER_IMAGE_NAME="github-actions-runner:1.0"
+ $CONTAINER_REGISTRY_NAME="<CONTAINER_REGISTRY_NAME>"
+ ```
+
+
+ Replace `<CONTAINER_REGISTRY_NAME>` with a unique name for creating a container registry. Container registry names must be *unique within Azure* and be from 5 to 50 characters in length containing numbers and lowercase letters only. 1. Create a container registry.
+ # [Bash](#tab/bash)
```bash az acr create \ --name "$CONTAINER_REGISTRY_NAME" \
To create a self-hosted runner, you need to build a container image that execute
--admin-enabled true ```
+ # [PowerShell](#tab/powershell)
+ ```powershell
+ az acr create `
+ --name "$CONTAINER_REGISTRY_NAME" `
+ --resource-group "$RESOURCE_GROUP" `
+ --location "$LOCATION" `
+ --sku Basic `
+ --admin-enabled true
+ ```
+
+
+ 1. The Dockerfile for creating the runner image is available on [GitHub](https://github.com/Azure-Samples/container-apps-ci-cd-runner-tutorial/tree/main/github-actions-runner). Run the following command to clone the repository and build the container image in the cloud using the `az acr build` command.
+ # [Bash](#tab/bash)
```bash az acr build \ --registry "$CONTAINER_REGISTRY_NAME" \
To create a self-hosted runner, you need to build a container image that execute
"https://github.com/Azure-Samples/container-apps-ci-cd-runner-tutorial.git" ```
+ # [PowerShell](#tab/powershell)
+ ```powershell
+ az acr build `
+ --registry "$CONTAINER_REGISTRY_NAME" `
+ --image "$CONTAINER_IMAGE_NAME" `
+ --file "Dockerfile.github" `
+ "https://github.com/Azure-Samples/container-apps-ci-cd-runner-tutorial.git"
+ ```
+
+
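Optionally, confirm that the build pushed the image before you continue. A quick check, assuming the repository name is the image name without its tag:

```bash
# List the tags for the runner repository in the registry created earlier.
az acr repository show-tags \
  --name "$CONTAINER_REGISTRY_NAME" \
  --repository "github-actions-runner" \
  --output table
```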
+
The image is now available in the container registry.

## Deploy a self-hosted runner as a job
You can now create a job that uses the container image. In this section,
1. Create a job in the Container Apps environment.
+ # [Bash](#tab/bash)
```bash az containerapp job create -n "$JOB_NAME" -g "$RESOURCE_GROUP" --environment "$ENVIRONMENT" \ --trigger-type Event \ --replica-timeout 300 \
- --replica-retry-limit 0 \
+ --replica-retry-limit 1 \
--replica-completion-count 1 \ --parallelism 1 \ --image "$CONTAINER_REGISTRY_NAME.azurecr.io/$CONTAINER_IMAGE_NAME" \
You can now create a job that uses the container image.
--registry-server "$CONTAINER_REGISTRY_NAME.azurecr.io" ```
+ # [PowerShell](#tab/powershell)
+ ```powershell
+ az containerapp job create -n "$JOB_NAME" -g "$RESOURCE_GROUP" --environment "$ENVIRONMENT" `
+ --trigger-type Event `
+ --replica-timeout 300 `
+ --replica-retry-limit 1 `
+ --replica-completion-count 1 `
+ --parallelism 1 `
+ --image "$CONTAINER_REGISTRY_NAME.azurecr.io/$CONTAINER_IMAGE_NAME" `
+ --min-executions 0 `
+ --max-executions 10 `
+ --polling-interval 30 `
+ --scale-rule-name "github-runner" `
+ --scale-rule-type "github-runner" `
+ --scale-rule-metadata "github-runner=https://api.github.com" "owner=$REPO_OWNER" "runnerScope=repo" "repos=$REPO_NAME" "targetWorkflowQueueLength=1" `
+ --scale-rule-auth "personalAccessToken=personal-access-token" `
+ --cpu "2.0" `
+ --memory "4Gi" `
+ --secrets "personal-access-token=$GITHUB_PAT" `
+ --env-vars "GITHUB_PAT=secretref:personal-access-token" "REPO_URL=https://github.com/$REPO_OWNER/$REPO_NAME" "REGISTRATION_TOKEN_API_URL=https://api.github.com/repos/$REPO_OWNER/$REPO_NAME/actions/runners/registration-token" `
+ --registry-server "$CONTAINER_REGISTRY_NAME.azurecr.io"
+ ```
+
+
+ The following table describes the key parameters used in the command. | Parameter | Description |
To verify the job was configured correctly, you modify the workflow to use a sel
1. List the executions of the job to confirm a job execution was created and completed successfully.
+ # [Bash](#tab/bash)
```bash az containerapp job execution list \ --name "$JOB_NAME" \
To verify the job was configured correctly, you modify the workflow to use a sel
--query '[].{Status: properties.status, Name: name, StartTime: properties.startTime}' ```
+ # [PowerShell](#tab/powershell)
+ ```powershell
+ az containerapp job execution list `
+ --name "$JOB_NAME" `
+ --resource-group "$RESOURCE_GROUP" `
+ --output table `
+ --query '[].{Status: properties.status, Name: name, StartTime: properties.startTime}'
+ ```
+
+
+ ::: zone-end ::: zone pivot="container-apps-jobs-self-hosted-ci-cd-azure-pipelines"
To run a self-hosted runner, you need to create a personal access token (PAT) in
1. Define variables that are used to configure the Container Apps jobs later.
+ # [Bash](#tab/bash)
```bash AZP_TOKEN="<AZP_TOKEN>" ORGANIZATION_URL="<ORGANIZATION_URL>" AZP_POOL="container-apps" ```
+ # [PowerShell](#tab/powershell)
+ ```powershell
+ $AZP_TOKEN="<AZP_TOKEN>"
+ $ORGANIZATION_URL="<ORGANIZATION_URL>"
+ $AZP_POOL="container-apps"
+ ```
+
+
+ Replace the placeholders with the following values: | Placeholder | Value | Comments |
To create a self-hosted agent, you need to build a container image that runs the
1. Back in your terminal, define a name for your container image and registry.
+ # [Bash](#tab/bash)
```bash CONTAINER_IMAGE_NAME="azure-pipelines-agent:1.0" CONTAINER_REGISTRY_NAME="<CONTAINER_REGISTRY_NAME>" ```
+ # [PowerShell](#tab/powershell)
+ ```powershell
+ $CONTAINER_IMAGE_NAME="azure-pipelines-agent:1.0"
+ $CONTAINER_REGISTRY_NAME="<CONTAINER_REGISTRY_NAME>"
+ ```
+
+
+ Replace `<CONTAINER_REGISTRY_NAME>` with a unique name for creating a container registry. Container registry names must be *unique within Azure* and be from 5 to 50 characters in length containing numbers and lowercase letters only. 1. Create a container registry.
+ # [Bash](#tab/bash)
```bash az acr create \ --name "$CONTAINER_REGISTRY_NAME" \
To create a self-hosted agent, you need to build a container image that runs the
--admin-enabled true ```
+ # [PowerShell](#tab/powershell)
+ ```powershell
+ az acr create `
+ --name "$CONTAINER_REGISTRY_NAME" `
+ --resource-group "$RESOURCE_GROUP" `
+ --location "$LOCATION" `
+ --sku Basic `
+ --admin-enabled true
+ ```
+
+
+ 1. The Dockerfile for creating the runner image is available on [GitHub](https://github.com/Azure-Samples/container-apps-ci-cd-runner-tutorial/tree/main/azure-pipelines-agent). Run the following command to clone the repository and build the container image in the cloud using the `az acr build` command.
+ # [Bash](#tab/bash)
```bash az acr build \ --registry "$CONTAINER_REGISTRY_NAME" \
To create a self-hosted agent, you need to build a container image that runs the
"https://github.com/Azure-Samples/container-apps-ci-cd-runner-tutorial.git" ```
+ # [PowerShell](#tab/powershell)
+ ```powershell
+ az acr build `
+ --registry "$CONTAINER_REGISTRY_NAME" `
+ --image "$CONTAINER_IMAGE_NAME" `
+ --file "Dockerfile.azure-pipelines" `
+ "https://github.com/Azure-Samples/container-apps-ci-cd-runner-tutorial.git"
+ ```
+
+
+
The image is now available in the container registry.

## Create a placeholder self-hosted agent
-Before you can run a self-hosted agent in your new agent pool, you need to create a placeholder agent. Pipelines that use the agent pool fail when there's no placeholder agent. You can create a placeholder agent by running a job that registers an offline placeholder agent.
+Before you can run a self-hosted agent in your new agent pool, you need to create a placeholder agent. The placeholder agent ensures the agent pool is available. Pipelines that use the agent pool fail when there's no placeholder agent.
+
+You can run a manual job to register an offline placeholder agent. The job runs once and can be deleted. The placeholder agent doesn't consume any resources in Azure Container Apps or Azure DevOps.
1. Create a manual job in the Container Apps environment that creates the placeholder agent.
+ # [Bash](#tab/bash)
```bash az containerapp job create -n "$PLACEHOLDER_JOB_NAME" -g "$RESOURCE_GROUP" --environment "$ENVIRONMENT" \ --trigger-type Manual \ --replica-timeout 300 \
- --replica-retry-limit 0 \
+ --replica-retry-limit 1 \
--replica-completion-count 1 \ --parallelism 1 \ --image "$CONTAINER_REGISTRY_NAME.azurecr.io/$CONTAINER_IMAGE_NAME" \
Before you can run a self-hosted agent in your new agent pool, you need to creat
--registry-server "$CONTAINER_REGISTRY_NAME.azurecr.io" ```
+ # [PowerShell](#tab/powershell)
+ ```powershell
+ az containerapp job create -n "$PLACEHOLDER_JOB_NAME" -g "$RESOURCE_GROUP" --environment "$ENVIRONMENT" `
+ --trigger-type Manual `
+ --replica-timeout 300 `
+ --replica-retry-limit 1 `
+ --replica-completion-count 1 `
+ --parallelism 1 `
+ --image "$CONTAINER_REGISTRY_NAME.azurecr.io/$CONTAINER_IMAGE_NAME" `
+ --cpu "2.0" `
+ --memory "4Gi" `
+ --secrets "personal-access-token=$AZP_TOKEN" "organization-url=$ORGANIZATION_URL" `
+ --env-vars "AZP_TOKEN=secretref:personal-access-token" "AZP_URL=secretref:organization-url" "AZP_POOL=$AZP_POOL" "AZP_PLACEHOLDER=1" "AZP_AGENT_NAME=placeholder-agent" `
+ --registry-server "$CONTAINER_REGISTRY_NAME.azurecr.io"
+ ```
+
+
+ The following table describes the key parameters used in the command. | Parameter | Description |
Before you can run a self-hosted agent in your new agent pool, you need to creat
1. Execute the manual job to create the placeholder agent.
+ # [Bash](#tab/bash)
```bash az containerapp job start -n "$PLACEHOLDER_JOB_NAME" -g "$RESOURCE_GROUP" ```
+ # [PowerShell](#tab/powershell)
+ ```powershell
+ az containerapp job start -n "$PLACEHOLDER_JOB_NAME" -g "$RESOURCE_GROUP"
+ ```
+
+
+ 1. List the executions of the job to confirm a job execution was created and completed successfully.
+ # [Bash](#tab/bash)
```bash az containerapp job execution list \ --name "$PLACEHOLDER_JOB_NAME" \
Before you can run a self-hosted agent in your new agent pool, you need to creat
--query '[].{Status: properties.status, Name: name, StartTime: properties.startTime}' ```
+ # [PowerShell](#tab/powershell)
+ ```powershell
+ az containerapp job execution list `
+ --name "$PLACEHOLDER_JOB_NAME" `
+ --resource-group "$RESOURCE_GROUP" `
+ --output table `
+ --query '[].{Status: properties.status, Name: name, StartTime: properties.startTime}'
+ ```
+
+
+ 1. Verify the placeholder agent was created in Azure DevOps. 1. In Azure DevOps, navigate to your project. 1. Select **Project settings** > **Agent pools** > **container-apps** > **Agents**.
- 1. Confirm that a placeholder agent named `placeholder-agent` is listed.
+ 1. Confirm that a placeholder agent named `placeholder-agent` is listed and its status is offline.
+
+1. The job isn't needed again. You can delete it.
+
+ # [Bash](#tab/bash)
+ ```bash
+ az containerapp job delete -n "$PLACEHOLDER_JOB_NAME" -g "$RESOURCE_GROUP"
+ ```
+
+ # [PowerShell](#tab/powershell)
+ ```powershell
+ az containerapp job delete -n "$PLACEHOLDER_JOB_NAME" -g "$RESOURCE_GROUP"
+ ```
+
+
## Create a self-hosted agent as an event-driven job Now that you have a placeholder agent, you can create a self-hosted agent. In this section, you create an event-driven job that runs a self-hosted agent when a pipeline is triggered.
+# [Bash](#tab/bash)
```bash az containerapp job create -n "$JOB_NAME" -g "$RESOURCE_GROUP" --environment "$ENVIRONMENT" \ --trigger-type Event \ --replica-timeout 300 \
- --replica-retry-limit 0 \
+ --replica-retry-limit 1 \
--replica-completion-count 1 \ --parallelism 1 \ --image "$CONTAINER_REGISTRY_NAME.azurecr.io/$CONTAINER_IMAGE_NAME" \
az containerapp job create -n "$JOB_NAME" -g "$RESOURCE_GROUP" --environment "$E
--registry-server "$CONTAINER_REGISTRY_NAME.azurecr.io" ```
+# [PowerShell](#tab/powershell)
+```powershell
+az containerapp job create -n "$JOB_NAME" -g "$RESOURCE_GROUP" --environment "$ENVIRONMENT" `
+    --trigger-type Event `
+    --replica-timeout 300 `
+    --replica-retry-limit 1 `
+    --replica-completion-count 1 `
+    --parallelism 1 `
+    --image "$CONTAINER_REGISTRY_NAME.azurecr.io/$CONTAINER_IMAGE_NAME" `
+    --min-executions 0 `
+    --max-executions 10 `
+    --polling-interval 30 `
+    --scale-rule-name "azure-pipelines" `
+    --scale-rule-type "azure-pipelines" `
+    --scale-rule-metadata "poolName=container-apps" "targetPipelinesQueueLength=1" `
+    --scale-rule-auth "personalAccessToken=personal-access-token" "organizationURL=organization-url" `
+    --cpu "2.0" `
+    --memory "4Gi" `
+    --secrets "personal-access-token=$AZP_TOKEN" "organization-url=$ORGANIZATION_URL" `
+    --env-vars "AZP_TOKEN=secretref:personal-access-token" "AZP_URL=secretref:organization-url" "AZP_POOL=$AZP_POOL" `
+ --registry-server "$CONTAINER_REGISTRY_NAME.azurecr.io"
+```
+++ The following table describes the scale rule parameters used in the command. | Parameter | Description |
Now that you've configured a self-hosted agent job, you can run a pipeline and v
1. List the executions of the job to confirm a job execution was created and completed successfully.
+ # [Bash](#tab/bash)
```bash az containerapp job execution list \ --name "$JOB_NAME" \
Now that you've configured a self-hosted agent job, you can run a pipeline and v
--query '[].{Status: properties.status, Name: name, StartTime: properties.startTime}' ```
+ # [PowerShell](#tab/powershell)
+ ```powershell
+ az containerapp job execution list `
+ --name "$JOB_NAME" `
+ --resource-group "$RESOURCE_GROUP" `
+ --output table `
+ --query '[].{Status: properties.status, Name: name, StartTime: properties.startTime}'
+ ```
+
+
+ ::: zone-end > [!TIP]
Once you're done, run the following command to delete the resource group that co
>[!CAUTION] > The following command deletes the specified resource group and all resources contained within it. If resources outside the scope of this tutorial exist in the specified resource group, they will also be deleted.
+# [Bash](#tab/bash)
```bash az group delete \ --resource-group $RESOURCE_GROUP ```
+# [PowerShell](#tab/powershell)
+```powershell
+az group delete `
+ --resource-group $RESOURCE_GROUP
+```
+++ To delete your GitHub repository, see [Deleting a repository](https://docs.github.com/en/github/administering-a-repository/managing-repository-settings/deleting-a-repository). ## Next steps
cosmos-db Synapse Link Time Travel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/synapse-link-time-travel.md
display(df)
| `spark.cosmos.timetravel.ignoreTransactionalUserDeletes` | `FALSE` | Ignore the records the user deleted from the transactional store. Set this setting to `TRUE` if you want the time travel result set to include records that were deleted from the transactional store. |
| `spark.cosmos.timetravel.fullFidelity` | `FALSE` | Set this setting to `TRUE` if you want to access all versions of records (including intermediate updates) at a specific point in history. |
+> [!IMPORTANT]
+> All configuration settings are used in UTC timezone.
+
## Limitations
- Time Travel is only available for Azure Synapse Spark.
cost-management-billing Cost Analysis Common Uses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/cost-analysis-common-uses.md
description: This article explains how you can get results for common cost analysis tasks in Cost Management. Previously updated : 04/05/2023 Last updated : 06/15/2023
Each metric affects how data is shown for your reservation charges.
**Actual cost** - Shows the purchase as it appears on your bill. For example, if you bought a one-year reservation for $1200 in January, cost analysis shows a $1200 cost in the month of January for the reservation. It doesn't show a reservation cost for other months of the year. If you group your actual costs by VM, then a VM that received the reservation benefit for a given month would have zero cost for the month.
-**Amortized cost** - Shows a reservation purchase split as an amortized cost over the duration of the reservation term. Using the same example above, cost analysis shows a $100 cost for each month throughout the year, if you purchased a one-year reservation for $1200 in January. If you group costs by VM in this example, you'd see cost attributed to each VM that received the reservation benefit.
+**Amortized cost** - Shows a reservation purchase split as an amortized cost over the duration of the reservation term. Using the same example above, cost analysis shows a varying cost for each month throughout the year, because of the varying number of days in a month. If you group costs by VM in this example, you'd see cost attributed to each VM that received the reservation benefit.
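To make the variation concrete (a rough illustration using the $1,200, one-year purchase above): the daily amortized rate is $1,200 / 365, or about $3.29 per day, so a 31-day month such as January shows roughly $101.92 while a 28-day February shows roughly $92.05.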
## View your reservation utilization
cost-management-billing Overview Cost Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/overview-cost-management.md
+
+ Title: Overview of Cost Management
+
+description: You use Cost Management features to monitor and control Azure spending and to optimize Azure resource use.
+keywords:
+++ Last updated : 06/12/2023+++++
+# What is Microsoft Cost Management
+
+Microsoft Cost Management is a suite of FinOps tools that help organizations analyze, monitor, and optimize their Microsoft Cloud costs. Cost Management is available to anyone with access to a billing account, subscription, resource group, or management group. You can access Cost Management within the billing and resource management experiences or separately as a standalone tool optimized for FinOps teams who manage cost across multiple scopes. You can also automate and extend native capabilities or enrich your own tools and processes with cost to maximize organizational visibility and accountability with all stakeholders and realize your optimization and efficiency goals faster.
+
+A few examples of what you can do in Cost Management include:
+
+- Report on and analyze costs in the Azure portal, Microsoft 365 admin center, or Power BI.
+- Monitor costs proactively with budget, anomaly, reservation utilization, and scheduled alerts.
+- Enable tag inheritance and split shared costs with cost allocation rules.
+- Automate business processes or integrate cost into external tools by exporting data.
+
+## How charges are processed
+
+To understand how Cost Management works, you should first understand the Commerce system. At its core, Microsoft Commerce is a data pipeline that underpins all Microsoft commercial transactions, whether consumer or commercial. While there are many inputs and connections to this pipeline, like the sign-up and Marketplace purchase experiences, this article focuses on the components that help you monitor, allocate, and optimize your costs.
++
+From the left, your Azure, Microsoft 365, Dynamics 365, and Power Platform services are all pushing data into the Commerce data pipeline. Each service publishes data on a different cadence. In general, if data for one service is slower than another, it's due to how frequently those services are publishing their usage and charges.
+
+As the data makes its way through the pipeline, the rating system applies discounts based on your specific price sheet and generates "rated usage," which includes price and quantity for each cost record. It's the basis for what you see in Cost Management and it's covered later. At the end of the month, credits are applied and the invoice is published. This process starts 72 hours after your billing period ends, which is usually the last day of the calendar month for most accounts. For example, if your billing period ends on March 31, charges will be finalized on April 4 at midnight.
+
+>[!IMPORTANT]
+>Credits are applied like a gift card or other payment instrument before the invoice is generated. While credit status is tracked as new charges flow into the data pipeline, credits aren't explicitly applied to these charges until the end of the month.
+
+Everything up to this point makes up the billing process where charges are finalized, discounts are applied, and invoices are published. Billing account and billing profile owners may be familiar with this process as part of the Billing experience within the Azure portal or Microsoft 365 admin center. The Billing experience allows you to review credits, manage your billing address and payment methods, pay invoices, and more: everything related to managing your billing relationship with Microsoft.
+
+- The [anomaly detection](../understand/analyze-unexpected-charges.md) model identifies anomalies daily based on normalized usage (not rated usage).
+- The cost allocation engine applies tag inheritance and [splits shared costs](allocate-costs.md).
+- AWS cost and usage reports are pulled based on any [connectors for AWS](aws-integration-manage.md) you may have configured.
+- Azure Advisor cost recommendations are pulled in to enable cost savings insights for subscriptions and resource groups.
+- Cost alerts are sent out for [budgets](tutorial-acm-create-budgets.md), [anomalies](../understand/analyze-unexpected-charges.md#create-an-anomaly-alert), [scheduled alerts](save-share-views.md#subscribe-to-scheduled-alerts), and more based on the configured settings.
+
+Lastly, cost details are made available from [cost analysis](quick-acm-cost-analysis.md) in the Azure portal and published to your storage account via [scheduled exports](tutorial-export-acm-data.md).
+
+## How Cost Management and Billing relate
+
+[Cost Management](https://portal.azure.com/#view/Microsoft_Azure_CostManagement/Menu) is a set of FinOps tools that enable you to analyze, manage, and optimize your costs.
+
+[Billing](https://portal.azure.com/#view/Microsoft_Azure_GTM/ModernBillingMenuBlade) provides all the tools you need to manage your billing account and pay invoices.
+
+While Cost Management is available from within the Billing experience, Cost Management is also available from every subscription, resource group, and management group in the Azure portal to ensure everyone has full visibility into the costs they're responsible for and can optimize their workloads to maximize efficiency. Cost Management is also available independently to streamline the process for managing cost across multiple billing accounts, subscriptions, resource groups, and/or management groups.
++
+## What data is included in Cost Management and Billing?
+
+Within the Billing experience, you can manage all the products, subscriptions, and recurring purchases you use; review your credits and commitments; and view and pay your invoices. Invoices are available online or as PDFs and include all billed charges and any applicable taxes. Credits are applied to the total invoice amount when invoices are generated. This invoicing process happens in parallel to Cost Management data processing, which means Cost Management doesn't include credits, taxes, and some purchases, like support charges in non-MCA accounts.
+
+Classic Cloud Solution Provider (CSP) and sponsorship subscriptions aren't supported in Cost Management. These subscriptions will be supported after they transition to Microsoft Customer Agreement.
+
+For more information about supported offers, what data is included, or how data is refreshed and retained in Cost Management, see [Understand Cost Management data](understand-cost-mgt-data.md).
+
+## Estimate your cloud costs
+
+During your cloud journey, there are many tools available to help you understand pricing:
+
+- The [Total Cost of Ownership (TCO) calculator](https://azure.microsoft.com/pricing/tco/calculator/) should be your first stop if you're curious about how much it would cost to move your existing on-premises infrastructure to the cloud.
+- [Azure Migrate](https://azure.microsoft.com/products/azure-migrate/) is a free tool that helps you analyze your on-premises workloads and plan your cloud migration.
+- The [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) helps you estimate the cost of creating new or expanding existing deployments. In this tool, you're able to explore various configurations of many different Azure services as you identify which SKUs and how much usage keeps you within your desired price range. For more information, see the pricing details for each of the services you use.
+- The [Virtual Machine Selector Tool](https://azure.microsoft.com/pricing/vm-selector/) is your one-stop-shop for finding the best VMs for your intended solution.
+- The [Azure Hybrid Benefit savings calculator](https://azure.microsoft.com/pricing/hybrid-benefit/#calculator) helps you estimate the savings of using your existing Windows Server and SQL Server licenses on Azure.
+
+## Report on and analyze costs
+
+Cost Management and Billing include several tools to help you understand, report on, and analyze your invoiced Microsoft Cloud and AWS costs.
+
+- [**Cost analysis**](quick-acm-cost-analysis.md) is a tool for ad-hoc cost exploration. Get quick answers with lightweight insights and analytics.
+- **Power BI** is an advanced solution to build more extensive dashboards and complex reports or combine costs with other data. Power BI is available for billing accounts and billing profiles.
+- [**Exports and the Cost Details API**](../automate/usage-details-best-practices.md) enable you to integrate cost details into external systems or business processes.
+- **Connectors for AWS** enable you to ingest your AWS cost details into Azure to facilitate managing Azure and AWS costs together. Once configured, the connector also enables other capabilities, like budget and scheduled alerts.
+
+For more information, see [Get started with reporting](reporting-get-started.md).
+
+## Organize and allocate costs
+
+Organizing and allocating costs are critical to ensuring invoices are routed to the correct business units and can be further split for internal billing, also known as *chargeback*. The first step to allocating cloud costs is organizing subscriptions and resources in a way that facilitates natural reporting and chargeback. Microsoft offers the following options to organize resources and subscriptions:
+
+- MCA **billing profiles** and **invoice sections** are used to [group subscriptions into invoices](../manage/mca-section-invoice.md). Each billing profile represents a separate invoice that can be billed to a different business unit and each invoice section is segmented separately within those invoices. You can also view costs by billing profile or invoice section in cost analysis.
+- EA **departments** and **enrollment accounts** are conceptually similar to invoice sections, as groups of subscriptions, but they aren't represented within the invoice PDF. They're included within the cost details backing each invoice, however. You can also view costs by department or enrollment account in cost analysis.
+- **Management groups** also allow grouping subscriptions together, but offer a few key differences:
+ - Management group access is inherited down to the subscriptions and resources.
+ - Management groups can be layered into multiple levels and subscriptions can be placed at any level.
+ - Management groups aren't included in cost details.
+ - All historical costs are returned for management groups based on the subscriptions currently within that hierarchy. When a subscription moves, all historical cost moves.
+ - Azure Policy supports management groups and they can have rules assigned to automate compliance reporting for your cost governance strategy.
+- **Subscriptions** and **resource groups** are the lowest level at which you can organize your cloud solutions. At Microsoft, every product (sometimes even limited to a single region) is managed within its own subscription. It simplifies cost governance but requires more overhead for subscription management. Most organizations use subscriptions for business units and separating dev/test from production or other environments, then use resource groups for the products. It complicates cost management because resource group owners don't have a way to manage cost across resource groups. On the other hand, it's a straightforward way to understand who's responsible for most resource-based charges. Keep in mind that not all charges come from resources and some don't have resource groups or subscriptions associated with them. It also changes as you move to MCA billing accounts.
+- **Resource tags** are the only way to add your own business context to cost details and are perhaps the most flexible way to map resources to applications, business units, environments, owners, etc. For more information, see [How tags are used in cost and usage data](understand-cost-mgt-data.md#how-tags-are-used-in-cost-and-usage-data) for limitations and important considerations.
+
+Once your resources and subscriptions are organized using the subscription hierarchy and have the necessary metadata (tags) to facilitate further allocation, use the following tools in Cost Management to streamline cost reporting:
+
+- [Tag inheritance](enable-tag-inheritance.md) simplifies the application of tags by copying subscription and resource group tags down to the resources in cost data. These tags aren't saved on the resources themselves. The change only happens within Cost Management and isn't available to other services, like Azure Policy.
+- [Cost allocation](allocate-costs.md) offers the ability to "move" or split shared costs from one subscription, resource group, or tag to another subscription, resource group, or tag. Cost allocation doesn't change the invoice. The goal of cost allocation is to reduce overhead and more accurately report on where charges are ultimately coming from (albeit indirectly), which should drive more complete accountability.
+
+How you organize and allocate costs plays a huge role in how people within your organization can manage and optimize costs. Be sure to plan ahead and revisit your allocation strategy yearly.
++
+## Monitor costs with alerts
+
+Cost Management and Billing offer many different types of emails and alerts to keep you informed and help you proactively manage your account and incurred costs.
+
+- [**Budget alerts**](tutorial-acm-create-budgets.md) notify recipients when cost exceeds a predefined cost or forecast amount. Budgets can be visualized in cost analysis and are available on every scope supported by Cost Management. Subscription and resource group budgets can also be configured to notify an action group to take automated actions to reduce or even stop further charges.
+- [**Anomaly alerts**](../understand/analyze-unexpected-charges.md) notify recipients when an unexpected change in daily usage has been detected. It can be a spike or a dip. Anomaly detection is only available for subscriptions and can be viewed within the cost analysis preview. Anomaly alerts can be configured from the cost alerts page.
+- [**Scheduled alerts**](save-share-views.md#subscribe-to-scheduled-alerts) notify recipients about the latest costs on a daily, weekly, or monthly schedule based on a saved cost view. Alert emails include a visual chart representation of the view and can optionally include a CSV file. Views are configured in cost analysis, but recipients don't require access to cost in order to view the email, chart, or linked CSV.
+- **EA commitment balance alerts** are automatically sent to any notification contacts configured on the EA billing account when the balance is 90% or 100% used.
+- **Invoice alerts** can be configured for MCA billing profiles and Microsoft Online Services Program (MOSP) subscriptions. For details, see [View and download your Azure invoice](../understand/download-azure-invoice.md).
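The budget alerts described in the list above can also be created programmatically. The following Azure CLI sketch creates a simple monthly cost budget at subscription scope, assuming the `az consumption` commands are available in your CLI version; the budget name, amount, and dates are placeholders, and notification recipients are typically added afterwards in the portal or through the Budgets REST API.

```azurecli
# Create a monthly cost budget of $1,000 for the current subscription.
# Budget name, amount, and dates are placeholders; check `az consumption budget create --help` for allowed values.
az consumption budget create \
  --budget-name monthly-engineering-budget \
  --amount 1000 \
  --category cost \
  --time-grain monthly \
  --start-date 2023-07-01 \
  --end-date 2024-06-30
```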
+
+For more information, see [Monitor usage and spending with cost alerts](cost-mgt-alerts-monitor-usage-spending.md).
+
+## Optimize costs
+
+Microsoft offers a wide range of tools for optimizing your costs. Some of these tools are available outside the Cost Management and Billing experience, but are included for completeness.
+
+- There are many [**free services**](https://azure.microsoft.com/pricing/free-services/) available in Azure. Be sure to pay close attention to the constraints. Different services are free indefinitely, for 12 months, or 30 days. Some are free up to a specific amount of usage and some may have dependencies on other services that aren't free.
+- [**Azure Advisor cost recommendations**](tutorial-acm-opt-recommendations.md) should be your first stop when interested in optimizing existing resources. Advisor recommendations are updated daily and are based on your usage patterns. Advisor is available for subscriptions and resource groups. Management group users can also see recommendations but they need to select the desired subscriptions. Billing users can only see recommendations for subscriptions they have resource access to.
+- [**Azure savings plans**](../savings-plan/index.yml) save you money when you have consistent usage of Azure compute resources. A savings plan can reduce your resource costs by up to 65% compared to pay-as-you-go prices.
+- [**Azure reservations**](https://azure.microsoft.com/reservations/) help you save up to 72% compared to pay-as-you-go rates by precommitting to specific usage amounts for a set time duration.
+- [**Azure Hybrid Benefit**](https://azure.microsoft.com/pricing/hybrid-benefit/) helps you significantly reduce costs by using on-premises Windows Server and SQL Server licenses or RedHat and SUSE Linux subscriptions on Azure.
+
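To review the Advisor cost recommendations mentioned above from the command line, the following Azure CLI sketch lists them for the current subscription; the table output format is just an example.

```azurecli
# List Azure Advisor cost recommendations for the current subscription.
az advisor recommendation list --category Cost --output table
```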
+For other options, see [Azure benefits and incentives](https://azure.microsoft.com/pricing/offers/#cloud).
+
+## Next steps
+
+For other options, see [Azure benefits and incentives](https://azure.microsoft.com/pricing/offers/#cloud).
cost-management-billing Permission View Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/permission-view-manage.md
Previously updated : 02/03/2023 Last updated : 06/16/2023
If you're a billing administrator, use following steps to view and manage all sa
- If you're a Microsoft Customer Agreement billing profile owner, in the left menu, select **Billing profiles**. In the list of billing profiles, select one. 1. In the left menu, select **Products + services** > **Savings plans**. The complete list of savings plans for your EA enrollment or billing profile is shown.
-1. Billing administrators can take ownership of a savings plan by selecting one or multiple savings plans, selecting **Grant access** and selecting **Grant access** in the window that appears.
+1. Billing administrators can take ownership of a savings plan with the [Savings Plan Order - Elevate REST API](/rest/api/billingbenefits/savings-plan-order/elevate) to give themselves Azure RBAC roles.
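If you prefer to call the Elevate API from the command line, a hedged sketch using `az rest` follows. The savings plan order ID is a placeholder and the API version shown is an assumption; confirm both against the linked REST reference before use.

```azurecli
# Grant yourself the Azure RBAC role on a savings plan order by calling the Elevate API.
# <savings-plan-order-id> is a placeholder; the api-version shown is an assumption.
az rest --method post \
  --url "https://management.azure.com/providers/Microsoft.BillingBenefits/savingsPlanOrders/<savings-plan-order-id>/elevate?api-version=2022-11-01"
```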
### Adding billing administrators
databox Data Box Disk System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-system-requirements.md
Title: Microsoft Azure Data Box Disk system requirements| Microsoft Docs
-description: Learn about the software and networking requirements for your Azure Data Box Disk
+description: Learn about the software and networking requirements for your Azure Data Box Disk
The client computer containing the data must have a USB 3.0 or later port. The d
## Supported storage accounts
+> [!Note]
+> Classic storage accounts will not be supported starting **August 1, 2023**.
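If you're not sure whether any classic storage accounts remain in your subscription, the following Azure CLI sketch lists them. It's a general query, not specific to Data Box, and returns nothing if no classic accounts exist.

```azurecli
# List any classic (pre-ARM) storage accounts in the current subscription.
az resource list \
  --resource-type Microsoft.ClassicStorage/storageAccounts \
  --output table
```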
+ Here is a list of the supported storage types for the Data Box Disk. | **Storage account** | **Supported access tiers** |
databox Data Box Heavy System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-heavy-system-requirements.md
Previously updated : 10/07/2022 Last updated : 06/13/2023 # Azure Data Box Heavy system requirements
The software requirements include the information on the supported operating sys
### Supported storage accounts
+> [!Note]
+> Classic storage accounts will not be supported starting **August 1, 2023**.
+ [!INCLUDE [data-box-supported-storage-accounts](../../includes/data-box-supported-storage-accounts.md)] ### Supported storage types
databox Data Box System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-system-requirements.md
Title: Microsoft Azure Data Box system requirements| Microsoft Docs
-description: Learn about important system requirements for your Azure Data Box and for clients that connect to the Data Box.
+description: Learn about important system requirements for your Azure Data Box and for clients that connect to the Data Box.
The software requirements include supported operating systems, file transfer pro
### Supported storage accounts
+> [!Note]
+> Classic storage accounts will not be supported starting **August 1, 2023**.
+ [!INCLUDE [data-box-supported-storage-accounts](../../includes/data-box-supported-storage-accounts.md)] ### Supported storage types
ddos-protection Ddos Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-overview.md
Azure DDoS Protection, combined with application design best practices, provides
:::image type="content" source="./media/ddos-best-practices/ddos-protection-overview-architecture.png" alt-text="Diagram of the reference architecture for a DDoS protected PaaS web application.":::
+Azure DDoS Protection protects at network layers 3 and 4. To protect web applications at layer 7, you need to add protection at the application layer by using a WAF offering. For more information, see [Application DDoS protection](../web-application-firewall/shared/application-ddos-protection.md).
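As a minimal sketch of enabling the network-layer protection described above, the following Azure CLI command creates a DDoS protection plan and associates an existing virtual network with it at creation time. The resource group, plan, and VNet names are placeholders, not values from the article.

```azurecli
# Create a DDoS protection plan and associate an existing VNet with it.
# Resource group, plan, and VNet names are placeholders.
az network ddos-protection create \
  --resource-group rg-network \
  --name ddos-plan-prod \
  --vnets vnet-prod
```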
+ ## Key benefits ### Always-on traffic monitoring
ddos-protection Ddos Protection Reference Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-reference-architectures.md
Previously updated : 10/12/2022 Last updated : 06/15/2023
Azure DDoS Protection is designed [for services that are deployed in a virtual network](../virtual-network/virtual-network-for-azure-services.md). The following reference architectures are arranged by scenarios, with architecture patterns grouped together.
-> [!NOTE]
-> Protected resources include public IPs attached to an IaaS VM (except for single VM running behind a public IP), Load Balancer (Classic & Standard Load Balancers), Application Gateway (including WAF) cluster, Firewall, Bastion, VPN Gateway, Service Fabric, IaaS based Network Virtual Appliance (NVA) or Azure API Management (Premium tier only), connected to a virtual network (VNet) in the external mode. Protection also covers public IP ranges brought to Azure via Custom IP Prefixes (BYOIPs). PaaS services (multi-tenant), which includes Azure App Service Environment for Power Apps, Azure API Management in deployment modes other than those supported above, or Azure Virtual WAN are not supported at present.
+## Protected Resources
+
+Supported resources include:
+* Public IPs attached to:
+ * An IaaS virtual machine.
+ * Application Gateway (including WAF) cluster.
+ * Azure API Management (Premium tier only), connected to a virtual network (VNet) in the external mode.
+ * Bastion.
+ * Firewall.
+ * IaaS based Network Virtual Appliance (NVA).
+ * Load Balancer (Classic & Standard Load Balancers).
+ * Service Fabric.
+ * VPN Gateway.
+* Protection also covers public IP ranges brought to Azure via Custom IP Prefixes (BYOIPs).
+
+
+Unsupported resources include:
+
+* Azure Virtual WAN.
+* Azure API Management in deployment modes other than the supported modes.
+* PaaS services (multi-tenant) including Azure App Service Environment for Power Apps.
+* Public IPs created from a public IP address prefix.
+ > [!NOTE]
-> Protected resources that include public IPs created from public IP address prefix are not supported at present.
+> For web workloads, we highly recommend utilizing [**Azure DDoS protection**](../ddos-protection/ddos-protection-overview.md) and a [**web application firewall**](../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to deploy [**Azure Front Door**](../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [**protection against network-level DDoS attacks**](../frontdoor/front-door-ddos.md).
## Virtual machine (Windows/Linux) workloads
ddos-protection Manage Ddos Ip Protection Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-ip-protection-portal.md
Get started with Azure DDoS IP Protection by using the Azure portal. In this quickstart, you'll enable DDoS IP protection and link it to a public IP address. + ## Prerequisites - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
ddos-protection Manage Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection.md
A DDoS protection plan defines a set of virtual networks that have DDoS Network
In this quickstart, you'll create a DDoS protection plan and link it to a virtual network. - ## Prerequisites - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
defender-for-iot How To Work With The Sensor Device Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-the-sensor-device-map.md
By default, IT devices are automatically aggregated by [subnet](how-to-control-w
1. Sign into your OT sensor and select **Device map**. 1. Select one or more expanded subnets and then select **Collapse All**.
+### View traffic details between connected devices
+
+**To view traffic details between connected devices**:
+
+1. Sign into your OT sensor and select **Device map**.
+1. Locate two connected devices on the map. You might need to zoom in on the map to view a device icon, which looks like a monitor.
+1. Select the line connecting two devices on the map, and then select :::image type="icon" source="media/how-to-work-with-maps/expand-pane-icon.png" border="false"::: to expand the **Connection Properties** pane on the right. For example:
+
+ :::image type="content" source="media/how-to-work-with-maps/connection-properties.png" alt-text="Screenshot of connection properties on the device map." lightbox="media/how-to-work-with-maps/connection-properties.png":::
+
+1. In the **Connection Properties** pane, you can view traffic details between the two devices, such as:
+
+ - How long ago the connection was first detected.
+ - The IP address of each device.
+ - The status of each device.
+ - The number of alerts for each device.
+ - A chart for total bandwidth.
+ - A chart for top traffic by port.
+ ## Create a custom device group In addition to OT sensor's [built-in device groups](#built-in-device-map-groups), create new custom groups as needed to use when highlighting or filtering devices on the map.
devtest-labs Configure Lab Remote Desktop Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/configure-lab-remote-desktop-gateway.md
Last updated 05/30/2023+ # Configure and use a remote desktop gateway in Azure DevTest Labs
Follow these steps to set up a sample remote desktop gateway farm.
|`signCertificate` |**Required** |The Base64 encoding for the signing certificate for the gateway machine. | |`signCertificatePassword` |**Required** |The password for the signing certificate for the gateway machine. | |`signCertificateThumbprint` |**Required** |The certificate thumbprint for identification in the local certificate store of the signing certificate. |
- |`_artifactsLocation` |**Required** |The URI location to find artifacts this template requires. This value must be a fully qualified URI, not a relative path. The artifacts include other templates, PowerShell scripts, and the Remote Desktop Gateway Pluggable Authentication module, expected to be named *RDGatewayFedAuth.msi*, that supports token authentication. |
+ |`_artifactsLocation` |**Required** |The URI location to find artifacts this template requires. This value must be a fully qualified URI, not a relative path. The artifacts include other templates, PowerShell scripts, and the Remote Desktop Gateway Pluggable Authentication module, expected to be named *RDGatewayFedAuth.msi*, which supports token authentication. |
|`_artifactsLocationSasToken`|**Required** |The shared access signature (SAS) token to access artifacts, if the `_artifactsLocation` is an Azure storage account. | 1. Run the following Azure CLI command to deploy *azuredeploy.json*:
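The article's exact deployment command isn't included in this digest. As a rough, hypothetical illustration only, deploying a template such as *azuredeploy.json* with the parameters in the table typically looks like the following sketch; the resource group name and all parameter values are placeholders.

```azurecli
# Hypothetical example only - resource group and parameter values are placeholders.
az deployment group create \
  --resource-group rg-rdgateway \
  --template-file azuredeploy.json \
  --parameters signCertificatePassword=<certificate-password> \
               _artifactsLocation=<artifacts-uri> \
               _artifactsLocationSasToken=<sas-token>
```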
Once you configure both the gateway and the lab, the RDP connection file created
### Automate lab configuration -- Powershell: [Set-DevTestLabGateway.ps1](https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/GatewaySample/tools/Set-DevTestLabGateway.ps1) is a sample PowerShell script to automatically set **Gateway hostname** and **Gateway token secret** settings.
+- PowerShell: [Set-DevTestLabGateway.ps1](https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/GatewaySample/tools/Set-DevTestLabGateway.ps1) is a sample PowerShell script to automatically set **Gateway hostname** and **Gateway token secret** settings.
- ARM: Use the [Gateway sample ARM templates](https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/GatewaySample/arm/lab) in the Azure DevTest Labs GitHub repository to create or update labs with **Gateway hostname** and **Gateway token secret** settings.
devtest-labs Connect Virtual Machine Through Browser https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/connect-virtual-machine-through-browser.md
Last updated 06/14/2023+ # Connect to DevTest Labs VMs through a browser with Azure Bastion
devtest-labs Devtest Lab Add Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-add-vm.md
Last updated 05/22/2023+ # Create lab virtual machines in Azure DevTest Labs
devtest-labs Devtest Lab Attach Detach Data Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-attach-detach-data-disk.md
Last updated 04/24/2023+ # Attach or detach a data disk for a lab virtual machine in Azure DevTest Labs
devtest-labs Devtest Lab Auto Shutdown https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-auto-shutdown.md
Last updated 04/24/2023+ # Configure auto shutdown for labs and VMs in DevTest Labs
devtest-labs Devtest Lab Auto Startup Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-auto-startup-vm.md
Last updated 04/24/2023+ # Automatically start lab VMs with auto-start in Azure DevTest Labs
devtest-labs Devtest Lab Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-concepts.md
Last updated 06/14/2023+ # DevTest Labs concepts
devtest-labs Devtest Lab Create Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-create-lab.md
Last updated 05/22/2023-+ # Quickstart: Create a lab in the Azure portal
devtest-labs Devtest Lab Guidance Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-guidance-get-started.md
Last updated 05/12/2023 + # Azure DevTest Labs scenarios
devtest-labs Devtest Lab Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-overview.md
Last updated 04/20/2023+ # What is Azure DevTest Labs?
devtest-labs Devtest Lab Troubleshoot Apply Artifacts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-troubleshoot-apply-artifacts.md
description: Troubleshoot issues with applying artifacts on an Azure DevTest Lab
Previously updated : 03/31/2022 Last updated : 06/15/2023+ # Troubleshoot issues applying artifacts on DevTest Labs virtual machines
You can troubleshoot artifact failures from the Azure portal or from the VM wher
## Troubleshoot artifact failures from the Azure portal
-If you can't apply an artifact to a VM, first check the following in the Azure portal:
+If you can't apply an artifact to a VM, first check the following items in the Azure portal:
- Make sure that the VM is running. - Navigate to the **Artifacts** page for the lab VM to make sure the VM is ready for applying artifacts. If the Apply artifacts feature isn't available, you see a message at the top of the page.
An artifact can stop responding, and finally appear as **Failed**. To investigat
1. On your lab **Overview** page, from the list under **My virtual machines**, select the VM that has the artifact you want to investigate. 1. On the VM **Overview** page, select **Artifacts** in the left navigation. The **Artifacts** page lists artifacts associated with the VM, and their status.
- ![Screenshot showing the list of artifacts and their status.](./media/devtest-lab-troubleshoot-apply-artifacts/artifact-list.png)
+ :::image type="content" source="media/devtest-lab-troubleshoot-apply-artifacts/artifact-list.png" alt-text="Screenshot showing the list of artifacts and their status.":::
1. Select the artifact that shows a **Failed** status. The artifact opens with an extension message that includes details about the artifact failure.
- ![Screenshot of the error message for a failed artifact.](./media/devtest-lab-troubleshoot-apply-artifacts/artifact-failure.png)
+ :::image type="content" source="media/devtest-lab-troubleshoot-apply-artifacts/artifact-failure.png" alt-text="Screenshot of the error message for a failed artifact.":::
### Inspect the Activity logs
Select the failed entry to see the error details. On the failure page, select **
### Investigate the private artifact repository and lab storage account
-When DevTest Labs applies an artifact, it reads the artifact configuration and files from connected repositories. By default, DevTest Labs has access to the DevTest Labs [public Artifact repository](https://github.com/Azure/azure-devtestlab/tree/master/Artifacts). You can also connect a lab to a private repository to access custom artifacts. If a custom artifact fails to install, make sure the personal access token (PAT) for the private repository hasn't expired. If the PAT is expired, the artifact won't be listed, and any scripts that refer to artifacts from that repository will fail.
+When DevTest Labs applies an artifact, it reads the artifact configuration and files from connected repositories. By default, DevTest Labs has access to the DevTest Labs [public Artifact repository](https://github.com/Azure/azure-devtestlab/tree/master/Artifacts). You can also connect a lab to a private repository to access custom artifacts. If a custom artifact fails to install, make sure the personal access token (PAT) for the private repository hasn't expired. If the PAT is expired, the artifact won't be listed, and any scripts that refer to artifacts from that repository fail.
Depending on configuration, lab VMs might not have direct access to the artifact repository. DevTest Labs caches the artifacts in a lab storage account that's created when the lab first initializes. If access to this storage account is blocked, such as when traffic is blocked from the VM to the Azure Storage service, you might see an error similar to this:
To troubleshoot connectivity issues to the Azure Storage account:
1. Navigate to the lab's resource group. 1. Locate the resource of type **Storage account** whose name matches the convention.
- 1. On the storage account **Overview** page, select **Firewalls and virtual networks** in the left navigation.
- 1. Ensure that **Firewalls and virtual networks** is set to **All networks**. Or, if the **Selected networks** option is selected, make sure the lab's virtual networks used to create VMs are added to the list.
+ 1. On the storage account **Overview** page, select **Networking** in the left navigation.
+ 1. On the **Firewalls and virtual networks** tab, ensure that **Public network access** is set to **Enabled from all networks**. Or, if the **Enabled from selected virtual networks and IP addresses** option is selected, make sure the lab's virtual networks used to create VMs are added to the list.
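If you'd rather check or fix this setting from the command line, the following Azure CLI sketch enables public network access on the lab's storage account. The account and resource group names are placeholders, and you should prefer the selected-networks option if your security policy requires it.

```azurecli
# Allow access to the lab's storage account from all networks.
# Account and resource group names are placeholders.
az storage account update \
  --name <lab-storage-account> \
  --resource-group <lab-resource-group> \
  --public-network-access Enabled \
  --default-action Allow
```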
For in-depth troubleshooting, see [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md).
You can connect to the lab VM where the artifact failed, and investigate the iss
1. On the lab VM, go to *C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\\*1.10.12\*\\Status\\*, where *\*1.10.12\** is the CSE version number.
- ![Screenshot of the Status folder on the lab V M.](./media/devtest-lab-troubleshoot-apply-artifacts/status-folder.png)
+ :::image type="content" source="media/devtest-lab-troubleshoot-apply-artifacts/status-folder.png" alt-text="Screenshot of the Status folder on the lab V M.":::
1. Open and inspect the *STATUS* file to view the error.
For general information about Azure extensions, see [Azure virtual machine exten
The artifact installation could fail because of the way the artifact installation script is authored. For example: -- The script has mandatory parameters, but fails to pass a value, either by allowing the user to leave it blank, or because there's no default value in the *artifactfile.json* definition file. The script stops responding because it's awaiting user input.
+- The script has mandatory parameters but fails to pass a value, either by allowing the user to leave it blank, or because there's no default value in the *artifactfile.json* definition file. The script stops responding because it's awaiting user input.
- The script requires user input as part of execution. Scripts should work silently without requiring user intervention.
If you need more help, try one of the following support channels:
- Contact the Azure DevTest Labs experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/forums/). - Get answers from Azure experts through [Azure Forums](https://azure.microsoft.com/support/forums). - Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.-- Go to the [Azure support site](https://azure.microsoft.com/support/options) and select **Get Support** to file an Azure support incident.
+- Go to the [Azure support site](https://azure.microsoft.com/support/options) and select **Submit a support ticket** to file an Azure support incident.
devtest-labs Devtest Lab Use Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-use-resource-manager-template.md
Last updated 06/09/2023+ # Use ARM templates to create DevTest Labs virtual machines
You can customize and use an ARM template from any Azure VM base to deploy more
1. On the **Advanced Settings** tab, select **View ARM template**. 1. Copy and [save the ARM template](#store-arm-templates-in-git-repositories) to use for creating more VMs.
- ![Screenshot that shows an ARM template to save for later use.](./media/devtest-lab-use-arm-template/devtestlab-lab-copy-rm-template.png)
+ :::image type="content" source="media/devtest-lab-use-arm-template/devtestlab-lab-copy-rm-template.png" alt-text="Screenshot that shows an ARM template to save for later use.":::
1. If you want to create an instance of the VM now, on the **Basic Settings** tab, select **Create**.
Use the following file structure to store an ARM template in a source control re
- To reuse the ARM template, you need to update the `parameters` section of *azuredeploy.json*. You can create a *parameter.json* file that customizes just the parameters, without having to edit the main template file. Name this parameter file *azuredeploy.parameters.json*.
- ![Customize parameters using a JSON file](./media/devtest-lab-use-arm-template/devtestlab-lab-custom-params.png)
+ :::image type="content" source="media/devtest-lab-use-arm-template/devtestlab-lab-custom-params.png" alt-text="Customize parameters using a JSON file.":::
In the parameters file, you can use the parameters `_artifactsLocation` and `_artifactsLocationSasToken` to construct a `parametersLink` URI value for automatically managing nested templates. For more information about nested templates, see [Deploy nested Azure Resource Manager templates for testing environments](deploy-nested-template-environments.md).
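Outside of DevTest Labs, you can validate the template and parameters file pair with a standard ARM deployment. The following Azure CLI sketch assumes a throwaway test resource group and the file names shown above; it isn't the article's own example.

```azurecli
# Test the template with its parameters file in a throwaway resource group.
# The resource group name is a placeholder.
az deployment group create \
  --resource-group rg-template-test \
  --template-file azuredeploy.json \
  --parameters @azuredeploy.parameters.json
```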
Use the following file structure to store an ARM template in a source control re
The following screenshot shows a typical ARM template folder structure in a repository.
-![Screenshot that shows key ARM template files in a repository.](./media/devtest-lab-create-environment-from-arm/main-template.png)
## Add template repositories to labs
Add your template repositories to your lab so all lab users can access the templ
1. To add your private ARM template repository to the lab, select **Add** in the top menu bar.
- ![Screenshot that shows the Repositories configuration screen.](./media/devtest-lab-create-environment-from-arm/public-repo.png)
+ :::image type="content" source="media/devtest-lab-create-environment-from-arm/public-repo.png" alt-text="Screenshot that shows the Repositories configuration screen.":::
1. In the **Repositories** pane, enter the following information:
Add your template repositories to your lab so all lab users can access the templ
1. Select **Save**.
- ![Screenshot that shows adding a new template repository to a lab.](./media/devtest-lab-create-environment-from-arm/repo-values.png)
+ :::image type="content" source="media/devtest-lab-create-environment-from-arm/repo-values.png" alt-text="Screenshot that shows adding a new template repository to a lab.":::
The repository now appears in the **Repositories** list for the lab. Users can now use the repository templates to [create multi-VM DevTest Labs environments](devtest-lab-create-environment-from-arm.md). Lab administrators can use the templates to [automate lab deployment and management tasks](devtest-lab-use-arm-and-powershell-for-lab-resources.md#arm-template-automation).
devtest-labs Create Lab Windows Vm Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/quickstarts/create-lab-windows-vm-terraform.md
Title: 'Quickstart: Create a lab in Azure DevTest Labs using Terraform'
description: 'In this article, you create a Windows virtual machine in a lab within Azure DevTest Labs using Terraform' Last updated 4/14/2023-+
devtest-labs Troubleshoot Vm Deployment Failures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/troubleshoot-vm-deployment-failures.md
Last updated 02/27/2023+ # Troubleshoot virtual machine (VM) deployment failures in Azure DevTest Labs
devtest-labs Tutorial Create Custom Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/tutorial-create-custom-lab.md
Last updated 05/22/2023+ # Tutorial: Create a DevTest Labs lab and VM and add a user in the Azure portal
devtest-labs Tutorial Use Custom Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/tutorial-use-custom-lab.md
Last updated 05/22/2023+ # Tutorial: Access a lab in Azure DevTest Labs
devtest-labs Use Command Line Start Stop Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/use-command-line-start-stop-virtual-machines.md
Last updated 04/24/2023-+ ms.devlang: azurecli
event-grid Event Handlers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-handlers.md
Title: Azure Event Grid event handlers description: Describes supported event handlers for Azure Event Grid. Azure Automation, Functions, Event Hubs, Hybrid Connections, Logic Apps, Service Bus, Queue Storage, Webhooks. Previously updated : 03/15/2022 Last updated : 06/16/2023 # Event handlers in Azure Event Grid
event-hubs Event Hubs Dotnet Standard Getstarted Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-dotnet-standard-getstarted-send.md
This section shows you how to create a .NET Core console application to send eve
## Receive events from the event hub This section shows how to write a .NET Core console application that receives events from an event hub using an event processor. The event processor simplifies receiving events from event hubs.
-> [!WARNING]
-> If you run this code on **Azure Stack Hub**, you will experience runtime errors unless you target a specific Storage API version. That's because the Event Hubs SDK uses the latest available Azure Storage API available in Azure that may not be available on your Azure Stack Hub platform. Azure Stack Hub may support a different version of Storage Blob SDK than those typically available on Azure. If you are using Azure Blob Storage as a checkpoint store, check the [supported Azure Storage API version for your Azure Stack Hub build](/azure-stack/user/azure-stack-acs-differences?#api-version) and target that version in your code.
->
-> For example, If you are running on Azure Stack Hub version 2005, the highest available version for the Storage service is version 2019-02-02. By default, the Event Hubs SDK client library uses the highest available version on Azure (2019-07-07 at the time of the release of the SDK). In this case, besides following steps in this section, you will also need to add code to target the Storage service API version 2019-02-02. For an example on how to target a specific Storage API version, see [this sample on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs.Processor/samples/).
--- ### Create an Azure Storage Account and a blob container In this quickstart, you use Azure Storage as the checkpoint store. Follow these steps to create an Azure Storage account. 1. [Create an Azure Storage account](../storage/common/storage-account-create.md?tabs=azure-portal) 2. [Create a blob container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) 3. Authenticate to the blob container using either Azure AD (passwordless) authentication or a connection string to the namespace.++ ## [Passwordless (Recommended)](#tab/passwordless)
event-hubs Event Hubs Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-features.md
If a reader disconnects from a partition, when it reconnects it begins reading a
> [!IMPORTANT] > Offsets are provided by the Event Hubs service. It's the responsibility of the consumer to checkpoint as events are processed.
-> [!NOTE]
-> If you are using Azure Blob Storage as the checkpoint store in an environment that supports a different version of Storage Blob SDK than those typically available on Azure, you'll need to use code to change the Storage service API version to the specific version supported by that environment. For example, if you are running [Event Hubs on an Azure Stack Hub version 2002](/azure-stack/user/event-hubs-overview), the highest available version for the Storage service is version 2017-11-09. In this case, you need to use code to target the Storage service API version to 2017-11-09. For an example on how to target a specific Storage API version, see these samples on GitHub:
-> - [.NET](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs.Processor/samples/).
-> - [Java](https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/eventhubs/azure-messaging-eventhubs-checkpointstore-blob/src/samples/java/com/azure/messaging/eventhubs/checkpointstore/blob/)
-> - [JavaScript](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/eventhub/eventhubs-checkpointstore-blob/samples/v1/javascript) or [TypeScript](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/eventhub/eventhubs-checkpointstore-blob/samples/v1/typescript)
-> - [Python](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventhub/azure-eventhub-checkpointstoreblob-aio/samples/)
### Log compaction
event-hubs Event Hubs Go Get Started Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-go-get-started-send.md
Don't run the application yet. You first need to run the receiver app and then t
State such as leases on partitions and checkpoints in the event stream are shared between receivers using an Azure Storage container. You can create a storage account and container with the Go SDK, but you can also create one by following the instructions in [About Azure storage accounts](../storage/common/storage-account-create.md). + ### Go packages To receive the messages, get the Go packages for Event Hubs as shown in the following example.
event-hubs Event Hubs Java Get Started Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-java-get-started-send.md
Build the program, and ensure that there are no errors. You'll run this program
The code in this tutorial is based on the [EventProcessorClient sample on GitHub](https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/eventhubs/azure-messaging-eventhubs-checkpointstore-blob/src/samples/java/com/azure/messaging/eventhubs/checkpointstore/blob/EventProcessorBlobCheckpointStoreSample.java), which you can examine to see the full working application.
-> [!WARNING]
-> If you run this code on Azure Stack Hub, you will experience runtime errors unless you target a specific Storage API version. That's because the Event Hubs SDK uses the latest available Azure Storage API available in Azure that may not be available on your Azure Stack Hub platform. Azure Stack Hub may support a different version of Azure Blob Storage SDK than those typically available on Azure. If you are using Azure Blob Storage as a checkpoint store, check the [supported Azure Storage API version for your Azure Stack Hub build](/azure-stack/user/azure-stack-acs-differences?#api-version) and target that version in your code.
->
-> For example, If you are running on Azure Stack Hub version 2005, the highest available version for the Storage service is version 2019-02-02. By default, the Event Hubs SDK client library uses the highest available version on Azure (2019-07-07 at the time of the release of the SDK). In this case, besides following steps in this section, you will also need to add code to target the Storage service API version 2019-02-02. For an example on how to target a specific Storage API version, see [this sample on GitHub](https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/eventhubs/azure-messaging-eventhubs-checkpointstore-blob/src/samples/java/com/azure/messaging/eventhubs/checkpointstore/blob/EventProcessorWithCustomStorageVersion.java).
- ### Create an Azure Storage and a blob container
event-hubs Event Hubs Node Get Started Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-node-get-started-send.md
Congratulations! You have now sent events to an event hub.
## Receive events In this section, you receive events from an event hub by using an Azure Blob storage checkpoint store in a JavaScript application. It performs metadata checkpoints on received messages at regular intervals in an Azure Storage blob. This approach makes it easy to continue receiving messages later from where you left off.
-> [!WARNING]
-> If you run this code on Azure Stack Hub, you will experience runtime errors unless you target a specific Storage API version. That's because the Event Hubs SDK uses the latest available Azure Storage API available in Azure that may not be available on your Azure Stack Hub platform. Azure Stack Hub may support a different version of Storage Blob SDK than those typically available on Azure. If you are using Azure Blog Storage as a checkpoint store, check the [supported Azure Storage API version for your Azure Stack Hub build](/azure-stack/user/azure-stack-acs-differences?#api-version) and target that version in your code.
->
-> For example, If you are running on Azure Stack Hub version 2005, the highest available version for the Storage service is version 2019-02-02. By default, the Event Hubs SDK client library uses the highest available version on Azure (2019-07-07 at the time of the release of the SDK). In this case, besides following steps in this section, you will also need to add code to target the Storage service API version 2019-02-02. For an example on how to target a specific Storage API version, see [JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/eventhub/eventhubs-checkpointstore-blob/samples/v1/javascript/receiveEventsWithApiSpecificStorage.js) and [TypeScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/eventhub/eventhubs-checkpointstore-blob/samples/v1/typescript/src/receiveEventsWithApiSpecificStorage.ts) samples on GitHub.
### Create an Azure storage account and a blob container
event-hubs Event Hubs Python Get Started Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-python-get-started-send.md
In this section, create a Python script to send events to the event hub that you
This quickstart uses Azure Blob storage as a checkpoint store. The checkpoint store is used to persist checkpoints (that is, the last read positions).
-> [!WARNING]
-> If you run this code on Azure Stack Hub, you will experience runtime errors unless you target a specific Storage API version. That's because the Event Hubs SDK uses the latest available Azure Storage API available in Azure that may not be available on your Azure Stack Hub platform. Azure Stack Hub may support a different version of Storage Blob SDK than those typically available on Azure. If you are using Azure Blog Storage as a checkpoint store, check the [supported Azure Storage API version for your Azure Stack Hub build](/azure-stack/user/azure-stack-acs-differences?#api-version) and target that version in your code.
->
-> For example, If you are running on Azure Stack Hub version 2005, the highest available version for the Storage service is version 2019-02-02. By default, the Event Hubs SDK client library uses the highest available version on Azure (2019-07-07 at the time of the release of the SDK). In this case, besides following steps in this section, you will also need to add code to target the Storage service API version 2019-02-02. For an example on how to target a specific Storage API version, see the [synchronous](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventhub/azure-eventhub-checkpointstoreblob/samples/receive_events_using_checkpoint_store_storage_api_version.py) and [asynchronous](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventhub/azure-eventhub-checkpointstoreblob-aio/samples/receive_events_using_checkpoint_store_storage_api_version_async.py) samples on GitHub.
### Create an Azure storage account and a blob container Create an Azure storage account and a blob container in it by doing the following steps:
event-hubs Event Processor Balance Partition Load https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-processor-balance-partition-load.md
If an event processor disconnects from a partition, another instance can resume
When the checkpoint is performed to mark an event as processed, an entry in checkpoint store is added or updated with the event's offset and sequence number. Users should decide the frequency of updating the checkpoint. Updating after each successfully processed event can have performance and cost implications as it triggers a write operation to the underlying checkpoint store. Also, checkpointing every single event is indicative of a queued messaging pattern for which a Service Bus queue might be a better option than an event hub. The idea behind Event Hubs is that you get "at least once" delivery at great scale. By making your downstream systems idempotent, it's easy to recover from failures or restarts that result in the same events being received multiple times.
-> [!NOTE]
-> If you are using Azure Blob Storage as the checkpoint store in an environment that supports a different version of Storage Blob SDK than those typically available on Azure, you'll need to use code to change the Storage service API version to the specific version supported by that environment. For example, if you are running [Event Hubs on an Azure Stack Hub version 2002](/azure-stack/user/event-hubs-overview), the highest available version for the Storage service is version 2017-11-09. In this case, you need to use code to target the Storage service API version to 2017-11-09. For an example on how to target a specific Storage API version, see these samples on GitHub:
-> - [.NET](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs.Processor/samples/).
-> - [Java](https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/eventhubs/azure-messaging-eventhubs-checkpointstore-blob/src/samples/java/com/azure/messaging/eventhubs/checkpointstore/blob/)
-> - [JavaScript](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/eventhub/eventhubs-checkpointstore-blob/samples/v1/javascript) or [TypeScript](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/eventhub/eventhubs-checkpointstore-blob/samples/v1/typescript)
-> - [Python](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventhub/azure-eventhub-checkpointstoreblob-aio/samples/)
++ ## Thread safety and processor instances
event-hubs Troubleshoot Checkpoint Store Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/troubleshoot-checkpoint-store-issues.md
+
+ Title: Troubleshoot storage checkpoint store issues in Azure Event Hubs
+description: This article describes how to troubleshoot checkpoint store issues when using Azure Blob Storage as the checkpoint store in Azure Event Hubs.
+ Last updated : 06/16/2023++
+# Troubleshoot checkpoint store issues
+This article discusses issues with using Blob Storage as a checkpoint store.
+
+## Issues with using Blob Storage as a checkpoint store
+When you use a Blob Storage account as a checkpoint store, you might see issues such as delays in processing or failures to create checkpoints when using the SDK.
++
+## Using Blob Storage checkpoint store on Azure Stack Hub
+If you're using Azure Blob Storage as the checkpoint store in an environment that supports a different version of Storage Blob SDK than the ones that are typically available on Azure, you need to use code to change the Storage service API version to the specific version supported by that environment. For example, if you're running [Event Hubs on an Azure Stack Hub version 2002](/azure-stack/user/event-hubs-overview), the highest available version for the Storage service is version 2017-11-09. In this case, you need to use code to target the Storage service API version to 2017-11-09. For an example of how to target a specific Storage API version, see these samples on GitHub:
+
+- [.NET](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs.Processor/samples/)
+- [Java](https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/eventhubs/azure-messaging-eventhubs-checkpointstore-blob/src/samples/java/com/azure/messaging/eventhubs/checkpointstore/blob/EventProcessorWithCustomStorageVersion.java)
+- [JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/eventhub/eventhubs-checkpointstore-blob/samples/v1/javascript/receiveEventsWithApiSpecificStorage.js) or [TypeScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/eventhub/eventhubs-checkpointstore-blob/samples/v1/typescript/src/receiveEventsWithApiSpecificStorage.ts)
+- Python - [Synchronous](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventhub/azure-eventhub-checkpointstoreblob/samples/receive_events_using_checkpoint_store_storage_api_version.py), [Asynchronous](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventhub/azure-eventhub-checkpointstoreblob-aio/samples/receive_events_using_checkpoint_store_storage_api_version_async.py)
+
+If you run an Event Hubs receiver that uses Blob Storage as the checkpoint store without targeting the version that Azure Stack Hub supports, you receive the following error message:
+
+```output
+The value for one of the HTTP headers is not in the correct format
+```
++
+### Sample error message in Python
+For Python, an error of `azure.core.exceptions.HttpResponseError` is passed to the error handler `on_error(partition_context, error)` of `EventHubConsumerClient.receive()`. However, the method `receive()` doesn't raise an exception. `print(error)` prints the following exception information:
+
+```output
+The value for one of the HTTP headers is not in the correct format.
+
+RequestId:f048aee8-a90c-08ba-4ce1-e69dba759297
+Time:2020-03-17T22:04:13.3559296Z
+ErrorCode:InvalidHeaderValue
+Error:None
+HeaderName:x-ms-version
+HeaderValue:2019-07-07
+```
+
+The logger logs two warnings like the following ones:
+
+```output
+WARNING:azure.eventhub.extensions.checkpointstoreblobaio._blobstoragecsaio:
+An exception occurred during list_ownership for namespace '<namespace-name>.eventhub.<region>.azurestack.corp.microsoft.com' eventhub 'python-eh-test' consumer group '$Default'.
+
+Exception is HttpResponseError('The value for one of the HTTP headers is not in the correct format.\nRequestId:f048aee8-a90c-08ba-4ce1-e69dba759297\nTime:2020-03-17T22:04:13.3559296Z\nErrorCode:InvalidHeaderValue\nError:None\nHeaderName:x-ms-version\nHeaderValue:2019-07-07')
+
+WARNING:azure.eventhub.aio._eventprocessor.event_processor:EventProcessor instance '26d84102-45b2-48a9-b7f4-da8916f68214' of eventhub 'python-eh-test' consumer group '$Default'. An error occurred while load-balancing and claiming ownership.
+
+The exception is HttpResponseError('The value for one of the HTTP headers is not in the correct format.\nRequestId:f048aee8-a90c-08ba-4ce1-e69dba759297\nTime:2020-03-17T22:04:13.3559296Z\nErrorCode:InvalidHeaderValue\nError:None\nHeaderName:x-ms-version\nHeaderValue:2019-07-07'). Retrying after 71.45254944090853 seconds
+```
++
+## Next steps
+
+See the following article to learn about partitioning and checkpointing: [Balance partition load across multiple instances of your application](event-processor-balance-partition-load.md)
expressroute Designing For Disaster Recovery With Expressroute Privatepeering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/designing-for-disaster-recovery-with-expressroute-privatepeering.md
Previously updated : 05/25/2022 Last updated : 06/15/2023
ExpressRoute is designed for high availability to provide carrier-grade private network connectivity to Microsoft resources. In other words, there's no single point of failure in the ExpressRoute path within the Microsoft network. For design considerations to maximize the availability of an ExpressRoute circuit, see [Designing for high availability with ExpressRoute][HA] and [Well-Architected Framework](/azure/architecture/framework/services/networking/expressroute/reliability).
-However, taking Murphy's popular adage--*if anything can go wrong, it will*--into consideration, in this article let us focus on solutions that go beyond failures that can be addressed using a single ExpressRoute circuit. We'll be looking into network architecture considerations for building robust backend network connectivity for disaster recovery using geo-redundant ExpressRoute circuits.
+However, taking Murphy's popular adage--*if anything can go wrong, it will*--into consideration, in this article let us focus on solutions that go beyond failures that can be addressed using a single ExpressRoute circuit. We'll look into network architecture considerations for building robust backend network connectivity for disaster recovery using geo-redundant ExpressRoute circuits.
->[!NOTE]
->The concepts described in this article equally applies when an ExpressRoute circuit is created under Virtual WAN or outside of it.
+> [!NOTE]
+> The concepts described in this article apply equally when an ExpressRoute circuit is created under Virtual WAN or outside of it.
> ## Need for redundant connectivity solution
-There are possibilities and instances where an ExpressRoute peering locations or an entire regional service (be it that of Microsoft, network service providers, customer, or other cloud service providers) gets degraded. The root cause for such regional wide service impact include natural calamity. That's why, for business continuity and mission critical applications it's important to plan for disaster recovery.
+There are possibilities and instances where an ExpressRoute peering location or an entire regional service gets degraded. The root causes for such region-wide service outages include natural disasters. Therefore, it's important to plan for disaster recovery for business continuity and mission-critical applications.
Whether you run your mission-critical applications in an Azure region, on-premises, or anywhere else, you can use another Azure region as your failover site. The following articles address disaster recovery from application and frontend access perspectives: - [Enterprise-scale disaster recovery][Enterprise DR] - [SMB disaster recovery with Azure Site Recovery][SMB DR]
-If you rely on ExpressRoute connectivity between your on-premises network and Microsoft for mission critical operations, you need to consider the following to plan for disaster recovery over ExpressRoute
+If you rely on ExpressRoute connectivity between your on-premises network and Microsoft, you need to consider the following to plan for disaster recovery over ExpressRoute:
- using geo-redundant ExpressRoute circuits - using diverse service provider network(s) for different ExpressRoute circuits
If you rely on ExpressRoute connectivity between your on-premises network and Mi
## Challenges of using multiple ExpressRoute circuits
-When you interconnect the same set of networks using more than one connection, you introduce parallel paths between the networks. Parallel paths, when not properly architected, could lead to asymmetrical routing. If you have stateful entities (for example, NAT, firewall) in the path, asymmetrical routing could block traffic flow. Typically, over the ExpressRoute private peering path you won't come across stateful entities such as NAT or Firewalls. That's why, asymmetrical routing over ExpressRoute private peering doesn't necessarily block traffic flow.
+When you interconnect the same set of networks using more than one connection, you introduce parallel paths between the networks. Parallel paths, when not properly architected, could lead to asymmetrical routing. If you have stateful entities, for example, a NAT or firewall in the path, asymmetrical routing could block traffic flow. Typically, over the ExpressRoute private peering path you don't come across stateful entities such as NAT or firewalls. Therefore, asymmetrical routing over ExpressRoute private peering doesn't necessarily block traffic flow.
However, if you load balance traffic across geo-redundant parallel paths, regardless of whether you have stateful entities or not, you would experience inconsistent network performance. These geo-redundant parallel paths can be through the same metro or different metro found on the [providers by location](expressroute-locations-providers.md#partners) page. ### Redundancy with ExpressRoute circuits in same metro
-[Many metros](expressroute-locations-providers.md#global-commercial-azure) have two ExpressRoute locations. An example would be *Amsterdam* and *Amsterdam2*. When designing redundancy, you could build two parallel paths to Azure with both locations in the same metro. You could do this with the same provider or choose to work with a different service provider to improve resiliency. Another advantage of this design is when application failover happens, end-to-end latency between your on-premises applications and Microsoft stays approximately the same. However, if there is a natural disaster such as an earthquake, connectivity for both paths may no longer be available.
+[Many metros](expressroute-locations-providers.md#global-commercial-azure) have two ExpressRoute locations. An example would be *Amsterdam* and *Amsterdam2*. When designing redundancy, you could build two parallel paths to Azure with both locations in the same metro. You can accomplish this with the same provider, or choose to work with a different service provider to improve resiliency. Another advantage of this design is that when application failover happens, end-to-end latency between your on-premises applications and Microsoft stays approximately the same. However, if there's a natural disaster such as an earthquake, connectivity for both paths may no longer be available.
### Redundancy with ExpressRoute circuits in different metros
-When using different metros for redundancy, you should select the secondary location in the same [geo-political region](expressroute-locations-providers.md#locations). To choose a location outside of the geo-political region, you'll need to use Premium SKU for both circuits in the parallel paths. The advantage of this configuration is the chances of a natural disaster causing an outage to both links are much lower but at the cost of increased latency end-to-end.
+When using different metros for redundancy, you should select the secondary location in the same [geo-political region](expressroute-locations-providers.md#locations). To choose a location outside of the geo-political region, you need to use Premium SKU for both circuits in the parallel paths. The advantage of this configuration is that the chances of a natural disaster causing an outage to both links are lower, but at the cost of increased end-to-end latency.
->[!NOTE]
->Enabling BFD on the ExpressRoute circuits will help with faster link failure detection between Microsoft Enterprise Edge (MSEE) devices and the Customer/Partner Edge routers. However, the overall failover and convergence to redundant site may take up to 180 seconds under some failure conditions and you may experience increased latency or performance degradation during this time.
+> [!NOTE]
+> Enabling BFD on the ExpressRoute circuits will help with faster link failure detection between Microsoft Enterprise Edge (MSEE) devices and the Customer/Partner Edge routers. However, the overall failover and convergence to redundant site may take up to 180 seconds under some failure conditions and you may experience increased latency or performance degradation during this time.
In this article, let's discuss how to address challenges you may face when configuring geo-redundant paths.
Let's consider the example network illustrated in the following diagram. In the
:::image type="content" source="./media/designing-for-disaster-recovery-with-expressroute-pvt/one-region.png" alt-text="Diagram of small to medium size on-premises network considerations.":::
-By default, if you advertise routes identically over all the ExpressRoute paths, Azure will load-balance on-premises bound traffic across all the ExpressRoute paths using Equal-cost multi-path (ECMP) routing.
+By default, if you advertise routes identically over all the ExpressRoute paths, Azure load-balances on-premises bound traffic across all the ExpressRoute paths using Equal-cost multi-path (ECMP) routing.
However, with geo-redundant ExpressRoute circuits, you need to account for the different network performance of each path (particularly network latency). To get more consistent network performance during normal operation, you may want to prefer the ExpressRoute circuit that offers the lowest latency.
The following screenshot illustrates configuring the weight of an ExpressRoute c
:::image type="content" source="./media/designing-for-disaster-recovery-with-expressroute-pvt/configure-weight.png" alt-text="Screenshot of configuring connection weight via Azure portal.":::
-The following diagram illustrates influencing ExpressRoute path selection using connection weight. The default connection weight is 0. In the example below, the weight of the connection for ExpressRoute 1 is configured as 100. When a VNet receives a route prefix advertised via more than one ExpressRoute circuit, the VNet will prefer the connection with the highest weight.
+The following diagram illustrates influencing ExpressRoute path selection using connection weight. The default connection weight is 0. In the following example, the weight of the connection for ExpressRoute 1 is configured as 100. When a VNet receives a route prefix advertised via more than one ExpressRoute circuit, the VNet prefers the connection with the highest weight.
:::image type="content" source="./media/designing-for-disaster-recovery-with-expressroute-pvt/connection-weight.png" alt-text="Diagram of influencing path selection using connection weight.":::
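The same weight can also be set with Azure PowerShell on an existing connection. This is a minimal sketch; the connection and resource group names are hypothetical.

```azurepowershell-interactive
# Hypothetical names; raise the weight on the connection to the preferred (lower-latency) circuit.
$conn = Get-AzVirtualNetworkGatewayConnection -Name "ERConnection1" -ResourceGroupName "MyRG"
$conn.RoutingWeight = 100   # the VNet prefers the connection with the highest weight
Set-AzVirtualNetworkGatewayConnection -VirtualNetworkGatewayConnection $conn
```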
Let's consider the example illustrated in the following diagram. In the example,
:::image type="content" source="./media/designing-for-disaster-recovery-with-expressroute-pvt/multi-region.png" alt-text="Diagram of large distributed on-premises network considerations.":::
-How we architect the disaster recovery has an impact on how cross-regional to cross location (region1/region2 to location2/location1) traffic is routed. Let's consider two different disaster architectures that routes cross region-location traffic differently.
+How we architect the disaster recovery has an effect on how cross-regional to cross-location (region1/region2 to location2/location1) traffic is routed. Let's consider two different disaster recovery architectures that route cross region-location traffic differently.
### Scenario 1
You can architect the scenario using connection weight to influence VNets to pre
### Scenario 2
-The Scenario 2 is illustrated in the following diagram. In the diagram, green lines indicate paths for traffic flow between VNet1 and on-premises networks. The blue lines indicate paths for traffic flow between VNet2 and on-premises networks. In the steady-state (solid lines in the diagram), all the traffic between VNets and on-premises locations flow via Microsoft backbone for the most part, and flows through the interconnection between on-premises locations only in the failure state (dotted lines in the diagram) of an ExpressRoute.
+Scenario 2 is illustrated in the following diagram. In the diagram, green lines indicate paths for traffic flow between VNet1 and on-premises networks. The blue lines indicate paths for traffic flow between VNet2 and on-premises networks. In the steady state (solid lines in the diagram), all the traffic between VNets and on-premises locations flows mostly over the Microsoft backbone, and flows through the interconnection between on-premises locations only in the failure state (dotted lines in the diagram) of an ExpressRoute circuit.
:::image type="content" source="./media/designing-for-disaster-recovery-with-expressroute-pvt/multi-region-arch2.png" alt-text="Diagram of traffic flow for second scenario.":::
In this article, we discussed how to design for disaster recovery of an ExpressR
[HA]: ./designing-for-high-availability-with-expressroute.md [Enterprise DR]: https://azure.microsoft.com/solutions/architecture/disaster-recovery-enterprise-scale-dr/ [SMB DR]: https://azure.microsoft.com/solutions/architecture/disaster-recovery-smb-azure-site-recovery/
-[con wgt]: ./expressroute-optimize-routing.md#solution-assign-a-high-weight-to-local-connection
-[AS Path Pre]: ./expressroute-optimize-routing.md#solution-use-as-path-prepending
expressroute Designing For High Availability With Expressroute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/designing-for-high-availability-with-expressroute.md
Title: 'Azure ExpressRoute: Designing for high availability'
description: This page provides architectural recommendations for high availability while using Azure ExpressRoute. - Previously updated : 06/28/2019 Last updated : 06/15/2023 - # Designing for high availability with ExpressRoute
-ExpressRoute is designed for high availability to provide carrier grade private network connectivity to Microsoft resources. In other words, there is no single point of failure in the ExpressRoute path within Microsoft network. To maximize the availability, the customer and the service provider segment of your ExpressRoute circuit should also be architected for high availability. In this article, first let's look into network architecture considerations for building robust network connectivity using an ExpressRoute, then let's look into the fine-tuning features that help you to improve the high availability of your ExpressRoute circuit.
+ExpressRoute is designed for high availability to provide carrier-grade private network connectivity to Microsoft resources. In other words, there's no single point of failure in the ExpressRoute path within the Microsoft network. To maximize availability, the customer and the service provider segments of your ExpressRoute circuit should also be architected for high availability. In this article, first let's look into network architecture considerations for building robust network connectivity using ExpressRoute, then let's look into the fine-tuning features that help you improve the high availability of your ExpressRoute circuit.
->[!NOTE]
->The concepts described in this article equally applies when an ExpressRoute circuit is created under Virtual WAN or outside of it.
+> [!NOTE]
+> The concepts described in this article apply equally when an ExpressRoute circuit is created under Virtual WAN or outside of it.
> ## Architecture considerations
The following figure illustrates the recommended way to connect using an Express
[![1]][1]
-For high availability, it's essential to maintain the redundancy of the ExpressRoute circuit throughout the end-to-end network. In other words, you need to maintain redundancy within your on-premises network, and shouldn't compromise redundancy within your service provider network. Maintaining redundancy at the minimum implies avoiding single point of network failures. Having redundant power and cooling for the network devices will further improve the high availability.
+For high availability, it's essential to maintain the redundancy of the ExpressRoute circuit throughout the end-to-end network. In other words, you need to maintain redundancy within your on-premises network, and shouldn't compromise redundancy within your service provider network. Maintaining redundancy at the minimum implies avoiding single points of network failure. Having redundant power and cooling for the network devices further improves high availability.
### First mile physical layer design considerations
- If you terminate both the primary and secondary connections of an ExpressRoute circuits on the same Customer Premises Equipment (CPE), you're compromising the high availability within your on-premises network. Additionally, if you configure both the primary and secondary connections via the same port of a CPE (either by terminating the two connections under different subinterfaces or by merging the two connections within the partner network), you're forcing the partner to compromise high availability on their network segment as well. This compromise is illustrated in the following figure.
+ If you terminate both the primary and secondary connections of an ExpressRoute circuit on the same Customer Premises Equipment (CPE), you compromise the high availability within your on-premises network. Additionally, if you configure both the primary and secondary connections using the same port of a CPE, you force the partner to compromise high availability on their network segment as well. This can happen by either terminating the two connections under different subinterfaces or by merging the two connections within the partner network. This compromise is illustrated in the following figure.
[![2]][2]
For geo-redundant design considerations, see [Designing for disaster recovery wi
### Active-active connections
-Microsoft network is configured to operate the primary and secondary connections of ExpressRoute circuits in active-active mode. However, through your route advertisements, you can force the redundant connections of an ExpressRoute circuit to operate in active-passive mode. Advertising more specific routes and BGP AS path prepending are the common techniques used to make one path preferred over the other.
+Microsoft network is configured to operate the primary and secondary connections of ExpressRoute circuits in active-active mode. However, through your route advertisements, you can force the redundant connections of an ExpressRoute circuit to operate in active-passive mode. Advertising more specific routes and BGP AS path prepending are the common techniques used to make one path preferred over the other.
-To improve high availability, it's recommended to operate both the connections of an ExpressRoute circuit in active-active mode. If you let the connections operate in active-active mode, Microsoft network will load balance the traffic across the connections on per-flow basis.
+To improve high availability, it's recommended to operate both the connections of an ExpressRoute circuit in active-active mode. If you let the connections operate in active-active mode, the Microsoft network load balances the traffic across the connections on a per-flow basis.
Running the primary and secondary connections of an ExpressRoute circuit in active-passive mode face the risk of both the connections failing following a failure in the active path. The common causes for failure on switching over are lack of active management of the passive connection, and passive connection advertising stale routes.
-Alternatively, running the primary and secondary connections of an ExpressRoute circuit in active-active mode, results in only about half the flows failing and getting rerouted, following an ExpressRoute connection failure. Thus, active-active mode will significantly help improve the Mean Time To Recover (MTTR).
+Alternatively, running the primary and secondary connections of an ExpressRoute circuit in active-active mode results in only about half the flows failing and getting rerouted. Therefore, an active-active connection significantly helps improve the Mean Time To Recover (MTTR).
> [!NOTE]
-> During a maintenance activity or in case of unplanned events impacting one of the connection, Microsoft will prefer to use AS path prepending to drain traffic over to the healthy connection. You will need to ensure the traffic is able to route over the healthy path when path prepend is configured from Microsoft and required route advertisements are configured appropriately to avoid any service disruption.
+> During a maintenance activity, or in case of unplanned events impacting one of the connections, Microsoft will prefer to use AS path prepending to drain traffic over to the healthy connection. You need to ensure the traffic is able to route over the healthy path when path prepending is configured from Microsoft, and that the required route advertisements are configured appropriately to avoid any service disruption.
> ### NAT for Microsoft peering
-Microsoft peering is designed for communication between public end-points. So commonly, on-premises private endpoints are Network Address Translated (NATed) with public IP on the customer or partner network before they communicate over Microsoft peering. Assuming you use both the primary and secondary connections in active-active mode, where and how you NAT has an impact on how quickly you recover following a failure in one of the ExpressRoute connections. Two different NAT options are illustrated in the following figure:
+Microsoft peering is designed for communication between public endpoints. So commonly, on-premises private endpoints are Network Address Translated (NATed) with public IP on the customer or partner network before they communicate over Microsoft peering. Assuming you use both the primary and secondary connections in an active-active setup, where and how you apply NAT affects how quickly you recover following a failure in one of the ExpressRoute connections. Two different NAT options are illustrated in the following figure:
[![3]][3] #### Option 1:
-NAT gets applied after splitting the traffic between the primary and secondary connections of the ExpressRoute circuit. To meet the stateful requirements of NAT, independent NAT pools are used for the primary and the secondary devices. The return traffic will arrive on the same edge device through which the flow egressed.
+NAT gets applied after splitting the traffic between the primary and secondary connections of the ExpressRoute circuit. To meet the stateful requirements of NAT, independent NAT pools are used for the primary and the secondary devices. The return traffic arrives on the same edge device through which the flow egressed.
-If the ExpressRoute connection fails, the ability to reach the corresponding NAT pool is then broken. That's why all broken network flows have to be re-established either by TCP or by the application layer following the corresponding window timeout. During the failure, Azure can't reach the on-premises servers using the corresponding NAT until connectivity has been restored for either the primary or secondary connections of the ExpressRoute circuit.
+If the ExpressRoute connection fails, the ability to reach the corresponding NAT pool is then broken. Therefore, all broken network flows have to get re-established either by TCP or by the application layer following the corresponding window timeout. During the failure, Azure can't reach the on-premises servers using the corresponding NAT until connectivity has been restored for either the primary or secondary connections of the ExpressRoute circuit.
#### Option 2:
-A common NAT pool is used before splitting the traffic between the primary and secondary connections of the ExpressRoute circuit. It's important to make the distinction that the common NAT pool before splitting the traffic doesn't mean it will introduce a single-point of failure as such compromising high-availability.
+A common NAT pool is used before splitting the traffic between the primary and secondary connections of the ExpressRoute circuit. It's important to note that a common NAT pool before splitting the traffic doesn't introduce a single point of failure, so it doesn't compromise high availability.
-The NAT pool is reachable even after the primary or secondary connection fail. That's why the network layer itself can reroute the packets and help recover faster following a failure.
+The NAT pool is reachable even after the primary or secondary connection fails. So the network layer itself can reroute the packets and help recover faster following a failure.
> [!NOTE] > * If you use NAT option 1 (independent NAT pools for primary and secondary ExpressRoute connections) and map a port of an IP address from one of the NAT pool to an on-premises server, the server will not be reachable via the ExpressRoute circuit when the corresponding connection fails.
ExpressRoute supports BFD over private peering. BFD reduces detection time of fa
## Next steps
-In this article, we discussed how to design for high availability of an ExpressRoute circuit connectivity. An ExpressRoute circuit peering point is pinned to a geographical location and therefore could be impacted by catastrophic failure that impacts the entire location.
+In this article, we discussed how to design for high availability of ExpressRoute circuit connectivity. An ExpressRoute circuit peering point is pinned to a geographical location and therefore could be affected by a catastrophic failure that affects the entire location.
-For design considerations to build geo-redundant network connectivity to Microsoft backbone that can withstand catastrophic failures, which impact an entire region, see [Designing for disaster recovery with ExpressRoute private peering][DR].
+For design considerations to build geo-redundant network connectivity to Microsoft backbone that can withstand catastrophic failures, which affect an entire region, see [Designing for disaster recovery with ExpressRoute private peering][DR].
<!--Image References--> [1]: ./media/designing-for-high-availability-with-expressroute/exr-reco.png "Recommended way to connect using ExpressRoute"
expressroute Expressroute Howto Coexist Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-coexist-resource-manager.md
Title: 'Configure ExpressRoute and S2S VPN coexisting connections: Azure PowerShell'
-description: Configure ExpressRoute and a Site-to-Site VPN connection that can coexist for the Resource Manager model using PowerShell.
+ Title: Configure ExpressRoute and S2S VPN coexisting connections with Azure PowerShell
+description: Configure ExpressRoute and a site-to-site VPN connection that can coexist for the Resource Manager model using Azure PowerShell.
- Previously updated : 09/16/2021 Last updated : 06/15/2023 -
-# Configure ExpressRoute and Site-to-Site coexisting connections using PowerShell
+# Configure ExpressRoute and site-to-site coexisting connections using PowerShell
> [!div class="op_single_selector"] > * [PowerShell - Resource Manager](expressroute-howto-coexist-resource-manager.md) > * [PowerShell - Classic](expressroute-howto-coexist-classic.md) >
->
-This article helps you configure ExpressRoute and Site-to-Site VPN connections that coexist. Having the ability to configure Site-to-Site VPN and ExpressRoute has several advantages. You can configure Site-to-Site VPN as a secure failover path for ExpressRoute, or use Site-to-Site VPNs to connect to sites that are not connected through ExpressRoute. We will cover the steps to configure both scenarios in this article. This article applies to the Resource Manager deployment model.
+This article helps you configure ExpressRoute and site-to-site VPN connections that coexist. Having the ability to configure site-to-site VPN and ExpressRoute has several advantages. You can configure site-to-site VPN as a secure failover path for ExpressRoute, or use site-to-site VPNs to connect to sites that aren't connected through ExpressRoute. We cover the steps to configure both scenarios in this article. This article applies to the Resource Manager deployment model.
-Configuring Site-to-Site VPN and ExpressRoute coexisting connections has several advantages:
+Configuring site-to-site VPN and ExpressRoute coexisting connections has several advantages:
-* You can configure a Site-to-Site VPN as a secure failover path for ExpressRoute.
-* Alternatively, you can use Site-to-Site VPNs to connect to sites that are not connected through ExpressRoute.
+* You can configure a site-to-site VPN as a secure failover path for ExpressRoute.
+* Alternatively, you can use site-to-site VPNs to connect to sites that aren't connected through ExpressRoute.
-The steps to configure both scenarios are covered in this article. This article applies to the Resource Manager deployment model and uses PowerShell. You can also configure these scenarios using the Azure portal, although documentation is not yet available. You can configure either gateway first. Typically, you will incur no downtime when adding a new gateway or gateway connection.
+The steps to configure both scenarios are covered in this article. This article applies to the Resource Manager deployment model and uses PowerShell. You can also configure these scenarios using the Azure portal, although documentation isn't yet available. You can configure either gateway first. Typically, you don't experience any downtime when adding a new gateway or gateway connection.
->[!NOTE]
->If you want to create a Site-to-Site VPN over an ExpressRoute circuit, please see [this article](site-to-site-vpn-over-microsoft-peering.md).
+> [!NOTE]
+> If you want to create a site-to-site VPN over an ExpressRoute circuit, see [**site-to-site VPN over Microsoft peering**](site-to-site-vpn-over-microsoft-peering.md).
> ## Limits and limitations+ * **Only route-based VPN gateway is supported.** You must use a route-based [VPN gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md). You also can use a route-based VPN gateway with a VPN connection configured for 'policy-based traffic selectors' as described in [Connect to multiple policy-based VPN devices](../vpn-gateway/vpn-gateway-connect-multiple-policybased-rm-ps.md).
-* **ExpressRoute-VPN Gateway coexist configurations are not supported on the Basic SKU**.
-* **If you want to use transit routing between ExpressRoute and VPN, the ASN of Azure VPN Gateway must be set to 65515 and Azure Route Server should be used.** Azure VPN Gateway supports the BGP routing protocol. For ExpressRoute and Azure VPN to work together, you must keep the Autonomous System Number of your Azure VPN gateway at its default value, 65515. If you previously selected an ASN other than 65515 and you change the setting to 65515, you must reset the VPN gateway for the setting to take effect.
-* **The gateway subnet must be /27 or a shorter prefix**, (such as /26, /25), or you will receive an error message when you add the ExpressRoute virtual network gateway.
-* **Coexistence in a dual-stack vnet is not supported.** If you are using ExpressRoute IPv6 support and a dual-stack ExpressRoute gateway, coexistence with VPN Gateway will not be possible.
+* ExpressRoute-VPN Gateway coexist configurations are **not supported on the Basic SKU**.
+* If you want to use transit routing between ExpressRoute and VPN, **the ASN of Azure VPN Gateway must be set to 65515, and Azure Route Server should be used.** Azure VPN Gateway supports the BGP routing protocol. For ExpressRoute and Azure VPN to work together, you must keep the Autonomous System Number of your Azure VPN gateway at its default value, 65515. If you previously selected an ASN other than 65515 and you change the setting to 65515, you must reset the VPN gateway for the setting to take effect (see the sketch after this list).
+* **The gateway subnet must be /27 or a shorter prefix**, such as /26, /25, or you receive an error message when you add the ExpressRoute virtual network gateway.
+* **Coexistence in a dual-stack virtual network is not supported.** If you're using ExpressRoute IPv6 support and a dual-stack ExpressRoute gateway, coexistence with VPN Gateway isn't possible.
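As a minimal sketch of the ASN requirement above (the gateway and resource group names are hypothetical):

```azurepowershell-interactive
# Hypothetical names; verify the gateway ASN and reset it if it was changed earlier.
$gw = Get-AzVirtualNetworkGateway -Name "VPNGateway" -ResourceGroupName "ErVpnCoex"
$gw.BgpSettings.Asn                                      # should report 65515 for coexistence
Set-AzVirtualNetworkGateway -VirtualNetworkGateway $gw -Asn 65515
Reset-AzVirtualNetworkGateway -VirtualNetworkGateway $gw # required for the ASN change to take effect
```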
## Configuration designs
-### Configure a Site-to-Site VPN as a failover path for ExpressRoute
-You can configure a Site-to-Site VPN connection as a backup for ExpressRoute. This connection applies only to virtual networks linked to the Azure private peering path. There is no VPN-based failover solution for services accessible through Azure Microsoft peering. The ExpressRoute circuit is always the primary link. Data flows through the Site-to-Site VPN path only if the ExpressRoute circuit fails. To avoid asymmetrical routing, your local network configuration should also prefer the ExpressRoute circuit over the Site-to-Site VPN. You can prefer the ExpressRoute path by setting higher local preference for the routes received the ExpressRoute.
->[!NOTE]
-> If you have ExpressRoute Microsoft Peering enabled, you can receive the public IP address of your Azure VPN gateway on the ExpressRoute connection. To set up your site-to-site VPN connection as a backup, you must configure your on-premises network so that the VPN connection is routed to the Internet.
->
+### Configure a site-to-site VPN as a failover path for ExpressRoute
+
+You can configure a site-to-site VPN connection as a backup for your ExpressRoute connection. This connection applies only to virtual networks linked to the Azure private peering path. There's no VPN-based failover solution for services accessible through Azure Microsoft peering. The ExpressRoute circuit is always the primary link. Data flows through the site-to-site VPN path only if the ExpressRoute circuit fails. To avoid asymmetrical routing, your local network configuration should also prefer the ExpressRoute circuit over the site-to-site VPN. You can prefer the ExpressRoute path by setting a higher local preference for the routes received over ExpressRoute.
> [!NOTE]
-> While ExpressRoute circuit is preferred over Site-to-Site VPN when both routes are the same, Azure will use the longest prefix match to choose the route towards the packet's destination.
+> * If you have ExpressRoute Microsoft peering enabled, you can receive the public IP address of your Azure VPN gateway on the ExpressRoute connection. To set up your site-to-site VPN connection as a backup, you must configure your on-premises network so that the VPN connection is routed to the Internet.
+>
+> * While the ExpressRoute circuit path is preferred over the site-to-site VPN when both routes are the same, Azure uses the longest prefix match to choose the route towards the packet's destination.
>
-![Diagram that shows a Site-to-Site VPN connection as a backup for ExpressRoute.](media/expressroute-howto-coexist-resource-manager/scenario1.jpg)
-
-### Configure a Site-to-Site VPN to connect to sites not connected through ExpressRoute
-You can configure your network where some sites connect directly to Azure over Site-to-Site VPN, and some sites connect through ExpressRoute.
+![Diagram that shows a site-to-site VPN connection as a backup for ExpressRoute.](media/expressroute-howto-coexist-resource-manager/scenario1.jpg)
-![Coexist](media/expressroute-howto-coexist-resource-manager/scenario2.jpg)
+### Configure a site-to-site VPN to connect to sites not connected through ExpressRoute
+You can configure your network where some sites connect directly to Azure over site-to-site VPN, and some sites connect through ExpressRoute.
+![Coexist](media/expressroute-howto-coexist-resource-manager/scenario2.jpg)
## Selecting the steps to use+ There are two different sets of procedures to choose from. The configuration procedure that you select depends on whether you have an existing virtual network that you want to connect to, or you want to create a new virtual network. * I don't have a VNet and need to create one.
- If you donΓÇÖt already have a virtual network, this procedure walks you through creating a new virtual network using Resource Manager deployment model and creating new ExpressRoute and Site-to-Site VPN connections. To configure a virtual network, follow the steps in [To create a new virtual network and coexisting connections](#new).
+ If you don't already have a virtual network, this procedure walks you through creating a new virtual network using the Resource Manager deployment model and creating new ExpressRoute and site-to-site VPN connections.
+ * I already have a Resource Manager deployment model VNet.
- You may already have a virtual network in place with an existing Site-to-Site VPN connection or ExpressRoute connection. In this scenario if the gateway subnet prefix is /28 or longer (/29, /30, etc.), you have to delete the existing gateway. The [To configure coexisting connections for an already existing VNet](#add) section walks you through deleting the gateway, and then creating new ExpressRoute and Site-to-Site VPN connections.
+ You may already have a virtual network in place with an existing site-to-site VPN connection or ExpressRoute connection. In this scenario, if the gateway subnet prefix is /28 or longer (/29, /30, and so on), you have to delete the existing gateway. The **Existing virtual network with a gateway** tab walks you through deleting the gateway, and then creating new ExpressRoute and site-to-site VPN connections.
- If you delete and recreate your gateway, you will have downtime for your cross-premises connections. However, your VMs and services will still be able to communicate out through the load balancer while you configure your gateway if they are configured to do so.
+ If you delete and recreate your gateway, you experience downtime for your cross-premises connections. However, your VMs and services can connect through the internet while you configure your gateway if they're configured to do so.
## Before you begin
There are two different sets of procedures to choose from. The configuration pro
[!INCLUDE [working with cloud shell](../../includes/expressroute-cloudshell-powershell-about.md)]
-## <a name="new"></a>To create a new virtual network and coexisting connections
-This procedure walks you through creating a VNet and Site-to-Site and ExpressRoute connections that will coexist. The cmdlets that you use for this configuration may be slightly different than what you might be familiar with. Be sure to use the cmdlets specified in these instructions.
+#### [New virtual network and coexisting connections](#tab/new-virtual-network)
+
+This procedure walks you through creating a VNet and site-to-site and ExpressRoute connections that coexist. The cmdlets that you use for this configuration may be slightly different than what you might be familiar with. Be sure to use the cmdlets specified in these instructions.
1. Sign in and select your subscription. [!INCLUDE [sign in](../../includes/expressroute-cloud-shell-connect.md)]
-2. Set variables.
+
+2. Define variables and create a resource group.
```azurepowershell-interactive
$location = "Central US"
$resgrp = New-AzResourceGroup -Name "ErVpnCoex" -Location $location
$VNetASN = 65515
```
-3. Create a virtual network including Gateway Subnet. For more information about creating a virtual network, see [Create a virtual network](../virtual-network/manage-virtual-network.md#create-a-virtual-network). For more information about creating subnets, see [Create a subnet](../virtual-network/virtual-network-manage-subnet.md#add-a-subnet)
+3. Create a virtual network including the `GatewaySubnet`. For more information about creating a virtual network, see [Create a virtual network](../virtual-network/manage-virtual-network.md#create-a-virtual-network). For more information about creating subnets, see [Create a subnet](../virtual-network/virtual-network-manage-subnet.md#add-a-subnet).
> [!IMPORTANT]
- > The Gateway Subnet must be /27 or a shorter prefix (such as /26 or /25).
- >
+ > The **GatewaySubnet** must be a /27 or a shorter prefix, such as /26 or /25.
>
- Create a new VNet.
+ Create a new virtual network.
```azurepowershell-interactive $vnet = New-AzVirtualNetwork -Name "CoexVnet" -ResourceGroupName $resgrp.ResourceGroupName -Location $location -AddressPrefix "10.200.0.0/16" ```
- Add subnets.
+ Add two subnets named **App** and **GatewaySubnet**.
```azurepowershell-interactive
Add-AzVirtualNetworkSubnetConfig -Name "App" -VirtualNetwork $vnet -AddressPrefix "10.200.1.0/24"
Add-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet -AddressPrefix "10.200.255.0/24"
```
- Save the VNet configuration.
+ Save the virtual network configuration.
```azurepowershell-interactive $vnet = Set-AzVirtualNetwork -VirtualNetwork $vnet ```
-4. <a name="vpngw"></a>Next, create your Site-to-Site VPN gateway. For more information about the VPN gateway configuration, see [Configure a VNet with a Site-to-Site connection](../vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md). The GatewaySku is only supported for *VpnGw1*, *VpnGw2*, *VpnGw3*, *Standard*, and *HighPerformance* VPN gateways. ExpressRoute-VPN Gateway coexist configurations are not supported on the Basic SKU. The VpnType must be *RouteBased*.
+4. <a name="vpngw"></a>Next, create your site-to-site VPN gateway. For more information about the VPN gateway configuration, see [Configure a VNet with a site-to-site connection](../vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md). Only the *VpnGw1*, *VpnGw2*, *VpnGw3*, *Standard*, and *HighPerformance* gateway SKUs are supported. ExpressRoute-VPN Gateway coexist configurations aren't supported on the Basic SKU. The VpnType must be **RouteBased**.
```azurepowershell-interactive $gwSubnet = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet
This procedure walks you through creating a VNet and Site-to-Site and ExpressRou
New-AzVirtualNetworkGateway -Name "VPNGateway" -ResourceGroupName $resgrp.ResourceGroupName -Location $location -IpConfigurations $gwConfig -GatewayType "Vpn" -VpnType "RouteBased" -GatewaySku "VpnGw1" ```
- Azure VPN gateway supports BGP routing protocol. You can specify ASN (AS Number) for that Virtual Network by adding the -Asn switch in the following command. Not specifying that parameter will default to AS number 65515.
+ The Azure VPN gateway supports the BGP routing protocol. You can specify the ASN (AS number) for the virtual network by adding the `-Asn` parameter to the following command. Not specifying the `-Asn` parameter defaults the AS number to **65515**.
```azurepowershell-interactive
- $azureVpn = New-AzVirtualNetworkGateway -Name "VPNGateway" -ResourceGroupName $resgrp.ResourceGroupName -Location $location -IpConfigurations $gwConfig -GatewayType "Vpn" -VpnType "RouteBased" -GatewaySku "VpnGw1" -Asn $VNetASN
+ $azureVpn = New-AzVirtualNetworkGateway -Name "VPNGateway" -ResourceGroupName $resgrp.ResourceGroupName -Location $location -IpConfigurations $gwConfig -GatewayType "Vpn" -VpnType "RouteBased" -GatewaySku "VpnGw1"
``` > [!NOTE]
- > For coexisting gateways, you must use the default ASN of 65515. See [limits and limitations](#limits-and-limitations).
+ > For coexisting gateways, you must use the default ASN of 65515. For more information, see [limits and limitations](#limits-and-limitations).
>
- You can find the BGP peering IP and the AS number that Azure uses for the VPN gateway in $azureVpn.BgpSettings.BgpPeeringAddress and $azureVpn.BgpSettings.Asn. For more information, see [Configure BGP](../vpn-gateway/vpn-gateway-bgp-resource-manager-ps.md) for Azure VPN gateway.
+ You can find the BGP peering IP and the AS number that Azure uses for the VPN gateway by running `$azureVpn.BgpSettings.BgpPeeringAddress` and `$azureVpn.BgpSettings.Asn`. For more information, see [Configure BGP](../vpn-gateway/vpn-gateway-bgp-resource-manager-ps.md) for Azure VPN gateway.
+ 5. Create a local site VPN gateway entity. This command doesn't configure your on-premises VPN gateway. Rather, it allows you to provide the local gateway settings, such as the public IP and the on-premises address space, so that the Azure VPN gateway can connect to it. If your local VPN device only supports static routing, you can configure the static routes in the following way:
This procedure walks you through creating a VNet and Site-to-Site and ExpressRou
$localVpn = New-AzLocalNetworkGateway -Name "LocalVPNGateway" -ResourceGroupName $resgrp.ResourceGroupName -Location $location -GatewayIpAddress *<Public IP>* -AddressPrefix $MyLocalNetworkAddress ```
- If your local VPN device supports the BGP and you want to enable dynamic routing, you need to know the BGP peering IP and the AS number that your local VPN device uses.
+ If your local VPN device supports BGP and you want to enable dynamic routing, you need to know the BGP peering IP and the AS number of your local VPN device.
```azurepowershell-interactive $localVPNPublicIP = "<Public IP>"
This procedure walks you through creating a VNet and Site-to-Site and ExpressRou
``` 6. Configure your local VPN device to connect to the new Azure VPN gateway. For more information about VPN device configuration, see [VPN Device Configuration](../vpn-gateway/vpn-gateway-about-vpn-devices.md).
-7. Link the Site-to-Site VPN gateway on Azure to the local gateway.
+7. Link the site-to-site VPN gateway on Azure to the local gateway.
```azurepowershell-interactive $azureVpn = Get-AzVirtualNetworkGateway -Name "VPNGateway" -ResourceGroupName $resgrp.ResourceGroupName
This procedure walks you through creating a VNet and Site-to-Site and ExpressRou
```
-8. If you are connecting to an existing ExpressRoute circuit, skip steps 8 & 9 and, jump to step 10. Configure ExpressRoute circuits. For more information about configuring ExpressRoute circuit, see [create an ExpressRoute circuit](expressroute-howto-circuit-arm.md).
+8. If you're connecting to an existing ExpressRoute circuit, skip steps 8 and 9, and jump to step 10. Configure ExpressRoute circuits. For more information about configuring an ExpressRoute circuit, see [create an ExpressRoute circuit](expressroute-howto-circuit-arm.md).
9. Configure Azure private peering over the ExpressRoute circuit. For more information about configuring Azure private peering over the ExpressRoute circuit, see [configure peering](expressroute-howto-routing-arm.md)
This procedure walks you through creating a VNet and Site-to-Site and ExpressRou
New-AzVirtualNetworkGatewayConnection -Name "ERConnection" -ResourceGroupName $resgrp.ResourceGroupName -Location $location -VirtualNetworkGateway1 $gw -PeerId $ckt.Id -ConnectionType ExpressRoute ```
-## <a name="add"></a>To configure coexisting connections for an already existing VNet
-If you have a virtual network that has only one virtual network gateway (let's say, Site-to-Site VPN gateway) and you want to add another gateway of a different type (let's say, ExpressRoute gateway), check the gateway subnet size. If the gateway subnet is /27 or larger, you can skip the steps below and follow the steps in the previous section to add either a Site-to-Site VPN gateway or an ExpressRoute gateway. If the gateway subnet is /28 or /29, you have to first delete the virtual network gateway and increase the gateway subnet size. The steps in this section show you how to do that.
+#### [Existing virtual network with a gateway](#tab/existing-virtual-network)
-The cmdlets that you use for this configuration may be slightly different than what you might be familiar with. Be sure to use the cmdlets specified in these instructions.
+If you have a virtual network that has only one virtual network gateway and you want to add another gateway of a different type, first check the gateway subnet size. If the gateway subnet is /27 or larger, you can skip the steps in this section and follow the steps in the previous section to add either a site-to-site VPN gateway or an ExpressRoute gateway. If the gateway subnet is /28 or /29, you have to first delete the virtual network gateway and increase the gateway subnet size. The steps in this section show you how to do that.
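A quick way to check the current gateway subnet prefix before choosing a path is shown in the following sketch; the virtual network and resource group names are hypothetical.

```azurepowershell-interactive
# Hypothetical names; a /27 or shorter prefix means you can skip the delete-and-recreate steps.
$vnet = Get-AzVirtualNetwork -Name "CoexVnet" -ResourceGroupName "ErVpnCoex"
(Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet).AddressPrefix
```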
-1. Delete the existing ExpressRoute or Site-to-Site VPN gateway.
+1. Delete the existing ExpressRoute or site-to-site VPN gateway.
```azurepowershell-interactive Remove-AzVirtualNetworkGateway -Name <yourgatewayname> -ResourceGroupName <yourresourcegroup>
The cmdlets that you use for this configuration may be slightly different than w
> [!NOTE] > If you don't have enough IP addresses left in your virtual network to increase the gateway subnet size, you need to add more IP address space. >
- >
```azurepowershell-interactive $vnet = Get-AzVirtualNetwork -Name <yourvnetname> -ResourceGroupName <yourresourcegroup>
The cmdlets that you use for this configuration may be slightly different than w
New-AzVirtualNetworkGatewayConnection -Name "ERConnection" -ResourceGroupName $resgrp.ResourceGroupName -Location $location -VirtualNetworkGateway1 $gw -PeerId $ckt.Id -ConnectionType ExpressRoute ``` ++ ## To add point-to-site configuration to the VPN gateway
-You can follow the steps below to add Point-to-Site configuration to your VPN gateway in a coexistence setup. To upload the VPN root certificate, you must either install PowerShell locally to your computer, or use the Azure portal.
+You can follow these steps to add a point-to-site configuration to your VPN gateway in a coexistence setup. To upload the VPN root certificate, you must either install PowerShell locally on your computer or use the Azure portal.
1. Add VPN Client address pool.
You can follow the steps below to add Point-to-Site configuration to your VPN ga
$azureVpn = Get-AzVirtualNetworkGateway -Name "VPNGateway" -ResourceGroupName $resgrp.ResourceGroupName
Set-AzVirtualNetworkGateway -VirtualNetworkGateway $azureVpn -VpnClientAddressPool "10.251.251.0/24"
```
-2. Upload the VPN [root certificate](../vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md#Certificates) to Azure for your VPN gateway. In this example, it's assumed that the root certificate is stored in the local machine where the following PowerShell cmdlets are run and that you are running PowerShell locally. You can also upload the certificate using the Azure portal.
+2. Upload the VPN [root certificate](../vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md#Certificates) to Azure for your VPN gateway. In this example, we assume the root certificate is stored on the local machine where the following PowerShell cmdlets run and that you're running PowerShell locally. You can also upload the certificate using the Azure portal.
```powershell $p2sCertFullName = "RootErVpnCoexP2S.cer"
You can follow the steps below to add Point-to-Site configuration to your VPN ga
For more information on Point-to-Site VPN, see [Configure a Point-to-Site connection](../vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md). ## To enable transit routing between ExpressRoute and Azure VPN
-If you want to enable connectivity between one of your local network that is connected to ExpressRoute and another of your local network that is connected to a site-to-site VPN connection, you'll need to set up [Azure Route Server](../route-server/expressroute-vpn-support.md).
+If you want to enable connectivity between one of your local networks that's connected to ExpressRoute and another of your local networks that's connected to a site-to-site VPN connection, you need to set up [Azure Route Server](../route-server/expressroute-vpn-support.md).
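As a rough sketch, once a Route Server exists, transit between the ExpressRoute and VPN paths also requires branch-to-branch traffic to be enabled on it; the Route Server and resource group names here are hypothetical.

```azurepowershell-interactive
# Hypothetical names; enables ExpressRoute-to-VPN transit through an existing Route Server.
Update-AzRouteServer -RouteServerName "myRouteServer" -ResourceGroupName "myRG" -AllowBranchToBranchTraffic
```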
## Next steps+ For more information about ExpressRoute, see the [ExpressRoute FAQ](expressroute-faqs.md).
expressroute Expressroute Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-prerequisites.md
Title: 'Azure ExpressRoute: Prerequisites'
description: This page provides a list of requirements to be met before you can order an Azure ExpressRoute circuit. It includes a checklist. - Previously updated : 09/18/2019 Last updated : 06/15/2023 -- + # ExpressRoute prerequisites & checklist+ To connect to Microsoft cloud services using ExpressRoute, you need to verify that the following requirements listed in the following sections have been met. [!INCLUDE [expressroute-office365-include](../../includes/expressroute-office365-include.md)] ## Azure account+ * A valid and active Microsoft Azure account. This account is required to set up the ExpressRoute circuit. ExpressRoute circuits are resources within Azure subscriptions. An Azure subscription is a requirement even if connectivity is limited to non-Azure Microsoft cloud services, such as Microsoft 365. * An active Microsoft 365 subscription (if using Microsoft 365 services). For more information, see the Microsoft 365 specific requirements section of this article. ## Connectivity provider * You can work with an [ExpressRoute connectivity partner](expressroute-locations.md#partners) to connect to the Microsoft cloud. You can set up a connection between your on-premises network and Microsoft in [three ways](expressroute-introduction.md).
-* If your provider is not an ExpressRoute connectivity partner, you can still connect to the Microsoft cloud through a [cloud exchange provider](expressroute-locations.md#connectivity-through-exchange-providers).
+* If your provider isn't an ExpressRoute connectivity partner, you can still connect to the Microsoft cloud through a [cloud exchange provider](expressroute-locations.md#connectivity-through-exchange-providers).
## Network requirements+ * **Redundancy at each peering location**: Microsoft requires redundant BGP sessions to be set up between Microsoft's routers and the peering routers on each ExpressRoute circuit (even when you have just [one physical connection to a cloud exchange](expressroute-faqs.md#onep2plink)). * **Redundancy for Disaster Recovery**: Microsoft strongly recommends you set up at least two ExpressRoute circuits in different peering locations to avoid a single point of failure. * **Routing**: depending on how you connect to the Microsoft Cloud, you or your provider needs to set up and manage the BGP sessions for [routing domains](expressroute-circuit-peerings.md). Some Ethernet connectivity providers or cloud exchange providers may offer BGP management as a value-add service.
-* **NAT**: Microsoft only accepts public IP addresses through Microsoft peering. If you are using private IP addresses in your on-premises network, you or your provider needs to translate the private IP addresses to the public IP addresses [using the NAT](expressroute-nat.md).
+* **NAT**: Microsoft only accepts public IP addresses through Microsoft peering. If you're using private IP addresses in your on-premises network, you or your provider needs to translate the private IP addresses to the public IP addresses [using the NAT](expressroute-nat.md).
* **QoS**: Skype for Business has various services (for example; voice, video, text) that require differentiated QoS treatment. You and your provider should follow the [QoS requirements](expressroute-qos.md). * **Network Security**: consider [network security](/azure/cloud-adoption-framework/reference/networking-vdc) when connecting to the Microsoft Cloud via ExpressRoute. ## Microsoft 365+ If you plan to enable Microsoft 365 on ExpressRoute, review the following documents for more information about Microsoft 365 requirements. * [Azure ExpressRoute for Microsoft 365](/microsoft-365/enterprise/azure-expressroute)
If you plan to enable Microsoft 365 on ExpressRoute, review the following docume
* ExpressRoute on Office 365 advanced training videos ## Next steps+ * For more information about ExpressRoute, see the [ExpressRoute FAQ](expressroute-faqs.md). * Find an ExpressRoute connectivity provider. See [ExpressRoute partners and peering locations](expressroute-locations.md).
+* Review [Azure Well-architected Framework for ExpressRoute](/azure/well-architected/services/networking/azure-expressroute) to learn about best practices for designing and implementing ExpressRoute.
* Refer to requirements for [Routing](expressroute-routing.md), [NAT](expressroute-nat.md), and [QoS](expressroute-qos.md). * Configure your ExpressRoute connection. * [Create an ExpressRoute circuit](expressroute-howto-circuit-arm.md)
expressroute Expressroute Troubleshooting Expressroute Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-troubleshooting-expressroute-overview.md
Title: 'Verify Azure ExpressRoute connectivity - troubleshooting guide'
description: This article provides instructions on troubleshooting and validating end-to-end connectivity of an ExpressRoute circuit. - Previously updated : 01/07/2022 Last updated : 06/15/2023 - + # Verify ExpressRoute connectivity
-This article helps you verify and troubleshoot Azure ExpressRoute connectivity. ExpressRoute extends an on-premises network into the Microsoft Cloud over a private connection that's commonly facilitated by a connectivity provider. ExpressRoute connectivity traditionally involves three distinct network zones:
+This article helps you verify and troubleshoot Azure ExpressRoute connectivity. ExpressRoute extends an on-premises network into the Microsoft Cloud over a private connection commonly facilitated by a connectivity provider. ExpressRoute connectivity traditionally involves three distinct network zones:
- Customer network - Provider network - Microsoft datacenter > [!NOTE]
-> In the ExpressRoute direct connectivity model (offered at a bandwidth of 10/100 Gbps), customers can directly connect to the port for Microsoft Enterprise Edge (MSEE) routers. The direct connectivity model includes only customer and Microsoft network zones.
+> In the ExpressRoute Direct connectivity model, you can directly connect to the port for Microsoft Enterprise Edge (MSEE) routers. The direct connectivity model includes only your network zone and the Microsoft network zone.
This article helps you identify if and where a connectivity issue exists. You can then seek support from the appropriate team to resolve the issue.
In the preceding diagram, the numbers indicate key network points:
At times, this article references these network points by their associated number.
-Depending on the ExpressRoute connectivity model, network points 3 and 4 might be switches (layer 2 devices) or routers (layer 3 devices). The ExpressRoute connectivity models are cloud exchange co-location, point-to-point Ethernet connection, or any-to-any (IPVPN).
+Depending on the ExpressRoute connectivity model, network points 3 and 4 might be switches (layer 2 devices) or routers (layer 3 devices). The ExpressRoute connectivity models are cloud exchange colocation, point-to-point Ethernet connection, or any-to-any (IPVPN).
In the direct connectivity model, there are no network points 3 and 4. Instead, CEs (2) are directly connected to MSEEs via dark fiber.
-If the cloud exchange co-location, point-to-point Ethernet, or direct connectivity model is used, CEs (2) establish Border Gateway Protocol (BGP) peering with MSEEs (5).
+If the cloud exchange colocation, point-to-point Ethernet, or direct connectivity model is used, CEs (2) establish Border Gateway Protocol (BGP) peering with MSEEs (5).
If the any-to-any (IPVPN) connectivity model is used, PE-MSEEs (4) establish BGP peering with MSEEs (5). PE-MSEEs propagate the routes received from Microsoft back to the customer network via the IPVPN service provider network.
To list all the ExpressRoute circuits in a resource group, use the following com
Get-AzExpressRouteCircuit -ResourceGroupName "Test-ER-RG" ```
->[!TIP]
->If you're looking for the name of a resource group, you can get it by using the `Get-AzResourceGroup` command to list all the resource groups in your subscription.
+> [!TIP]
+> If you're looking for the name of a resource group, you can get it by using the `Get-AzResourceGroup` command to list all the resource groups in your subscription.
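For example, a one-line sketch that lists the resource groups in the current subscription:

```azurepowershell-interactive
# Lists every resource group in the current subscription.
Get-AzResourceGroup | Select-Object ResourceGroupName, Location
```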
To select a particular ExpressRoute circuit in a resource group, use the following command:
ServiceProviderProvisioningState : Provisioned
``` > [!NOTE]
-> After you configure an ExpressRoute circuit, if **Circuit status** is stuck in a **Not enabled** status, contact [Microsoft Support][Support]. If **Provider status** is stuck in **Not provisioned** status, contact your service provider.
+> After you configure an ExpressRoute circuit, if the **Circuit status** is stuck in a **Not enabled** status, contact [Microsoft Support][Support]. If the **Provider status** is stuck in a **Not provisioned** status, contact your service provider.
## Validate peering configuration
-After the service provider has completed provisioning the ExpressRoute circuit, multiple routing configurations based on external BGP (eBGP) can be created over the ExpressRoute circuit between CEs/MSEE-PEs (2/4) and MSEEs (5). Each ExpressRoute circuit can have one or both of the following:
+After the service provider has completed provisioning the ExpressRoute circuit, multiple routing configurations based on external BGP (eBGP) can be created over the ExpressRoute circuit between CEs/MSEE-PEs (2/4) and MSEEs (5). Each ExpressRoute circuit can have one or both of the following peering configurations:
- Azure private peering: traffic to private virtual networks in Azure - Microsoft peering: traffic to public endpoints of platform as a service (PaaS) and software as a service (SaaS)
$ckt = Get-AzExpressRouteCircuit -ResourceGroupName "Test-ER-RG" -Name "Test-ER-
Get-AzExpressRouteCircuitPeeringConfig -Name "MicrosoftPeering" -ExpressRouteCircuit $ckt ```
-If a peering isn't configured, you'll get an error message. Here's an example response when the stated peering (Azure public peering in this case) isn't configured within the circuit:
+If a peering isn't configured, you get an error message. Here's an example response when the stated peering (Azure public peering in this case) isn't configured within the circuit:
```azurepowershell Get-AzExpressRouteCircuitPeeringConfig : Sequence contains no matching element
StatusCode: 400
## Test private peering connectivity
-Test your private peering connectivity by counting packets arriving at and leaving the Microsoft edge of your ExpressRoute circuit on the MSEE devices. This diagnostic tool works by applying an ACL to the MSEE to count the number of packets that hit specific ACL rules. Using this tool will allow you to confirm connectivity by answering questions such as:
+Test your private peering connectivity by counting packets arriving at and leaving the Microsoft edge of your ExpressRoute circuit on the MSEE devices. This diagnostic tool works by applying an ACL to the MSEE to count the number of packets that hit specific ACL rules. Using this tool allows you to confirm connectivity by answering questions such as:
* Are my packets getting to Azure? * Are they getting back to on-premises?
Test your private peering connectivity by counting packets arriving at and leavi
### Interpret results
-When your results are ready, you'll have two sets of them for the primary and secondary MSEE devices. Review the number of matches in and out, and use the following scenarios to interpret the results:
+When your results are ready, you have two sets of them for the primary and secondary MSEE devices. Review the number of matches in and out, and use the following scenarios to interpret the results:
* **You see packet matches sent and received on both MSEEs**: This result indicates healthy traffic inbound to and outbound from the MSEEs on your circuit. If loss is occurring either on-premises or in Azure, it's happening downstream from the MSEEs. * **If you're testing PsPing from on-premises to Azure, received results show matches, but sent results show no matches**: This result indicates that traffic is coming in to Azure but isn't returning to on-premises. Check for return-path routing issues. For example, are you advertising the appropriate prefixes to Azure? Is a user-defined route (UDR) overriding prefixes? * **If you're testing PsPing from Azure to on-premises, sent results show matches, but received results show no matches**: This result indicates that traffic is coming in to on-premises but isn't returning to Azure. Work with your provider to find out why traffic isn't being routed to Azure via your ExpressRoute circuit. * **One MSEE shows no matches, but the other shows good matches**: This result indicates that one MSEE isn't receiving or passing any traffic. It might be offline (for example, BGP/ARP is down).
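For the on-premises-to-Azure direction, the test traffic can be generated with PsPing. This is a hedged example that assumes the Sysinternals PsPing tool is available on an on-premises machine; the target IP and port are hypothetical.

```powershell
# TCP connect test from on-premises to a hypothetical Azure VM private IP on port 3389.
.\psping.exe -n 100 10.0.0.4:3389
```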
-Your test results for each MSEE device will look like the following example:
+Your test results for each MSEE device look like the following example:
``` src 10.0.0.0 dst 20.0.0.0 dstport 3389 (received): 120 matches
This test result has the following properties:
## Verify availability of the virtual network gateway
-The ExpressRoute virtual network gateway facilitates the management and control plane connectivity to private link services and private IPs deployed to an Azure virtual network. The virtual network gateway infrastructure is managed by Microsoft and sometimes undergoes maintenance.
+The ExpressRoute virtual network gateway facilitates the management and control plane connectivity to private link services and private IPs deployed to an Azure virtual network. Microsoft manages the virtual network gateway infrastructure, which sometimes undergoes maintenance.
-During a maintenance period, performance of the virtual network gateway might be reduced. To troubleshoot connectivity issues to the virtual network and reactively detect if recent maintenance events reduced capacity for the virtual network gateway:
+During a maintenance period, performance of the virtual network gateway might be reduced. To troubleshoot connectivity issues to the virtual network and see if a recent maintenance event reduced the gateway's capacity, follow these steps:
1. Select **Diagnose and solve problems** from your ExpressRoute circuit in the Azure portal.
During a maintenance period, performance of the virtual network gateway might be
:::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/gateway-result.png" alt-text="Screenshot of the diagnostic results.":::
- If maintenance on your virtual network gateway occurred during a period when you experienced packet loss or latency, it's possible that the reduced capacity of the gateway contributed to connectivity issues you're experiencing with the target virtual network. Follow the recommended steps. To support a higher network throughput and avoid connectivity issues during future maintenance events, consider upgrading the [virtual network gateway SKU](expressroute-about-virtual-network-gateways.md#gwsku).
+ If maintenance occurred on your virtual network gateway during a period when you experienced packet loss or latency, it's possible that the reduced capacity of the gateway contributed to the connectivity issues you're experiencing with the targeted virtual network. Follow the recommended steps. To support a higher network throughput and avoid connectivity issues during future maintenance events, consider upgrading the [virtual network gateway SKU](expressroute-about-virtual-network-gateways.md#gwsku).
## Next steps
For more information or help, check out the following links:
<!--Image References--> [1]: ./media/expressroute-troubleshooting-expressroute-overview/expressroute-logical-diagram.png "Diagram that shows logical ExpressRoute connectivity and connections between a customer network, a provider network, and a Microsoft datacenter."
-[2]: ./media/expressroute-troubleshooting-expressroute-overview/portal-all-resources.png "All resources icon"
[3]: ./media/expressroute-troubleshooting-expressroute-overview/portal-overview.png "Overview icon" [4]: ./media/expressroute-troubleshooting-expressroute-overview/portal-circuit-status.png "Screenshot that shows an example of ExpressRoute essentials listed in the Azure portal."
-[5]: ./media/expressroute-troubleshooting-expressroute-overview/portal-private-peering.png "Screenshot that shows an example ExpressRoute peerings listed in the Azure portal."
+[5]: ./media/expressroute-troubleshooting-expressroute-overview/portal-private-peering.png "Screenshot that shows an example ExpressRoute peering listed in the Azure portal."
<!--Link References--> [Support]: https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade
For more information or help, check out the following links:
[CreatePeering]: ./expressroute-howto-routing-portal-resource-manager.md [ARP]: ./expressroute-troubleshooting-arp-resource-manager.md [HA]: ./designing-for-high-availability-with-expressroute.md
-[DR-Pvt]: ./designing-for-disaster-recovery-with-expressroute-privatepeering.md
frontdoor Front Door Ddos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-ddos.md
Front Door is a large scaled, globally distributed service. We have many custome
* You can create [custom WAF rules](../web-application-firewall/afds/waf-front-door-custom-rules.md) to automatically block and rate limit HTTP or HTTPS attacks that have known signatures. * Using the bot protection managed rule set provides protection against known bad bots. For more information, see [Configuring bot protection](../web-application-firewall/afds/waf-front-door-policy-configure-bot-protection.md).
+Refer to [Application DDoS protection](../web-application-firewall/shared/application-ddos-protection.md) for guidance on how to use Azure WAF to protect against DDoS attacks.
+ ## Protect VNet origins Enable [Azure DDoS Protection](../ddos-protection/ddos-protection-overview.md) on the origin VNet to protect your public IPs against DDoS attacks. DDoS Protection customers receive extra benefits including cost protection, SLA guarantee, and access to experts from the DDoS Rapid Response Team for immediate help during an attack.
frontdoor Front Door Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-overview.md
Modernize your internet first applications on Azure with Cloud Native experience
* Secure applications with built-in layer 3-4 DDoS protection, seamlessly attached [Web Application Firewall (WAF)](../web-application-firewall/afds/afds-overview.md), and [Azure DNS to protect your domains](how-to-configure-endpoints.md).
-* Protect your apps from malicious actors with Bot manager rules based on MicrosoftΓÇÖs own Threat Intelligence.
+* Protect your applications against layer 7 DDoS attacks using WAF. For more information, see [Application DDoS protection](../web-application-firewall/shared/application-ddos-protection.md).
+
+* Protect your applications from malicious actors with Bot manager rules based on MicrosoftΓÇÖs own Threat Intelligence.
* Privately connect to your backend behind Azure Front Door with [Private Link](private-link.md) and embrace a zero-trust access model.
governance Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/effects.md
Title: Understand how effects work description: Azure Policy definitions have various effects that determine how compliance is managed and reported. Previously updated : 02/22/2023 Last updated : 06/15/2023 + # Understand Azure Policy effects Each policy definition in Azure Policy has a single effect. That effect determines what happens when
These effects are currently supported in a policy definition:
## Interchanging effects
-Sometimes multiple effects can be valid for a given policy definition. Parameters are often used to specify allowed effect values so that a single definition can be more versatile. However, it's important to note that not all effects are interchangeable. Resource properties and logic in the policy rule can determine whether a certain effect is considered valid to the policy definition. For example, policy definitions with effect **AuditIfNotExists** require additional details in the policy rule that aren't required for policies with effect **Audit**. The effects also behave differently. **Audit** policies will assess a resource's compliance based on its own properties, while **AuditIfNotExists** policies will assess a resource's compliance based on a child or extension resource's properties.
+Sometimes multiple effects can be valid for a given policy definition. Parameters are often used to specify allowed effect values so that a single definition can be more versatile. However, it's important to note that not all effects are interchangeable. Resource properties and logic in the policy rule can determine whether a certain effect is considered valid to the policy definition. For example, policy definitions with effect **AuditIfNotExists** require other details in the policy rule that aren't required for policies with effect **Audit**. The effects also behave differently. **Audit** policies will assess a resource's compliance based on its own properties, while **AuditIfNotExists** policies will assess a resource's compliance based on a child or extension resource's properties.
-Below is some general guidance around interchangeable effects:
+The following list is some general guidance around interchangeable effects:
- **Audit**, **Deny**, and either **Modify** or **Append** are often interchangeable. - **AuditIfNotExists** and **DeployIfNotExists** are often interchangeable. - **Manual** isn't interchangeable.
manages the evaluation and outcome and reports the results back to Azure Policy.
- **denyAction** is evaluated last. After the Resource Provider returns a success code on a Resource Manager mode request,
-**AuditIfNotExists** and **DeployIfNotExists** evaluate to determine whether additional compliance
+**AuditIfNotExists** and **DeployIfNotExists** evaluate to determine whether more compliance
logging or action is required.
-Additionally, `PATCH` requests that only modify `tags` related fields restricts policy evaluation to
+`PATCH` requests that only modify `tags` related fields restrict policy evaluation to
policies containing conditions that inspect `tags` related fields. ## Append
-Append is used to add additional fields to the requested resource during creation or update. A
+Append is used to add more fields to the requested resource during creation or update. A
common example is specifying allowed IPs for a storage resource. > [!IMPORTANT]
Append evaluates before the request gets processed by a Resource Provider during
updating of a resource. Append adds fields to the resource when the **if** condition of the policy rule is met. If the append effect would override a value in the original request with a different value, then it acts as a deny effect and rejects the request. To append a new value to an existing
-array, use the **\[\*\]** version of the alias.
+array, use the `[*]` version of the alias.
When a policy definition using the append effect is run as part of an evaluation cycle, it doesn't make changes to resources that already exist. Instead, it marks any resource that meets the **if**
take either a single **field/value** pair or multiples. Refer to
### Append examples
-Example 1: Single **field/value** pair using a non-**\[\*\]**
+Example 1: Single **field/value** pair using a non-`[*]`
[alias](definition-structure.md#aliases) with an array **value** to set IP rules on a storage
-account. When the non-**\[\*\]** alias is an array, the effect appends the **value** as the entire
+account. When the non-`[*]` alias is an array, the effect appends the **value** as the entire
array. If the array already exists, a deny event occurs from the conflict. ```json
array. If the array already exists, a deny event occurs from the conflict.
} ```
-Example 2: Single **field/value** pair using an **\[\*\]** [alias](definition-structure.md#aliases)
-with an array **value** to set IP rules on a storage account. By using the **\[\*\]** alias, the
+Example 2: Single **field/value** pair using an `[*]` [alias](definition-structure.md#aliases)
+with an array **value** to set IP rules on a storage account. When you use the `[*]` alias, the
effect appends the **value** to a potentially pre-existing array. If the array doesn't exist yet, it's created.
resource is updated.
### Audit properties
-For a Resource Manager mode, the audit effect doesn't have any additional properties for use in the
+For a Resource Manager mode, the audit effect doesn't have any other properties for use in the
**then** condition of the policy definition. For a Resource Provider mode of `Microsoft.Kubernetes.Data`, the audit effect has the following
-additional subproperties of **details**. Use of `templateInfo` is required for new or updated policy
+subproperties of **details**. Use of `templateInfo` is required for new or updated policy
definitions as `constraintTemplate` is deprecated. - **templateInfo** (required)
definitions as `constraintTemplate` is deprecated.
- The CRD implementation of the Constraint template. Uses parameters passed via **values** as `{{ .Values.<valuename> }}`. In example 2 below, these values are `{{ .Values.excludedNamespaces }}` and `{{ .Values.allowedContainerImagesRegex }}`.
+- **constraintTemplate** (deprecated)
+ - Can't be used with `templateInfo`.
+ - Must be replaced with `templateInfo` when creating or updating a policy definition.
+ - The Constraint template CustomResourceDefinition (CRD) that defines new Constraints. The
+ template defines the Rego logic, the Constraint schema, and the Constraint parameters that are
+ passed via **values** from Azure Policy. For more information, go to [Gatekeeper constraints](https://open-policy-agent.github.io/gatekeeper/website/docs/howto/#constraints).
+- **constraintInfo** (optional)
+ - Can't be used with `constraint`, `constraintTemplate`, `apiGroups`, or `kinds`.
+ - If `constraintInfo` isn't provided, the constraint can be generated from `templateInfo` and policy.
+ - **sourceType** (required)
+ - Defines the type of source for the constraint. Allowed values: _PublicURL_ or _Base64Encoded_.
+ - If _PublicURL_, paired with property `url` to provide location of the constraint. The location must be publicly accessible.
+
+ > [!WARNING]
+ > Don't use SAS URIs or tokens in `url` or anything else that could expose a secret.
- **namespaces** (optional) - An _array_ of [Kubernetes namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/)
definitions as `constraintTemplate` is deprecated.
- **values** (optional) - Defines any parameters and values to pass to the Constraint. Each value must exist in the Constraint template CRD.-- **constraintTemplate** (deprecated)
- - Can't be used with `templateInfo`.
- - Must be replaced with `templateInfo` when creating or updating a policy definition.
- - The Constraint template CustomResourceDefinition (CRD) that defines new Constraints. The
- template defines the Rego logic, the Constraint schema, and the Constraint parameters that are
- passed via **values** from Azure Policy.
### Audit example
non-compliant.
### Deny properties
-For a Resource Manager mode, the deny effect doesn't have any additional properties for use in the
+For a Resource Manager mode, the deny effect doesn't have any more properties for use in the
**then** condition of the policy definition. For a Resource Provider mode of `Microsoft.Kubernetes.Data`, the deny effect has the following
-additional subproperties of **details**. Use of `templateInfo` is required for new or updated policy
+subproperties of **details**. Use of `templateInfo` is required for new or updated policy
definitions as `constraintTemplate` is deprecated. - **templateInfo** (required)
definitions as `constraintTemplate` is deprecated.
- The CRD implementation of the Constraint template. Uses parameters passed via **values** as `{{ .Values.<valuename> }}`. In example 2 below, these values are `{{ .Values.excludedNamespaces }}` and `{{ .Values.allowedContainerImagesRegex }}`.
+- **constraintTemplate** (deprecated)
+ - Can't be used with `templateInfo`.
+ - Must be replaced with `templateInfo` when creating or updating a policy definition.
+ - The Constraint template CustomResourceDefinition (CRD) that defines new Constraints. The
+ template defines the Rego logic, the Constraint schema, and the Constraint parameters that are
+ passed via **values** from Azure Policy. For more information, go to [Gatekeeper constraints](https://open-policy-agent.github.io/gatekeeper/website/docs/howto/#constraints).
+- **constraintInfo** (optional)
+ - Can't be used with `constraint`, `constraintTemplate`, `apiGroups`, or `kinds`.
+ - If `constraintInfo` isn't provided, the constraint can be generated from `templateInfo` and policy.
+ - **sourceType** (required)
+ - Defines the type of source for the constraint. Allowed values: _PublicURL_ or _Base64Encoded_.
+ - If _PublicURL_, paired with property `url` to provide location of the constraint. The location must be publicly accessible.
+
+ > [!WARNING]
+ > Don't use SAS URIs or tokens in `url` or anything else that could expose a secret.
- **namespaces** (optional) - An _array_ of [Kubernetes namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/)
definitions as `constraintTemplate` is deprecated.
- **values** (optional) - Defines any parameters and values to pass to the Constraint. Each value must exist in the Constraint template CRD.-- **constraintTemplate** (deprecated)
- - Can't be used with `templateInfo`.
- - Must be replaced with `templateInfo` when creating or updating a policy definition.
- - The Constraint template CustomResourceDefinition (CRD) that defines new Constraints. The
- template defines the Rego logic, the Constraint schema, and the Constraint parameters that are
- passed via **values** from Azure Policy. It's recommended to use the newer `templateInfo` to
- replace `constraintTemplate`.
### Deny example
location of the Constraint template to use in Kubernetes to limit the allowed co
} } ```+ ## DenyAction (preview)
-`DenyAction` is used to block requests on intended action to resources. The only supported action today is `DELETE`. This effect will help prevent any accidental deletion of critical resources.
+`DenyAction` is used to block requests on intended action to resources. The only supported action today is `DELETE`. This effect helps prevent any accidental deletion of critical resources.
### DenyAction evaluation
assignment.
> Under preview, assignments with `denyAction` effect will show a `Not Started` compliance state. #### Subscription deletion+ Policy won't block removal of resources that happens during a subscription deletion. #### Resource group deletion+ Policy will evaluate resources that support location and tags against `DenyAction` policies during a resource group deletion. Only policies that have the `cascadeBehaviors` set to `deny` in the policy rule will block a resource group deletion. Policy won't block removal of resources that don't support location and tags nor any policy with `mode:all`. #### Cascade deletion+ Cascade deletion occurs when deleting of a parent resource is implicitly deletes all its child resources. Policy won't block removal of child resources when a delete action targets the parent resources. For example, `Microsoft.Insights/diagnosticSettings` is a child resource of `Microsoft.Storage/storageaccounts`. If a `denyAction` policy targets `Microsoft.Insights/diagnosticSettings`, a delete call to the diagnostic setting (child) will fail, but a delete to the storage account (parent) will implicitly delete the diagnostic setting (child). [!INCLUDE [policy-denyAction](../../../../includes/azure-policy-deny-action.md)]
The **details** property of the DenyAction effect has all the subproperties that
- Default value is `deny`. ### DenyAction example
-Example: Deny any delete calls targeting database accounts that have a tag environment that equals prod. Since cascade behavior is set to deny, block any DELETE call that targets a resource group with an applicable database account.
+
+Example: Deny any delete calls targeting database accounts that have a tag environment that equals prod. Since cascade behavior is set to deny, block any `DELETE` call that targets a resource group with an applicable database account.
```json {
related resources to match and the template deployment to execute.
### DeployIfNotExists example
-Example: Evaluates SQL Server databases to determine whether transparentDataEncryption is enabled.
+Example: Evaluates SQL Server databases to determine whether `transparentDataEncryption` is enabled.
If not, then a deployment to enable is executed. ```json
The following operations are supported by Modify:
> [!IMPORTANT] > If you're managing tags, it's recommended to use Modify instead of Append as Modify provides
-> additional operation types and the ability to remediate existing resources. However, Append is
+> more operation types and the ability to remediate existing resources. However, Append is
> recommended if you aren't able to create a managed identity or Modify doesn't yet support the > alias for the resource property.
properties. Operation determines what the remediation task does to the tags, fie
tag is altered, and value defines the new setting for that tag. The following example makes the following tag changes: -- Sets the `environment` tag to "Test", even if it already exists with a different value.
+- Sets the `environment` tag to "Test" even if it already exists with a different value.
- Removes the tag `TempResource`. - Sets the `Dept` tag to the policy parameter _DeptName_ configured on the policy assignment.
with a parameterized value:
``` Example 3: Ensure that a storage account doesn't allow blob public access, the Modify operation
-is applied only when evaluating requests with API version greater or equals to '2019-04-01':
+is applied only when evaluating requests with an API version greater than or equal to `2019-04-01`:
```json "then": {
different scopes. Each of these assignments is also likely to have a different e
condition and effect for each policy is independently evaluated. For example: - Policy 1
- - Restricts resource location to 'westus'
+ - Restricts resource location to `westus`
- Assigned to subscription A - Deny effect - Policy 2
- - Restricts resource location to 'eastus'
+ - Restricts resource location to `eastus`
- Assigned to resource group B in subscription A - Audit effect This setup would result in the following outcome: -- Any resource already in resource group B in 'eastus' is compliant to policy 2 and non-compliant to
+- Any resource already in resource group B in `eastus` is compliant to policy 2 and non-compliant to
policy 1-- Any resource already in resource group B not in 'eastus' is non-compliant to policy 2 and
- non-compliant to policy 1 if not in 'westus'
-- Any new resource in subscription A not in 'westus' is denied by policy 1-- Any new resource in subscription A and resource group B in 'westus' is created and non-compliant
+- Any resource already in resource group B not in `eastus` is non-compliant to policy 2 and
+ non-compliant to policy 1 if not in `westus`
+- Any new resource in subscription A not in `westus` is denied by policy 1
+- Any new resource in subscription A and resource group B in `westus` is created and non-compliant
on policy 2 If both policy 1 and policy 2 had effect of deny, the situation changes to: -- Any resource already in resource group B not in 'eastus' is non-compliant to policy 2-- Any resource already in resource group B not in 'westus' is non-compliant to policy 1-- Any new resource in subscription A not in 'westus' is denied by policy 1
+- Any resource already in resource group B not in `eastus` is non-compliant to policy 2
+- Any resource already in resource group B not in `westus` is non-compliant to policy 1
+- Any new resource in subscription A not in `westus` is denied by policy 1
- Any new resource in resource group B of subscription A is denied Each assignment is individually evaluated. As such, there isn't an opportunity for a resource to
governance Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/scope.md
Title: Understand scope in Azure Policy description: Describes the concept of scope in Azure Resource Manager and how it applies to Azure Policy to control which resources Azure Policy evaluates. Previously updated : 08/17/2021 Last updated : 06/15/2023 + # Understand scope in Azure Policy There are many settings that determine which resources are capable of being evaluated and which resources are evaluated by Azure Policy. The primary concept for these controls is _scope_. Scope in Azure Policy is based on how scope works in Azure Resource Manager. For a high-level overview, see [Scope in Azure Resource Manager](../../../azure-resource-manager/management/overview.md#understand-scope).+ This article explains the importance of _scope_ in Azure Policy and its related objects and properties.
properties.
The first instance scope used by Azure Policy is when a policy definition is created. The definition may be saved in either a management group or a subscription. The location determines the scope to which the initiative or policy can be assigned. Resources must be within the resource hierarchy of
-the definition location to target for assignment.
+the definition location to target for assignment. [Resources covered by Azure Policy](../overview.md#resources-covered-by-azure-policy) describes how policies are evaluated.
If the definition location is a:
The following table is a comparison of the scope options:
|**Resource Manager object** | - | - | &#10004; | |**Requires modifying policy assignment object** | &#10004; | &#10004; | - |
-So how do you choose whether to use an exclusion or exemption? Typically exclusions are recommended to permanently bypass evaluation for a broad scope like a test environment which doesn't require the same level of governance. Exemptions are recommended for time-bound or more specific scenarios where a resource or resource hierarchy should still be tracked and would otherwise be evaluated, but there is a specific reason it should not be assessed for compliance.
+So how do you choose whether to use an exclusion or exemption? Typically exclusions are recommended to permanently bypass evaluation for a broad scope like a test environment that doesn't require the same level of governance. Exemptions are recommended for time-bound or more specific scenarios where a resource or resource hierarchy should still be tracked and would otherwise be evaluated, but there's a specific reason it shouldn't be assessed for compliance.
## Next steps
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/overview.md
Title: Overview of Azure Policy description: Azure Policy is a service in Azure, that you use to create, assign and, manage policy definitions in your Azure environment. Previously updated : 12/02/2022 Last updated : 06/15/2023 + # What is Azure Policy? Azure Policy helps to enforce organizational standards and to assess compliance at-scale. Through
in their environment.
Azure RBAC focuses on managing user [actions](../../role-based-access-control/resource-provider-operations.md) at different scopes. If
-control of an action is required based on user information, then Azure RBAC is the correct tool to use. Even if an individual has access to perform an action, if the result is a non-compliant resource, Azure Policy still
-blocks the create or update.
+control of an action is required based on user information, then Azure RBAC is the correct tool to use. Even if an individual has access to perform an action, if the result is a non-compliant resource, Azure Policy still blocks the create or update.
The combination of Azure RBAC and Azure Policy provides full scope control in Azure.
permissions.
If none of the built-in roles have the permissions required, create a [custom role](../../role-based-access-control/custom-roles.md).
-Azure Policy operations can have a significant impact on your Azure environment. Only the minimum set of
-permissions necessary to perform a task should be assigned and these permissions should not be granted
-to users who do not need them.
+Azure Policy operations can have a significant effect on your Azure environment. Only the minimum set of permissions necessary to perform a task should be assigned and these permissions shouldn't be granted to users who don't need them.
> [!NOTE] > The managed identity of a **deployIfNotExists** or **modify** policy assignment needs enough
to users who do not need them.
To create, edit, or delete Azure Virtual Network Manager dynamic group policies, you need: - Read and write Azure RBAC permissions to the underlying policy-- Azure RBAC permissions to join the network group (Note: Classic Admin authorization is not supported)
+- Azure RBAC permissions to join the network group (Classic Admin authorization isn't supported).
Specifically, the required resource provider permission is `Microsoft.Network/networkManagers/networkGroups/join/action`.
Specifically, the required resource provider permission is `Microsoft.Network/ne
### Resources covered by Azure Policy
-Azure Policy evaluates all Azure resources at or below subscription-level, including Arc enabled
-resources. For certain resource providers such as
-[Machine configuration](../machine-configuration/overview.md),
-[Azure Kubernetes Service](../../aks/intro-kubernetes.md), and
-[Azure Key Vault](../../key-vault/general/overview.md), there's a deeper integration for managing
-settings and objects. To find out more, see
-[Resource Provider modes](./concepts/definition-structure.md).
+Although a policy can be assigned at the management group level, _only_ resources at the subscription or resource group level are evaluated.
+
+For certain resource providers such as [Machine configuration](../machine-configuration/overview.md), [Azure Kubernetes Service](../../aks/intro-kubernetes.md), and [Azure Key Vault](../../key-vault/general/overview.md), there's a deeper integration for managing settings and objects. To find out more, go to [Resource Provider modes](./concepts/definition-structure.md#resource-provider-modes).
### Recommendations for managing policies
In Azure Policy, we offer several built-in policies that are available by defaul
specified by the deploy request. - **Not allowed resource types** (Deny): Prevents a list of resource types from being deployed.
-To implement these policy definitions (both built-in and custom definitions), you'll need to assign
+To implement these policy definitions (both built-in and custom definitions), you need to assign
them. You can assign any of these policies through the Azure portal, PowerShell, or Azure CLI. Policy evaluation happens with several different actions, such as policy assignment or policy
on the child management group or subscription level. If any assignment results i
denied, then the only way to allow the resource is to modify the denying assignment. Policy assignments always use the latest state of their assigned definition or initiative when
-evaluating resources. If a policy definition that is already assigned is changed all existing
+evaluating resources. If a policy definition that's already assigned is changed, all existing
assignments of that definition will use the updated logic when evaluating. For more information on setting assignments through the portal, see [Create a policy assignment to
healthcare-apis Dicom Change Feed Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-change-feed-overview.md
The Change Feed provides logs of all the changes that occur in DICOM service. The Change Feed provides ordered, guaranteed, immutable, and read-only logs of these changes. The Change Feed offers the ability to go through the history of DICOM service and acts upon the creates and deletes in the service.
-Client applications can read these logs at any time, either in streaming, or in batch mode. The Change Feed enables you to build efficient and scalable solutions that process change events that occur in your DICOM service.
+Client applications can read these logs at any time in batches of any size. The Change Feed enables you to build efficient and scalable solutions that process change events that occur in your DICOM service.
You can process these change events asynchronously, incrementally or in-full. Any number of client applications can independently read the Change Feed, in parallel, and at their own pace.
+As of v2 of the API, the Change Feed can be queried for a particular time window.
+ Make sure to specify the version as part of the URL when making requests. More information can be found in the [API Versioning for DICOM service Documentation](api-versioning-dicom-service.md). ## API Design
-The API exposes two `GET` endpoints for interacting with the Change Feed. A typical flow for consuming the Change Feed is [provided below](#example-usage-flow).
+The API exposes two `GET` endpoints for interacting with the Change Feed. A typical flow for consuming the Change Feed is [provided below](#usage).
Verb | Route | Returns | Description : | :-- | :- | :
-GET | /changefeed | JSON Array | [Read the Change Feed](#read-change-feed)
-GET | /changefeed/latest | JSON Object | [Read the latest entry in the Change Feed](#get-latest-change-feed-item)
+GET | /changefeed | JSON Array | [Read the Change Feed](#change-feed)
+GET | /changefeed/latest | JSON Object | [Read the latest entry in the Change Feed](#latest-change-feed)
### Object model Field | Type | Description : | :-- | :
-Sequence | int | The sequence ID that can be used for paging (via offset) or anchoring
+Sequence | long | The unique ID per change event
StudyInstanceUid | string | The study instance UID SeriesInstanceUid | string | The series instance UID SopInstanceUid | string | The sop instance UID
current | This instance is the current version.
replaced | This instance has been replaced by a new version. deleted | This instance has been deleted and is no longer available in the service.
-### Read Change Feed
+## Change Feed
+
+The Change Feed resource is a collection of events that have occurred within the DICOM server.
+
+### Version 2
+
+#### Request
+```http
+GET /changefeed?startTime={datetime}&endTime={datetime}&offset={int}&limit={int}&includemetadata={bool} HTTP/1.1
+Accept: application/json
+Content-Type: application/json
+```
-**Route**: /changefeed?offset={int}&limit={int}&includemetadata={**true**|false}
+#### Response
```json [ {
deleted | This instance has been deleted and is no longer available in the serv
"Timestamp": "2020-03-04T01:03:08.4834Z", "State": "current|replaced|deleted", "Metadata": {
- "actual": "metadata"
+ // DICOM JSON
} }, {
deleted | This instance has been deleted and is no longer available in the serv
"Timestamp": "2020-03-05T07:13:16.4834Z", "State": "current|replaced|deleted", "Metadata": {
- "actual": "metadata"
+ // DICOM JSON
}
- }
- ...
+ },
+ //...
] ``` #### Parameters
-Name | Type | Description
-:-- | : | :
-offset | int | The number of records to skip before the values to return
-limit | int | The number of records to return (default: 10, min: 1, max: 100)
-includemetadata | bool | Whether or not to include the metadata (default: true)
+Name | Type | Description | Default | Min | Max |
+:-- | :- | :- | : | :-- | :-- |
+offset | long | The number of events to skip from the beginning of the result set | `0` | `0` | |
+limit | int | The maximum number of events to return | `100` | `1` | `200` |
+startTime | DateTime | The inclusive start time for change events | `"0001-01-01T00:00:00Z"` | `"0001-01-01T00:00:00Z"` | `"9999-12-31T23:59:59.9999998Z"`|
+endTime | DateTime | The exclusive end time for change events | `"9999-12-31T23:59:59.9999999Z"` | `"0001-01-01T00:00:00.0000001"` | `"9999-12-31T23:59:59.9999999Z"` |
+includeMetadata | bool | Indicates whether or not to include the DICOM metadata | `true` | | |
+
+### Version 1
+
+#### Request
+```http
+GET /changefeed?offset={int}&limit={int}&includemetadata={bool} HTTP/1.1
+Accept: application/json
+Content-Type: application/json
+```
-### Get latest Change Feed item
+#### Response
+```json
+[
+ {
+ "Sequence": 1,
+ "StudyInstanceUid": "{uid}",
+ "SeriesInstanceUid": "{uid}",
+ "SopInstanceUid": "{uid}",
+ "Action": "create|delete",
+ "Timestamp": "2020-03-04T01:03:08.4834Z",
+ "State": "current|replaced|deleted",
+ "Metadata": {
+ // DICOM JSON
+ }
+ },
+ {
+ "Sequence": 2,
+ "StudyInstanceUid": "{uid}",
+ "SeriesInstanceUid": "{uid}",
+ "SopInstanceUid": "{uid}",
+ "Action": "create|delete",
+ "Timestamp": "2020-03-05T07:13:16.4834Z",
+ "State": "current|replaced|deleted",
+ "Metadata": {
+ // DICOM JSON
+ }
+ },
+ // ...
+]
+```
-**Route**: /changefeed/latest?includemetadata={**true**|false}
+#### Parameters
+Name | Type | Description | Default | Min | Max |
+:-- | :- | :- | : | :-- | :-- |
+offset | long | The exclusive starting sequence number for events | `0` | `0` | |
+limit | int | The maximum value of the sequence number relative to the offset. For example, if the offset is 10 and the limit is 5, then the maximum sequence number returned will be 15. | `10` | `1` | `100` |
+includeMetadata | bool | Indicates whether or not to include the DICOM metadata | `true` | | |
+
+## Latest Change Feed
+The latest Change Feed resource represents the latest event that has occurred within the DICOM Server.
+
+### Request
+```http
+GET /changefeed/latest?includemetadata={bool} HTTP/1.1
+Accept: application/json
+Content-Type: application/json
+```
+### Response
```json { "Sequence": 2,
includemetadata | bool | Whether or not to include the metadata (default: true)
"Timestamp": "2020-03-05T07:13:16.4834Z", "State": "current|replaced|deleted", "Metadata": {
- "actual": "metadata"
+ //DICOM JSON
} } ```
-#### Parameters
+### Parameters
-Name | Type | Description
-:-- | : | :
-includemetadata | bool | Whether or not to include the metadata (default: true)
+Name | Type | Description | Default |
+:-- | : | :- | : |
+includeMetadata | bool | Indicates whether or not to include the metadata | `true` |
## Usage
-### Example usage flow
-
-Below is the usage flow for an example application that does other processing on the instances within DICOM service.
-
-1. Application that wants to monitor the Change Feed starts.
-2. It determines if there's a current state that it should start with:
- * If it has a state, it uses the offset (sequence) stored.
- * If it has never started and wants to start from beginning, it uses `offset=0`.
- * If it only wants to process from now, it queries `/changefeed/latest` to obtain the last sequence.
-3. It queries the Change Feed with the given offset `/changefeed?offset={offset}`
-4. If there are entries:
- * It performs extra processing.
- * It updates its current state.
- * It starts again above at step 2.
-5. If there are no entries, it sleeps for a configured amount of time and starts back at step 2.
+### User application
+
+#### Version 2
+
+1. An application regularly queries the Change Feed on some time interval
+ * For example, if querying every hour, a query for the Change Feed may look like `/changefeed?startTime=2023-05-10T16:00:00Z&endTime=2023-05-10T17:00:00Z`
+ * If starting from the beginning, the Change Feed query may omit the `startTime` to read all of the changes up to, but excluding, the `endTime`
+ * E.g. `/changefeed?endTime=2023-05-10T17:00:00Z`
+2. Based on the `limit` (if provided), an application continues to query for additional pages of change events if the number of returned events is equal to the `limit` (or default) by updating the offset on each subsequent query
+ * For example, if the `limit` is `100`, and 100 events are returned, then the subsequent query would include `offset=100` to fetch the next "page" of results. The below queries demonstrate the pattern:
+ * `/changefeed?offset=0&limit=100&startTime=2023-05-10T16:00:00Z&endTime=2023-05-10T17:00:00Z`
+ * `/changefeed?offset=100&limit=100&startTime=2023-05-10T16:00:00Z&endTime=2023-05-10T17:00:00Z`
+ * `/changefeed?offset=200&limit=100&startTime=2023-05-10T16:00:00Z&endTime=2023-05-10T17:00:00Z`
+ * If fewer events than the `limit` are returned, then the application can assume that there are no more results within the time range
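The v2 flow above could be scripted roughly as follows. This is a minimal sketch only: the service URL and token are placeholders, and error handling, retries, and authentication refresh are omitted.

```python
from typing import Iterator

import requests

# Placeholder values - substitute your own DICOM service URL and access token.
BASE_URL = "https://<workspace>-<dicom-service>.dicom.azurehealthcareapis.com/v2"
HEADERS = {"Authorization": "Bearer <access-token>", "Accept": "application/json"}


def read_change_feed_window(start_time: str, end_time: str, limit: int = 100) -> Iterator[dict]:
    """Yield every change event in [start_time, end_time) by paging with offset."""
    offset = 0
    while True:
        params = {
            "startTime": start_time,
            "endTime": end_time,
            "offset": offset,
            "limit": limit,
            "includeMetadata": "true",
        }
        response = requests.get(f"{BASE_URL}/changefeed", headers=HEADERS, params=params)
        response.raise_for_status()
        events = response.json()
        yield from events
        if len(events) < limit:
            break  # Fewer events than the limit: no more results in this window.
        offset += limit


# Example: process one hour of changes.
for event in read_change_feed_window("2023-05-10T16:00:00Z", "2023-05-10T17:00:00Z"):
    print(event["Sequence"], event["Action"], event["SopInstanceUid"])
```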
+
+#### Version 1
+
+1. An application determines from which sequence number it wishes to start reading change events:
+ * To start from the first event, the application should use `offset=0`
+ * To start from the latest event, the application should specify the `offset` parameter with the value of `Sequence` from the latest change event using the `/changefeed/latest` resource
+2. On some regular polling interval, the application performs the following actions:
+ * Fetches the latest sequence number from the `/changefeed/latest` endpoint
+ * Fetches the next set of changes for processing by querying the change feed with the current offset
+ * For example, if the application has currently processed up to sequence number 15 and it only wants to process at most 5 events at once, then it should use the URL `/changefeed?offset=15&limit=5`
+   * Processes any entries returned by the `/changefeed` resource
+ * Updates its current sequence number to either:
+ 1. The maximum sequence number returned by the `/changefeed` resource
+ 2. The `offset` + `limit` if no change events were returned from the `/changefeed` resource, but the latest sequence number returned by `/changefeed/latest` is greater than the current sequence number used for `offset`
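A minimal sketch of the v1 polling loop above, with the same caveats (placeholder URL and token, no error handling); `process` stands in for whatever the application does with each change event.

```python
import time

import requests

# Placeholder values - substitute your own DICOM service URL and access token.
BASE_URL = "https://<workspace>-<dicom-service>.dicom.azurehealthcareapis.com/v1"
HEADERS = {"Authorization": "Bearer <access-token>", "Accept": "application/json"}
POLL_INTERVAL_SECONDS = 60
LIMIT = 5


def get_latest_sequence() -> int:
    """Return the sequence number of the most recent change event."""
    response = requests.get(f"{BASE_URL}/changefeed/latest", headers=HEADERS,
                            params={"includeMetadata": "false"})
    response.raise_for_status()
    return response.json()["Sequence"]


def process(event: dict) -> None:
    """Application-specific handling of a single change event."""
    print(event["Sequence"], event["Action"], event["SopInstanceUid"])


def poll_change_feed(offset: int = 0) -> None:
    """Read change events continuously, advancing the offset as described above."""
    while True:
        latest = get_latest_sequence()
        response = requests.get(f"{BASE_URL}/changefeed", headers=HEADERS,
                                params={"offset": offset, "limit": LIMIT,
                                        "includeMetadata": "true"})
        response.raise_for_status()
        events = response.json()
        if events:
            for event in events:
                process(event)
            offset = max(e["Sequence"] for e in events)
        elif latest > offset:
            # No events in this range, but newer events exist; skip the gap.
            offset += LIMIT
        time.sleep(POLL_INTERVAL_SECONDS)
```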
### Other potential usage patterns
healthcare-apis Events Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-faqs.md
Title: Frequently asked questions about Events - Azure Health Data Services
-description: This article provides answers to the frequently asked questions about Events.
+description: Learn about the frequently asked questions about Events.
Previously updated : 04/04/2022 Last updated : 06/16/2022
## Events: The basics
-### Can I use Events with a different FHIR/DICOM service other than the Azure Health Data Services FHIR/DICOM service?
+## Can I use Events with a different FHIR/DICOM service other than the Azure Health Data Services FHIR/DICOM service?
No. The Azure Health Data Services Events feature currently supports only the Azure Health Data Services FHIR and DICOM services.
-### What FHIR resource events does Events support?
+## What FHIR resource events does Events support?
Events are generated from the following FHIR service types:
Events are generated from the following FHIR service types:
For more information about the FHIR service delete types, see [FHIR REST API capabilities for Azure Health Data Services FHIR service](../../healthcare-apis/fhir/fhir-rest-api-capabilities.md).
-### Does Events support FHIR bundles?
+## Does Events support FHIR bundles?
Yes. The Events feature is designed to emit notifications of data changes at the FHIR resource level.
Events support these [FHIR bundle types](http://hl7.org/fhir/R4/valueset-bundle-
> [!NOTE] > Events are not sent in the sequence of the data operations in the FHIR bundle.
-### What DICOM image events does Events support?
+## What DICOM image events does Events support?
Events are generated from the following DICOM service types:
Events are generated from the following DICOM service types:
- **DicomImageUpdated** - The event emitted after a DICOM image gets updated successfully.
-### What is the payload of an Events message?
+## What is the payload of an Events message?
For a detailed description of the Events message structure and both required and nonrequired elements, see [Events troubleshooting guide](events-troubleshooting-guide.md).
-### What is the throughput for the Events messages?
+## What is the throughput for the Events messages?
The throughput of the FHIR or DICOM service and the Event Grid govern the throughput of FHIR and DICOM events. When a request made to the FHIR service is successful, it returns a 2xx HTTP status code. It also generates a FHIR resource or DICOM image changing event. The current limitation is 5,000 events/second per workspace for all FHIR or DICOM service instances in it.
-### How am I charged for using Events?
+## How am I charged for using Events?
There are no extra charges for using [Azure Health Data Services Events](https://azure.microsoft.com/pricing/details/health-data-services/). However, applicable charges for the [Event Grid](https://azure.microsoft.com/pricing/details/event-grid/) are assessed against your Azure subscription.
-### How do I subscribe to multiple FHIR and/or DICOM services in the same workspace separately?
+## How do I subscribe to multiple FHIR and/or DICOM services in the same workspace separately?
You can use the Event Grid filtering feature. There are unique identifiers in the event message payload to differentiate different accounts and workspaces. You can find a global unique identifier for workspace in the `source` field, which is the Azure Resource ID. You can locate the unique FHIR account name in that workspace in the `data.resourceFhirAccount` field. You can locate the unique DICOM account name in that workspace in the `data.serviceHostName` field. When you create a subscription, you can use the filtering operators to select the events you want to get in that subscription. :::image type="content" source="media\event-grid\event-grid-filters.png" alt-text="Screenshot of the Event Grid filters tab." lightbox="media\event-grid\event-grid-filters.png":::
-### Can I use the same subscriber for multiple workspaces, FHIR accounts, or DICOM accounts?
+## Can I use the same subscriber for multiple workspaces, FHIR accounts, or DICOM accounts?
Yes. We recommend that you use different subscribers for each individual FHIR or DICOM account to process in isolated scopes.
-### Is Event Grid compatible with HIPAA and HITRUST compliance obligations?
+## Is Event Grid compatible with HIPAA and HITRUST compliance obligations?
Yes. Event Grid supports customer's Health Insurance Portability and Accountability Act (HIPAA) and Health Information Trust Alliance (HITRUST) obligations. For more information, see [Microsoft Azure Compliance Offerings](https://azure.microsoft.com/resources/microsoft-azure-compliance-offerings/).
-### What is the expected time to receive an Events message?
+## What is the expected time to receive an Events message?
On average, you should receive your event message within one second after a successful HTTP request. 99.99% of the event messages should be delivered within five seconds unless the limitation of either the FHIR service, DICOM service, or [Event Grid](../../event-grid/quotas-limits.md) has been met.
-### Is it possible to receive duplicate Events messages?
+## Is it possible to receive duplicate Events messages?
Yes. Event Grid guarantees at-least-once delivery of Events messages in its push mode. An event delivery request might occasionally return a transient failure status code. In this situation, Event Grid considers the delivery a failure and resends the Events message. For more information, see [Azure Event Grid delivery and retry](../../event-grid/delivery-and-retry.md). Generally, we recommend that developers ensure idempotency in the event subscriber. Both the event ID and the combination of all fields in the `data` property of the message content are unique for each event; the subscriber can rely on either to deduplicate.
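As a rough sketch of that recommendation, a subscriber might track the IDs of events it has already handled and skip duplicates. This assumes events arrive as parsed JSON objects with the standard Event Grid `id` field; a production subscriber would persist the processed IDs durably rather than in memory.

```python
processed_ids: set[str] = set()  # In production, use durable storage instead of memory.


def handle_event(event: dict) -> None:
    """Handle an Event Grid event at most once per event ID."""
    event_id = event["id"]
    if event_id in processed_ids:
        return  # Duplicate delivery; already handled.
    # ... application-specific processing of event["data"] goes here ...
    processed_ids.add(event_id)
```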
-### More frequently asked questions
+## More frequently asked questions
[FAQs about the Azure Health Data Services](../healthcare-apis-faqs.md)
healthcare-apis Concepts Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/concepts-machine-learning.md
Previously updated : 04/28/2023 Last updated : 06/15/2023
In this article, we explore using the MedTech service and the Azure Machine Lear
## The MedTech service and Azure Machine Learning Service reference architecture
-The MedTech service enables IoT devices to seamless integration with FHIR services. This reference architecture is designed to accelerate adoption of Internet of Things (IoT) projects. This solution uses Azure Databricks for the Machine Learning (ML) compute. However, Azure Machine Learning Services with Kubernetes or a partner ML solution could fit into the Machine Learning Scoring Environment.
+The MedTech service enables IoT devices to seamlessly integrate with FHIR services. This reference architecture is designed to accelerate adoption of Internet of Things (IoT) projects. This solution uses Azure Databricks for the Machine Learning (ML) compute. However, Azure Machine Learning Services with Kubernetes or a partner ML solution could fit into the Machine Learning Scoring Environment.
The four line colors show the different parts of the data journey.
The four line colors show the different parts of the data journey.
:::image type="content" source="media/concepts-machine-learning/iot-connector-machine-learning.png" alt-text="Screenshot of the MedTech service and Machine Learning Service reference architecture." lightbox="media/concepts-machine-learning/iot-connector-machine-learning.png":::
-**Data ingest ΓÇô Steps 1 through 5**
+## Data ingest: Steps 1 - 5
1. Data from IoT device or via device gateway sent to Azure IoT Hub/Azure IoT Edge. 2. Data from Azure IoT Edge sent to Azure IoT Hub. 3. Copy of raw IoT device data sent to a secure storage environment for device administration.
-4. PHI IoT payload moves from Azure IoT Hub to the MedTech service. The MedTech service icon represents multiple Azure services.
-5. Three parts to number 5:
- a. The MedTech service requests Patient resource from the FHIR service.
- b. The FHIR service sends Patient resource back to the MedTech service.
- c. IoT Patient Observation is record in the FHIR service.
+4. IoT payload moves from Azure IoT Hub to the MedTech service. The MedTech service icon represents multiple Azure services.
+5. Three parts to number five:
+ 1. The MedTech service requests Patient resource from the FHIR service.
+ 2. The FHIR service sends Patient resource back to the MedTech service.
+   3. IoT Patient Observation is recorded in the FHIR service.
-**Machine Learning and AI Data Route ΓÇô Steps 6 through 11**
+## Machine Learning and AI Data Route: Steps 6 - 11
6. Normalized ungrouped data stream sent to an Azure Function (ML Input). 7. Azure Function (ML Input) requests Patient resource to merge with IoT payload.
-8. IoT payload with PHI is sent to an event hub for distribution to Machine Learning compute and storage.
-9. PHI IoT payload is sent to Azure Data Lake Storage Gen 2 for scoring observation over longer time windows.
-10. PHI IoT payload is sent to Azure Databricks for windowing, data fitting, and data scoring.
-11. The Azure Databricks requests more patient data from data lake as needed. a. Azure Databricks also sends a copy of the scored data to the data lake.
+8. IoT payload is sent to an event hub for distribution to Machine Learning compute and storage.
+9. IoT payload is sent to Azure Data Lake Storage Gen 2 for scoring observation over longer time windows.
+10. IoT payload is sent to Azure Databricks for windowing, data fitting, and data scoring.
+11. The Azure Databricks requests more patient data from data lake as needed.
+ 1. Azure Databricks also sends a copy of the scored data to the data lake.
-**Notification and Care Coordination ΓÇô Steps 12 - 18**
+## Notification and Care Coordination: Steps 12 - 18
**Hot path** 12. Azure Databricks sends a payload to an Azure Function (ML Output).
-13. RiskAssessment and/or Flag resource submitted to FHIR service. a. For each observation window, a RiskAssessment resource is submitted to the FHIR service. b. For observation windows where the risk assessment is outside the acceptable range a Flag resource should also be submitted to the FHIR service.
+13. RiskAssessment and/or Flag resource submitted to FHIR service.
+ 1. For each observation window, a RiskAssessment resource is submitted to the FHIR service.
+ 2. For observation windows where the risk assessment is outside the acceptable range a Flag resource should also be submitted to the FHIR service.
14. Scored data sent to data repository for routing to appropriate care team. Azure SQL Server is the data repository used in this design because of its native interaction with Power BI. 15. Power BI Dashboard is updated with Risk Assessment output in under 15 minutes.
For an overview of the MedTech service, see
> [!div class="nextstepaction"] > [What is the MedTech service?](overview.md)
+To learn about the MedTech service device message data transformation, see
+
+> [!div class="nextstepaction"]
+> [Understand the MedTech service device data processing stages](overview-of-device-data-processing-stages.md)
+
+To learn about methods for deploying the MedTech service, see
+
+> [!div class="nextstepaction"]
+> [Choose a deployment method for the MedTech service](deploy-new-choose.md)
+ FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Concepts Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/concepts-power-bi.md
Previously updated : 04/28/2023 Last updated : 06/15/2023
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-In this article, we explore using the MedTech service and Microsoft Power Business Intelligence (BI).
+In this article, we explore using the MedTech service and Microsoft Power Business Intelligence (Power BI).
## The MedTech service and Power BI reference architecture This reference architecture shows the basic components of using the Microsoft cloud services to enable Power BI on top of Internet of Things (IoT) and FHIR data.
-You can even embed Power BI dashboards inside the Microsoft Teams client to further enhance care team coordination. For more information on embedding Power BI in Teams, visit [here](/power-bi/collaborate-share/service-embed-report-microsoft-teams).
+You can even embed Power BI dashboards inside the Microsoft Teams client to further enhance care team coordination. For more information on embedding Power BI in Teams, see [Embed Power BI content in Microsoft Teams](/power-bi/collaborate-share/service-embed-report-microsoft-teams).
:::image type="content" source="media/concepts-power-bi/iot-connector-power-bi.png" alt-text="Screenshot of the MedTech service and Power BI." lightbox="media/concepts-power-bi/iot-connector-power-bi.png":::
For an overview of the MedTech service, see
> [!div class="nextstepaction"] > [What is the MedTech service?](overview.md)
+To learn about the MedTech service device message data transformation, see
+
+> [!div class="nextstepaction"]
+> [Understand the MedTech service device data processing stages](overview-of-device-data-processing-stages.md)
+
+To learn about methods for deploying the MedTech service, see
+
+> [!div class="nextstepaction"]
+> [Choose a deployment method for the MedTech service](deploy-new-choose.md)
+ FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Concepts Teams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/concepts-teams.md
Previously updated : 04/28/2023 Last updated : 06/15/2023
When combining the MedTech service, the FHIR service, and Teams, you can enable
The diagram is a MedTech service to Teams notifications conceptual architecture for enabling the MedTech service, the FHIR service, and the Teams Patient App.
-You can even embed Power BI Dashboards inside the Microsoft Teams client. For more information on embedding Power BI in Microsoft Team visit [here](/power-bi/collaborate-share/service-embed-report-microsoft-teams).
+You can even embed Power BI Dashboards inside the Microsoft Teams client. For more information on embedding Power BI in Microsoft Teams, see [Embed Power BI content in Microsoft Teams](/power-bi/collaborate-share/service-embed-report-microsoft-teams).
:::image type="content" source="media/concepts-teams/iot-connector-teams.png" alt-text="Screenshot of the MedTech service and Teams." lightbox="media/concepts-teams/iot-connector-teams.png":::
For an overview of the MedTech service, see
> [!div class="nextstepaction"] > [What is the MedTech service?](overview.md)
+To learn about the MedTech service device message data transformation, see
+
+> [!div class="nextstepaction"]
+> [Understand the MedTech service device data processing stages](overview-of-device-data-processing-stages.md)
+
+To learn about methods for deploying the MedTech service, see
+
+> [!div class="nextstepaction"]
+> [Choose a deployment method for the MedTech service](deploy-new-choose.md)
+ FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/frequently-asked-questions.md
Previously updated : 06/02/2023 Last updated : 06/15/2023
## MedTech service: The basics
-### Where is the MedTech service available?
+## Where is the MedTech service available?
The MedTech service is available in these Azure regions: [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=health-data-services).
-### Can I use the MedTech service with a different FHIR service other than the Azure Health Data Services FHIR service?
+## Can I use the MedTech service with a different FHIR service other than the Azure Health Data Services FHIR service?
No. The MedTech service currently only supports the Azure Health Data Services FHIR service for the persistence of transformed device data. The open-source version of the MedTech service supports the use of different FHIR services. To learn about the MedTech service open-source projects, see [Open-source projects](git-projects.md).
-### What versions of FHIR does the MedTech service support?
+## What versions of FHIR does the MedTech service support?
The MedTech service supports the [HL7 FHIR&#174; R4](https://www.hl7.org/implement/standards/product_brief.cfm?product_id=491) standard.
-### Why do I have to provide device and FHIR destination mappings to the MedTech service?
+## Why do I have to provide device and FHIR destination mappings to the MedTech service?
The MedTech service requires device and FHIR destination mappings to perform normalization and transformation processes on device data. To learn how the MedTech service transforms device data into [FHIR Observations](https://www.hl7.org/fhir/observation.html), see [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md).
-### Is JsonPathContent still supported by the MedTech service device mapping?
+## Is JsonPathContent still supported by the MedTech service device mapping?
Yes. JsonPathContent can be used as a template type within [CollectionContent](overview-of-device-mapping.md#collectioncontent). However, we recommend using [CalculatedContent](how-to-use-calculatedcontent-templates.md) because it supports all of the features of JsonPathContent plus more advanced features.
-### How long does it take for device data to show up in the FHIR service?
+## How long does it take for device data to show up in the FHIR service?
The MedTech service buffers [FHIR Observations](https://www.hl7.org/fhir/observation.html) created during the transformation stage and provides near real-time processing. However, this buffer can delay the persistence of FHIR Observations to the FHIR service by up to approximately five minutes. To learn how the MedTech service transforms device data into FHIR Observations, see [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md).
-### Why are the device messages added to the event hub not showing up as FHIR Observations in the FHIR service?
+## Why are the device messages added to the event hub not showing up as FHIR Observations in the FHIR service?
> [!TIP] > Having access to MedTech service logs is essential for troubleshooting and assessing the overall health and performance of your MedTech service.
The MedTech service buffers [FHIR Observations](https://www.hl7.org/fhir/observa
\* Reference [Deploy the MedTech service using the Azure portal](deploy-manual-portal.md#configure-the-destination-tab) for a functional description of the MedTech service resolution types (**Create** or **Lookup**).
-### Does the MedTech service perform backups of device messages?
+## Does the MedTech service perform backups of device messages?
No. The MedTech service doesn't back up the device messages that are sent to the event hub. The event hub owner controls the device message retention period within their event hub, which can be from one to 90 days. Event hubs can be deployed in [three different service tiers](../../event-hubs/event-hubs-quotas.md?source=recommendations#basic-vs-standard-vs-premium-vs-dedicated-tiers). Message retention limits are tier-dependent: Basic, one day; Standard, one to seven days; Premium, 90 days. If the MedTech service successfully processes the device data, it's persisted in the FHIR service, and the FHIR service backup policy applies. To learn more about event hub message retention, see [What is the maximum retention period for events?](/azure/event-hubs/event-hubs-faq#what-is-the-maximum-retention-period-for-events-)
-### What are the subscription quota limits for the MedTech service?
+## What are the subscription quota limits for the MedTech service?
* (25) MedTech services per Azure subscription (not adjustable). * (10) MedTech services per Azure Health Data Services workspace (not adjustable).
To learn more about event hub message retention, see [What is the maximum retent
\* FHIR destination is a child resource of the MedTech service.
-### Can I use the MedTech service with device messages from Apple&#174;, Google&#174;, or Fitbit&#174; devices?
+## Can I use the MedTech service with device messages from Apple&#174;, Google&#174;, or Fitbit&#174; devices?
Yes. The MedTech service supports device messages from all these vendors through the open-source version of the MedTech service.
import-export Storage Import Export Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-requirements.md
To prepare the hard drives using the WAImportExport tool, the following **64-bit
## Supported storage accounts
+> [!Note]
+> Classic storage accounts will not be supported starting **August 1, 2023**.
+ Azure Import/Export service supports the following types of storage accounts: - Standard General Purpose v2 storage accounts (recommended for most scenarios)
internet-peering Overview Peering Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/overview-peering-service.md
Previously updated : 5/22/2020 Last updated : 05/22/2020
Internet peering refers to any interconnection between Microsoft's global network (AS8075) and Carriers or Service Providers network. A Service Provider can become a Peering Service partner by implementing the Peering Service partnership requirements explained below to provide reliable and high-performing public connectivity with optimal routing from the customer to the Microsoft network. ## About Peering Service+ Peering Service is a partnership program with key service providers to provide best-in-class public Internet connectivity to their enterprise users. Partners who are part of the program will have direct, highly available, geo-redundant connections and optimized routing to Microsoft. Peering Service is an addition to the Microsoft connectivity portfolio: * ExpressRoute for private connectivity to IaaS or PaaS resources (support for private IP space) * Partner based connectivity
In the figure above each branch office of a global enterprise connects to the ne
* Route analytics and statistics - Events for Border Gateway Protocol ([BGP](https://en.wikipedia.org/wiki/Border_Gateway_Protocol)) route anomalies (leak/hijack detection), and suboptimal routing. ## Peering Service partnership requirements+ * Connectivity to Microsoft Cloud at a location nearest to customer. A partner Service Provider will route user traffic to Microsoft edge closest to user. Similarly, on traffic towards the user, Microsoft will route traffic (using BGP tag) to the edge location closest to the user and Service Provider will deliver the traffic to the user. * Partner will maintain high available, high throughput, and geo-redundant connectivity with Microsoft Global Network. * Partner can utilize their existing peering to support Peering Service if it meets the requirement. ## FAQ
-For frequently asked questions, see [Peering Service - FAQ](service-faqs.yml).
+
+For frequently asked questions, see [Peering Service FAQ](service-faqs.yml).
## Next steps
internet-peering Walkthrough Communications Services Partner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/walkthrough-communications-services-partner.md
Title: Azure Internet peering for Communications Services walkthrough
-description: Azure Internet peering for Communications Services walkthrough.
+ Title: Internet peering for Communications Services walkthrough
+description: Learn about Internet peering for Communications Services, its requirements, the steps to establish direct interconnect, and how to register and activate a prefix.
Previously updated : 10/10/2022 Last updated : 06/15/2023
-# Azure Internet peering for Communications Services walkthrough
+# Internet peering for Communications Services walkthrough
-This section explains the steps a Communications Services Provider needs to follow to establish a Direct interconnect with Microsoft.
+In this article, you learn steps to establish a Direct interconnect between a Communications Services Provider and Microsoft.
-**Communications Services Providers:** Communications Services Providers are the organizations which offer communication services (Communications, messaging, conferencing etc.) and are looking to integrate their communications services infrastructure (SBC/SIP Gateway etc.) with Azure Communication Services and Microsoft Teams.
+**Communications Services Providers** are the organizations that offer communication services (messaging, conferencing, and other communications services) and want to integrate their communications services infrastructure (SBC, SIP gateways, and other infrastructure devices) with Azure Communication Services and Microsoft Teams.
-Azure Internet peering support Communications Services Providers to establish direct interconnect with Microsoft at any of its edge sites (pop locations). The list of all the public edges sites is available in [PeeringDB](https://www.peeringdb.com/net/694).
+Internet peering enables Communications Services Providers to establish a direct interconnect with Microsoft at any of its edge sites (POP locations). The list of all the public edge sites is available in [PeeringDB](https://www.peeringdb.com/net/694).
-The Azure Internet peering provides highly reliable and QoS (Quality of Service) enabled interconnect for Communications services to ensure high quality and performance centric services.
+Internet peering provides a highly reliable, QoS (Quality of Service) enabled interconnect for Communications Services to ensure high-quality, performance-centric services.
## Technical Requirements
-The technical requirements to establish direct interconnect for Communication Services are as following:
-- The Peer MUST provide own Autonomous System Number (ASN), which MUST be public.+
+To establish a direct interconnect for Communication Services, meet the following requirements:
+
+- The Peer MUST provide its own Autonomous System Number (ASN), which MUST be public.
- The peer MUST have redundant Interconnect (PNI) at each interconnect location to ensure local redundancy.-- The Peer MUST have geo redundancy in place to ensure failover in event of site failures in region/ metro.-- The Peer MUST has the BGP sessions as Active- Active to ensure high availability and faster convergence and should not be provisioned as Primary and backup.
+- The Peer MUST have geo redundancy in place to ensure failover in the event of site failures in region/metro.
+- The Peer MUST configure the BGP sessions as Active-Active to ensure high availability and faster convergence, and the sessions shouldn't be provisioned as Primary and Backup.
- The Peer MUST maintain a 1:1 ratio for Peer peering routers to peering circuits and no rate limiting is applied.-- The Peer MUST supply and advertise their own publicly routable IPv4 address space used by Peer's communications service endpoints (e.g. SBC).
+- The Peer MUST supply and advertise their own publicly routable IPv4 address space used by Peer's communications service endpoints (for example, SBC).
- The Peer MUST supply detail of what class of traffic and endpoints are housed in each advertised subnet. -- The Peer MUST run BGP over Bi-directional Forwarding Detection (BFD) to facilitate sub second route convergence.
+- The Peer MUST run BGP over Bidirectional Forwarding Detection (BFD) to facilitate sub second route convergence.
- All communications infrastructure prefixes are registered in Azure portal and advertised with community string 8075:8007. - The Peer MUST NOT terminate peering on a device running a stateful firewall. -- Microsoft will configure all the interconnect links as LAG (link bundles) by default, so, peer MUST support LACP (Link Aggregation Control Protocol) on the interconnect links.-
-## Establishing Direct Interconnect with Microsoft for Communications Services.
-
-To establish a direct interconnect using Azure Internet peering please follow the below steps:
-
-**1. Associate Peer public ASN to the Azure Subscription:**
-
-In case Peer already associated public ASN to Azure subscription, please ignore this step.
-
-[Associate peer ASN to Azure subscription using the portal - Azure | Microsoft Docs](./howto-subscription-association-portal.md)
+- Microsoft configures all the interconnect links as LAG (link bundles) by default, so, peer MUST support LACP (Link Aggregation Control Protocol) on the interconnect links.
-The next step is to create a Direct peering connection for Peering Service.
+## Establish Direct Interconnect with Microsoft for Communications Services
-> [!NOTE]
-> Once ASN association is approved, email us at peeringservice@microsoft.com with your ASN and subscription ID to associate your subscription with Communications services.
+To establish a direct interconnect with Microsoft using Internet peering, follow these steps:
-**2. Create Direct peering connection for Peering Service:**
+1. **Associate Peer public ASN to the Azure Subscription:** [Associate peer ASN to Azure subscription using the Azure portal](./howto-subscription-association-portal.md). If the Peer has already associated a public ASN to Azure subscription, go to the next step.
-Follow the instructions to [Create or modify a Direct peering using the portal](./howto-direct-portal.md)
+2. **Create Direct peering connection for Peering Service:** [Create a Direct peering using the portal](./howto-direct-portal.md), and make sure you meet the high-availability requirement. In the **Configuration** tab of **Create a Peering**, select the following options:
-Ensure it meets high-availability requirement.
+ | Setting | Value |
+ | | |
+ | Peering type | Select **Direct**. |
+ | Microsoft network | Select **8075 (with Voice)**. |
+ | SKU | Select **Premium Free**. |
-Please ensure you are selecting following options on "Create a Peering" Page:
+ In **Direct Peering Connection**, select following options:
-Peering Type: **Direct**
+ | Setting | Value |
+ | | |
+ | Session Address provider | Select **Microsoft**. |
+ | Use for Peering Services | Select **Enabled**. |
-Microsoft Network: **8075 (with Voice)**
+ > [!NOTE]
+ > When activating Peering Service, ignore the following message: *Do not enable unless you have contacted peering@microsoft.com about becoming a MAPS provider.*
-SKU: **Premium Free**
+1. **Register your prefixes for Optimized Routing:** For optimized routing for your Communication services infrastructure prefixes, register all your prefixes with your peering interconnects.
+ Ensure that the registered prefixes are announced over the direct interconnects established in that location. If the same prefix is announced in multiple peering locations, it's sufficient to register them with just one of the peerings in order to retrieve the unique prefix keys after validation.
-Under "Direct Peering Connection Page" select following options:
+ > [!NOTE]
+ > The Connection State of your peering connections must be **Active** before registering any prefixes.
-Session Address provider: **Microsoft**
+## Register the prefix
-Use for Peering
+1. If you're an Operator Connect Partner, you can see the **Register Prefix** tab on the left pane of your peering resource page.
-> [!NOTE]
-> Ignore the following message while selecting for activating for Peering Services.
-> *Do not enable unless you have contacted peering@microsoft.com about becoming a MAPS provider.*
+ :::image type="content" source="./media/walkthrough-communications-services-partner/registered-prefixes-under-direct-peering.png" alt-text="Screenshot of registered prefixes tab under a peering enabled for Peering Service." :::
-**3. Register your prefixes for Optimized Routing**
-
-For optimized routing for your Communication services infrastructure prefixes, you should register all your prefixes with your peering interconnects.
-
-Please ensure that the prefixes registered are being announced over the direct interconnects established in that location.
-If the same prefix is announced in multiple peering locations, it is sufficient to register them with just one of the peerings in order to retrieve the unique prefix keys after validation.
-
-> [!NOTE]
-> The Connection State of your peering connections must be Active before registering any prefixes.
-
-**Prefix Registration**
+2. Register prefixes to access the activation keys.
-1. If you are an Operator Connect Partner, you would be able to see the "Register Prefix" tab on the left panel of your peering resource page.
+ :::image type="content" source="./media/walkthrough-communications-services-partner/registered-prefixes-blade.png" alt-text="Screenshot of registered prefixes blade with a list of prefixes and keys." :::
- :::image type="content" source="media/registered-prefixes-under-direct-peering.png" alt-text="Screenshot of registered prefixes tab under a peering enabled for Peering Service." :::
+ :::image type="content" source="./media/walkthrough-communications-services-partner/registered-prefix-example.png" alt-text="Screenshot showing a sample prefix being registered." :::
-2. Register prefixes to access the activation keys.
+ :::image type="content" source="./media/walkthrough-communications-services-partner/prefix-after-registration.png" alt-text="Screenshot of registered prefixes blade showing a new prefix added." :::
- :::image type="content" source="media/registered-prefixes-blade.png" alt-text="Screenshot of registered prefixes blade with a list of prefixes and keys." :::
+## Activate the prefix
- :::image type="content" source="media/registered-prefix-example.png" alt-text="Screenshot showing a sample prefix being registered." :::
+In the previous section, you registered the prefix and generated the prefix key. The prefix registration DOES NOT activate the prefix for optimized routing (and doesn't accept <\/24 prefixes). Prefix activation, alignment to the right OC partner, and appropriate interconnect location are requirements for optimized routing (to ensure cold potato routing).
- :::image type="content" source="media/prefix-after-registration.png" alt-text="Screenshot of registered prefixes blade showing a new prefix added." :::
+In this section, you activate the prefix:
-**Prefix Activation**
+1. In the search box at the top of the portal, enter *peering service*. Select **Peering Services** in the search results.
-In the previous steps, you registered the prefix and generated the prefix key. The prefix registration DOES NOT activate the prefix for optimized routing (and will not even accept <\/24 prefixes) and it requires prefix activation and alignment to the right partner (In this case the OC partner) and the appropriate interconnect location (to ensure cold potato routing).
+ :::image type="content" source="./media/walkthrough-communications-services-partner/peering-service-portal-search.png" alt-text="Screenshot shows how to search for Peering Service in the Azure portal.":::
-Below are the steps to activate the prefix.
+1. Select **+ Create** to create a new Peering Service connection.
-1. Look for "Peering Services" resource
+ :::image type="content" source="./media/walkthrough-communications-services-partner/peering-service-list.png" alt-text="Screenshot shows the list of existing Peering Service connections in the Azure portal.":::
- :::image type="content" source="media/peering-service-search.png" alt-text="Screenshot on searching for Peering Service on Azure portal." :::
-
- :::image type="content" source="media/peering-service-list.png" alt-text="Screenshot of a list of existing peering services." :::
+1. In the **Basics** tab, enter or select your subscription, resource group, and Peering Service connection name.
-2. Create a new Peering Service resource
+ :::image type="content" source="./media/walkthrough-communications-services-partner/peering-service-basics.png" alt-text="Screenshot shows the Basics tab of creating a Peering Service connection in the Azure portal.":::
- :::image type="content" source="media/create-peering-service.png" alt-text="Screenshot showing how to create a new peering service." :::
+1. In the **Configuration** tab, provide details on the location, provider, and primary and backup interconnect locations. If the backup location is set to **None**, the traffic fails over to the internet.
-3. Provide details on the location, provider and primary and backup interconnect location. If backup location is set to "none", the traffic will fail over the internet.
+ > [!NOTE]
+ > - If you're an Operator Connect partner, your organization is available as a **Provider**.
+ > - The prefix key should be the same as the one obtained in the [Register the prefix](#register-the-prefix) step.
- If you are an Operator Connect partner, you would be able to see yourself as the provider.
- The prefix key should be the same as the one obtained in the "Prefix Registration" step.
+ :::image type="content" source="./media/walkthrough-communications-services-partner/peering-service-configuration.png" alt-text="Screenshot shows the Configuration tab of creating a Peering Service connection in the Azure portal.":::
- :::image type="content" source="media/peering-service-properties.png" alt-text="Screenshot of the fields to be filled to create a peering service." :::
+1. Select **Review + create**.
- :::image type="content" source="media/peering-service-deployment.png" alt-text="Screenshot showing the validation of peering service resource before deployment." :::
+1. Review the settings, and then select **Create**.
-## FAQs:
+## Frequently asked questions (FAQ)
**Q.** When will my BGP peer come up?
Below are the steps to activate the prefix.
**Q.** I have smaller subnets (</24) for my Communications services. Can I get the smaller subnets also routed?
-**A.** Yes, Microsoft Azure Peering service supports smaller prefix routing also. Please ensure that you are registering the smaller prefixes for routing and the same are announced over the interconnects.
+**A.** Yes, Microsoft Azure Peering Service also supports smaller prefix routing. Ensure that you register the smaller prefixes for routing and that they're announced over the interconnects.
**Q.** What Microsoft routes will we receive over these interconnects?
+**A.** Microsoft announces all of Microsoft's public service prefixes over these interconnects. This ensures that not only Communications services but also other cloud services are accessible from the same interconnect.
+**A.** Microsoft announces all of Microsoft's public service prefixes over these interconnects. This ensures not only Communications but other cloud services are accessible from the same interconnect.
**Q.** Are there any AS path constraints?
-**A.** Yes, a private ASN cannot be in the AS path. For registered prefixes smaller than /24, the AS path must be less than four.
+**A.** Yes, a private ASN can't be in the AS path. For registered prefixes smaller than /24, the AS path must be less than four.
**Q.** I need to set the prefix limit. How many routes will Microsoft announce?
Below are the steps to activate the prefix.
**Q.** What is the minimum link speed for an interconnect?
-**A.** 10Gbps.
+**A.** 10 Gbps.
**Q.** Is the Peer bound to an SLA?
Below are the steps to activate the prefix.
**Q.** What is the advantage of this service over current direct peering or express route?
-**A.** Settlement free and entire path is optimized for voice traffic over Microsoft WAN and convergence is tuned for sub-second with BFD.
+**A.** The service is settlement free, the entire path is optimized for voice traffic over the Microsoft WAN, and convergence is tuned for subsecond failover with BFD.
**Q.** How long does it take to complete the onboarding process?
-**A.** Time will be variable depending on number and location of sites, and if Peer is migrating existing private peerings or establishing new cabling. Carrier should plan for 3+ weeks.
+**A.** Time varies depending on the number and location of sites, and on whether the Peer is migrating existing private peerings or establishing new cabling. The Carrier should plan for 3+ weeks.
**Q.** How is progress communicated outside of the portal status?
Below are the steps to activate the prefix.
**Q.** Can we use APIs for onboarding?
-**A.** Currently there is no API support, and configuration must be performed via web portal.
+**A.** Currently there's no API support, and configuration must be performed via the web portal.
internet-peering Walkthrough Device Maintenance Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/walkthrough-device-maintenance-notification.md
+
+ Title: Device maintenance notification walkthrough
+
+description: Learn how to view current and past peering device maintenance events, and how to create alerts to receive notifications for future events.
++++ Last updated : 06/15/2023++++
+# Azure Peering maintenance notification walkthrough
+
+In this article, you learn how to see active maintenance events and how to create alerts for future ones. Internet Peering partners and Peering Service customers can create alerts to receive notifications by email, voice, SMS, or the Azure mobile app.
+
+## View maintenance events
+
+If you're a partner who has Internet Peering or Peering Service resources in Azure, you receive notifications through the Azure Service Health page. In this section, you learn how to view active maintenance events in the Service Health page.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. In the search box at the top of the portal, enter *service health*. Select **Service Health** in the search results.
+
+ :::image type="content" source="./media/walkthrough-device-maintenance-notification/service-health-portal-search.png" alt-text="Screenshot shows how to search for Service Health in the Azure portal." lightbox="./media/walkthrough-device-maintenance-notification/service-health-portal-search.png":::
+
+1. Select **Planned maintenance** to see active maintenance events. To list only maintenance events for Azure Peering Service, select **Azure Peering Service** in the **Service** filter.
+
+ :::image type="content" source="./media/walkthrough-device-maintenance-notification/planned-maintenance.png" alt-text="Screenshot shows planned maintenance events for Azure Peering Service in the Service Health page in the Azure portal." lightbox="./media/walkthrough-device-maintenance-notification/planned-maintenance.png":::
+
+ The summary tab gives you information about the resource affected by a maintenance event, such as the Azure subscription, region, and peering location.
+
+ Once maintenance is completed, a status update is sent, and you can review the maintenance event in the **Health history** page.
+
+1. Select **Health history** to see past maintenance events.
+
+ :::image type="content" source="./media/walkthrough-device-maintenance-notification/health-history.png" alt-text="Screenshot shows how to view past maintenance events in the Azure portal." lightbox="./media/walkthrough-device-maintenance-notification/health-history.png":::
+
+> [!NOTE]
+> The end time listed for the maintenance is an estimate. Many maintenance events complete before the end time shown in Service Health, but this isn't guaranteed. Future improvements to the maintenance notification service will allow for more accurate maintenance end times.
+
+## Create alerts
+
+Service Health supports forwarding rules, so you can set up your own alerts when maintenance events occur.
+
+1. To set up a forwarding rule, go to the **Planned maintenance** page, and then select **+ Add service health alert**.
+
+ :::image type="content" source="./media/walkthrough-device-maintenance-notification/add-service-health-alert.png" alt-text="Screenshot shows how to add an alert.":::
+
+1. In the **Scope** tab, select the Azure subscription your Internet Peering or Peering Service is associated with. When a maintenance event affects a resource, the alert in Service Health is associated with the Azure subscription ID of the resource.
+
+ :::image type="content" source="./media/walkthrough-device-maintenance-notification/create-alert-rule-scope.png" alt-text="Screenshot shows how to choose the Azure subscription of the resource.":::
+
+1. Select the **Condition** tab, or select the **Next: Condition** button at the bottom of the page.
+
+1. In the **Condition** tab, select the following information:
+
+ | Setting | Value |
+ | | |
+ | Services | Select **Azure Peering Service**. |
+ | Regions | Select the Azure region(s) of the resources for which you want to be notified of planned maintenance events. |
+ | Event types | Select **Planned maintenance**. |
+
+ :::image type="content" source="./media/walkthrough-device-maintenance-notification/create-alert-rule-condition.png" alt-text="Screenshot shows the Condition tab of creating an alert rule in the Azure portal.":::
+
+1. Select the **Actions** tab, or select the **Next: Actions** button.
+
+1. Select **Create action group** to create a new action group. If you previously created an action group, you can use it by selecting **Select action groups**.
+
+ :::image type="content" source="./media/walkthrough-device-maintenance-notification/create-alert-rule-actions.png" alt-text="Screenshot shows the Actions tab before creating a new action group.":::
+
+1. In the **Basics** tab of **Create action group**, enter or select the following information:
+
+ | Setting | Value |
+ | | |
+ | **Project Details** | |
+ | Subscription | Select the Azure subscription that you want to use for the action group. |
+ | Resource group | Select **Create new**. </br> Enter *myResourceGroup* in **Name**. </br> Select **OK**. </br> If you have an existing resource group that you want to use, select it instead of creating a new one. |
+ | Regions | Select **Global**. |
+ | **Instance details** | |
+ | Action group name | Enter a name for the action group. |
+ | Display name | Enter a short display name (up to 12 characters). |
+
+ :::image type="content" source="./media/walkthrough-device-maintenance-notification/create-action-group-basics.png" alt-text="Screenshot shows the Basics tab of creating an action group.":::
+
+1. Select the **Notifications** tab, or select the **Next: Notifications** button. Then, select **Email/SMS message/Push/Voice** for the **Notification type**, and enter a name for this notification. Enter the contact information for the type of notification that you want.
+
+ :::image type="content" source="./media/walkthrough-device-maintenance-notification/create-action-group-notifications-email-sms.png" alt-text="Screenshot shows how to add the required contact information for the notifications.":::
+
+1. Select **Review + create**.
+
+1. Review the settings, and then select **Create**.
+
+1. After creating the action group, you return to the **Actions** tab of **Create an alert rule**. Select the **PeeringMaintenance** action group to edit it or send test notifications.
+
+ :::image type="content" source="./media/walkthrough-device-maintenance-notification/create-alert-rule-actions-group.png" alt-text="Screenshot shows the Actions tab after creating a new action group.":::
+
+1. Select **Test action group** to send test notification(s) to the contact information you previously entered in the action group (to change the contact information, select the pencil icon next to the notification).
+
+ :::image type="content" source="./media/walkthrough-device-maintenance-notification/edit-action-group.png" alt-text="Screenshot shows how to edit an action group in the Azure portal.":::
+
+1. In **Test PeeringMaintenance**, select **Resource health alert** for **Select sample type**, and then select **Test**. Select **Done** after you successfully test the notifications.
+
+ :::image type="content" source="./media/walkthrough-device-maintenance-notification/test-notifications.png" alt-text="Screenshot shows how to send test notifications.":::
+
+1. Select the **Details** tab, or select the **Next: Details** button. Enter or select the following information:
+
+ | Setting | Value |
+ | | |
+ | **Project Details** | |
+ | Subscription | Select the Azure subscription that you want to use for the alert rule. |
+ | Resource group | Select **myResourceGroup**. |
+ | **Alert rule details** | |
+ | Alert rule name | Enter a name for the rule. |
+ | Alert rule description | Enter an optional description. |
+ | **Advanced options** | Select **Enable alert rule upon creation**. |
+
+ :::image type="content" source="./media/walkthrough-device-maintenance-notification/create-alert-rule-details.png" alt-text="Screenshot shows the Details tab of creating an alert rule.":::
+
+1. Select **Review + create**, and finish your alert rule.
+
+1. Review the settings, and then select **Create**.
+
+Azure Peering Service notifications are forwarded to you based on your alert rule whenever maintenance events start, and whenever they're resolved.
+
+For more information on the notification platform of Service Health, see [Create activity log alerts on service notifications using the Azure portal](../service-health/alerts-activity-log-service-notifications-portal.md).
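If you prefer to script the action group used in the procedure above rather than create it in the portal, the following is a minimal, hedged sketch using the Azure SDK for Python. It assumes the `azure-identity` and `azure-mgmt-monitor` packages are installed; verify the model and parameter names (`ActionGroupResource`, `EmailReceiver`) against your installed SDK version. All resource names, the short name, and the email address are placeholders.

```python
# Minimal sketch: create an action group similar to the one in this walkthrough.
# Assumption: azure-identity and azure-mgmt-monitor are installed, and the model
# names below match your azure-mgmt-monitor version. Names and addresses are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import ActionGroupResource, EmailReceiver

client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.action_groups.create_or_update(
    resource_group_name="myResourceGroup",
    action_group_name="PeeringMaintenance",
    action_group=ActionGroupResource(
        location="Global",
        group_short_name="PeeringMtce",  # short display name, 12 characters max
        enabled=True,
        email_receivers=[EmailReceiver(name="NOC", email_address="noc@contoso.com")],
    ),
)
```

You can then attach this action group to the alert rule in the portal, as described in the steps above.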
+
+## Receive notifications for legacy peerings
+
+Peering partners who haven't onboarded their peerings as Azure resources can't receive notifications in Service Health because they don't have subscriptions associated with their peerings. Instead, these partners receive maintenance notifications via their NOC contact email. Partners with legacy peerings don't have to opt in to receive these email notifications; they're sent automatically. The following is an example of a maintenance notification email:
++
+## Next steps
+
+- Learn about the [Prerequisites to set up peering with Microsoft](prerequisites.md).
key-vault Common Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/common-error-codes.md
# Common error codes for Azure Key Vault
-The error codes listed in the following table may be returned by an operation on Azure key vault
+The error codes listed in the following table may be returned by an operation on Azure Key Vault.
| Error code | User message | |--|--|
key-vault Private Link Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/private-link-service.md
You can create a new key vault with the [Azure portal](../general/quick-create-p
After configuring the key vault basics, select the Networking tab and follow these steps:
-1. Select the Private Endpoint radio button in the Networking tab.
-1. Select the "+ Add" Button to add a private endpoint.
+1. Disable public access by toggling off the radio button.
+1. Select the "+ Create a private endpoint" button to add a private endpoint.
- ![Screenshot that shows the 'Networking' tab on the 'Create key vault' page.](../media/private-link-service-1.png)
+ ![Screenshot that shows the 'Networking' tab on the 'Create key vault' page.](../media/private-link-service-10.png)
1. In the "Location" field of the Create Private Endpoint Blade, select the region in which your virtual network is located. 1. In the "Name" field, create a descriptive name that will allow you to identify this private endpoint.
There are four provisioning states:
1. In the search bar, type in "key vaults" 1. Select the key vault that you want to manage. 1. Select the "Networking" tab.
-1. If there are any connections that are pending, you will see a connection listed with "Pending" in the provisioning state.
+1. If there are any connections that are pending, you'll see a connection listed with "Pending" in the provisioning state.
1. Select the private endpoint you wish to approve 1. Select the approve button. 1. If there are any private endpoint connections you want to reject, whether it's a pending request or existing connection, select the connection and select the "Reject" button.
Open the command line and run the following command:
nslookup <your-key-vault-name>.vault.azure.net ```
-If you run the ns lookup command to resolve the IP address of a key vault over a public endpoint, you will see a result that looks like this:
+If you run the ns lookup command to resolve the IP address of a key vault over a public endpoint, you'll see a result that looks like this:
```console c:\ >nslookup <your-key-vault-name>.vault.azure.net
Address: (public IP address)
Aliases: <your-key-vault-name>.vault.azure.net ```
-If you run the ns lookup command to resolve the IP address of a key vault over a private endpoint, you will see a result that looks like this:
+If you run the ns lookup command to resolve the IP address of a key vault over a private endpoint, you'll see a result that looks like this:
```console c:\ >nslookup your_vault_name.vault.azure.net
Aliases: <your-key-vault-name>.vault.azure.net
1. You can check and fix this in Azure portal. Open the Key Vault resource, and select the Networking option. 2. Then select the Private endpoint connections tab. 3. Make sure connection state is Approved and provisioning state is Succeeded.
- 4. You may also navigate to the private endpoint resource and review same properties there, and double-check that the virtual network matches the one you are using.
+ 4. You may also navigate to the private endpoint resource and review same properties there, and double-check that the virtual network matches the one you're using.
* Check to make sure you have a Private DNS Zone resource. 1. You must have a Private DNS Zone resource with the exact name: privatelink.vaultcore.azure.net. 2. To learn how to set this up please see the following link. [Private DNS Zones](../../dns/private-dns-privatednszone.md)
-* Check to make sure the Private DNS Zone is linked to the Virtual Network. This may be the issue if you are still getting the public IP address returned.
- 1. If the Private Zone DNS is not linked to the virtual network, the DNS query originating from the virtual network will return the public IP address of the key vault.
+* Check to make sure the Private DNS Zone is linked to the Virtual Network. This may be the issue if you're still getting the public IP address returned.
+ 1. If the Private Zone DNS isn't linked to the virtual network, the DNS query originating from the virtual network will return the public IP address of the key vault.
2. Navigate to the Private DNS Zone resource in the Azure portal and select the virtual network links option. 4. The virtual network that will perform calls to the key vault must be listed. 5. If it's not there, add it. 6. For detailed steps, see the following document [Link Virtual Network to Private DNS Zone](../../dns/private-dns-getstarted-portal.md#link-the-virtual-network)
-* Check to make sure the Private DNS Zone is not missing an A record for the key vault.
+* Check to make sure the Private DNS Zone isn't missing an A record for the key vault.
1. Navigate to the Private DNS Zone page.
- 2. Select Overview and check if there is an A record with the simple name of your key vault (i.e. fabrikam). Do not specify any suffix.
+ 2. Select Overview and check if there's an A record with the simple name of your key vault (for example, fabrikam). Don't specify any suffix.
3. Make sure you check the spelling, and either create or fix the A record. You can use a TTL of 600 (10 mins). 4. Make sure you specify the correct private IP address.
Aliases: <your-key-vault-name>.vault.azure.net
4. The link will show the Overview of the NIC resource, which contains the property Private IP address. 5. Verify that this is the correct IP address that is specified in the A record.
-* If you are connecting from an on-prem resource to a Key Vault, ensure you have all required conditional forwarders in the on-prem environment enabled.
- 1. Review [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration) for the zones needed, and make sure you have conditional forwarders for both `vault.azure.net` and `vaultcore.azure.net` on your on-prem DNS.
+* If you're connecting from an on-premises resource to a Key Vault, ensure you have all required conditional forwarders in the on-premises environment enabled.
+ 1. Review [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration) for the zones needed, and make sure you have conditional forwarders for both `vault.azure.net` and `vaultcore.azure.net` on your on-premises DNS.
2. Ensure that you have conditional forwarders for those zones that route to an [Azure Private DNS Resolver](../../dns/dns-private-resolver-overview.md) or some other DNS platform with access to Azure resolution. ## Limitations and Design Considerations
key-vault Rbac Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-guide.md
To add role assignments, you must have `Microsoft.Authorization/roleAssignments/
1. Enable Azure RBAC permissions on new key vault:
- ![Enable Azure RBAC permissions - new vault](../media/rbac/image-1.png)
+ ![Enable Azure RBAC permissions - new vault](../media/rbac/new-vault.png)
2. Enable Azure RBAC permissions on existing key vault:
- ![Enable Azure RBAC permissions - existing vault](../media/rbac/image-2.png)
+ ![Enable Azure RBAC permissions - existing vault](../media/rbac/existing-vault.png)
> [!IMPORTANT] > Setting Azure RBAC permission model invalidates all access policies permissions. It can cause outages when equivalent Azure roles aren't assigned.
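If you'd rather enable the Azure RBAC permission model with a script instead of the portal steps above, here's a minimal, hedged sketch using the Azure SDK for Python. It assumes the `azure-identity` and `azure-mgmt-keyvault` packages are installed; verify the model names (`VaultPatchParameters`, `VaultPatchProperties`) against your installed SDK version, and note that all resource names are placeholders.

```python
# Minimal sketch: switch an existing key vault to the Azure RBAC permission model.
# Assumption: azure-identity and azure-mgmt-keyvault are installed and the signed-in
# identity has permission to update the vault. Resource names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.keyvault import KeyVaultManagementClient
from azure.mgmt.keyvault.models import VaultPatchParameters, VaultPatchProperties

client = KeyVaultManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.vaults.update(
    resource_group_name="<resource-group>",
    vault_name="<vault-name>",
    parameters=VaultPatchParameters(
        properties=VaultPatchProperties(enable_rbac_authorization=True)
    ),
)
```

Keep the warning above in mind: switching the permission model invalidates existing access policies, so assign equivalent Azure roles first.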
kubernetes-fleet Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/overview.md
Title: "Overview of Azure Kubernetes Fleet Manager (preview)"
Previously updated : 08/29/2022 Last updated : 06/12/2023
Fleet supports the following scenarios:
* Create Kubernetes resource objects on the Fleet resource's cluster and control their propagation to all or a subset of all member clusters.
-* Export a service from one member cluster to the Fleet resource. Once successfully exported, the service and its endpoints are synced to the hub, which other member clusters (or any Fleet resource-scoped load balancer) can consume.
+* Load balance incoming L4 traffic across service endpoints on multiple clusters.
+
+* Orchestrate Kubernetes version and node image upgrades across multiple clusters by using update runs, stages, and groups.
[!INCLUDE [preview features note](./includes/preview/preview-callout.md)] ## Next steps
-[Create an Azure Kubernetes Fleet Manager resource and group multiple AKS clusters as member clusters of the fleet](./quickstart-create-fleet-and-members.md).
+[Create an Azure Kubernetes Fleet Manager resource and group multiple AKS clusters as member clusters of the fleet](./quickstart-create-fleet-and-members.md).
load-balancer Cross Region Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/cross-region-overview.md
Cross-region load balancer routes the traffic to the appropriate regional load b
* NAT64 translation isn't supported at this time. The frontend and backend IPs must be of the same type (v4 or v6).
-* UDP traffic isn't supported on Cross-region Load Balancer.
+* UDP traffic isn't supported on Cross-region Load Balancer.
+* Outbound rules aren't supported on Cross-region Load Balancer. For outbound connections, use [outbound rules](./outbound-rules.md) on the regional load balancer or a [NAT gateway](https://learn.microsoft.com/azure/nat-gateway/nat-overview).
logic-apps Logic Apps Using Sap Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-using-sap-connector.md
The preview SAP built-in connector trigger named **Register SAP RFC server for t
> When you use a Premium-level ISE, use the ISE-native SAP connector, not the SAP managed connector, > which doesn't natively run in an ISE. For more information, review the [ISE prerequisites](#ise-prerequisites).
+* By default, the preview SAP built-in connector operations are stateless. To run these operations in stateful mode, see [Enable stateful mode for stateless built-in connectors](../connectors/enable-stateful-affinity-built-in-connectors.md).
+ * To use either the SAP managed connector trigger named **When a message is received from SAP** or the SAP built-in trigger named **Register SAP RFC server for trigger**, complete the following tasks: * Set up your SAP gateway security permissions or Access Control List (ACL). In the **Gateway Monitor** (T-Code SMGW) dialog box, which shows the **secinfo** and **reginfo** files, open the **Goto** menu, and select **Expert Functions** > **External Security** > **Maintenance of ACL Files**.
For a Standard workflow in single-tenant Azure Logic Apps, use the preview SAP *
- **sapnco.dll** - **sapnco_utils.dll**
-1. To SNC from SAP, you need to download the following files and have them ready to upload to your logic app resource. For more information, see [SNC prerequisites](#snc-prerequisites-standard):
+1. For SNC from SAP, you need to download the following files and have them ready to upload to your logic app resource. For more information, see [SNC prerequisites](#snc-prerequisites-standard):
- **sapcrypto.dll** - **sapgenpse.exe**
machine-learning Concept Manage Ml Pitfalls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-manage-ml-pitfalls.md
Title: Avoid overfitting & imbalanced data with AutoML
+ Title: Avoid overfitting & imbalanced data with Automated machine learning
-description: Identify and manage common pitfalls of ML models with Azure Machine Learning's automated machine learning solutions.
+description: Identify and manage common pitfalls of ML models with Azure Machine Learning's Automated ML solutions.
Previously updated : 10/21/2021 Last updated : 06/15/2023
-# Prevent overfitting and imbalanced data with automated machine learning
+# Prevent overfitting and imbalanced data with Automated ML
-Overfitting and imbalanced data are common pitfalls when you build machine learning models. By default, Azure Machine Learning's automated machine learning provides charts and metrics to help you identify these risks, and implements best practices to help mitigate them.
+Overfitting and imbalanced data are common pitfalls when you build machine learning models. By default, Azure Machine Learning's Automated ML provides charts and metrics to help you identify these risks, and implements best practices to help mitigate them.
## Identify overfitting
-Overfitting in machine learning occurs when a model fits the training data too well, and as a result can't accurately predict on unseen test data. In other words, the model has simply memorized specific patterns and noise in the training data, but is not flexible enough to make predictions on real data.
+Overfitting in machine learning occurs when a model fits the training data too well, and as a result can't accurately predict on unseen test data. In other words, the model has memorized specific patterns and noise in the training data, but is not flexible enough to make predictions on real data.
Consider the following trained models and their corresponding train and test accuracies.
Consider the following trained models and their corresponding train and test acc
| B | 87% | 87% | | C | 99.9% | 45% |
-Considering model **A**, there is a common misconception that if test accuracy on unseen data is lower than training accuracy, the model is overfitted. However, test accuracy should always be less than training accuracy, and the distinction for overfit vs. appropriately fit comes down to *how much* less accurate.
+Consider model **A**: there's a common misconception that if test accuracy on unseen data is lower than training accuracy, the model is overfitted. However, test accuracy should always be less than training accuracy, and the distinction between an overfit and an appropriately fit model comes down to *how much* less accurate.
-When comparing models **A** and **B**, model **A** is a better model because it has higher test accuracy, and although the test accuracy is slightly lower at 95%, it is not a significant difference that suggests overfitting is present. You wouldn't choose model **B** simply because the train and test accuracies are closer together.
+Comparing models **A** and **B**, model **A** is the better model because it has higher test accuracy. Although its test accuracy is slightly lower at 95%, that difference isn't significant enough to suggest overfitting. You wouldn't choose model **B** just because its train and test accuracies are closer together.
-Model **C** represents a clear case of overfitting; the training accuracy is very high but the test accuracy isn't anywhere near as high. This distinction is subjective, but comes from knowledge of your problem and data, and what magnitudes of error are acceptable.
+Model **C** represents a clear case of overfitting; the training accuracy is high but the test accuracy isn't anywhere near as high. This distinction is subjective, but comes from knowledge of your problem and data, and what magnitudes of error are acceptable.
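To make the train/test gap in the table above concrete, here's a minimal, illustrative scikit-learn sketch (not part of Automated ML); the dataset and model are placeholders.

```python
# Minimal sketch: compare training vs. test accuracy to spot overfitting.
# Assumption: scikit-learn is installed; dataset and model are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

train_acc = model.score(X_train, y_train)
test_acc = model.score(X_test, y_test)
print(f"train accuracy: {train_acc:.3f}, test accuracy: {test_acc:.3f}")
# A small gap (models A and B) is expected; a large gap (model C) suggests overfitting.
```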
## Prevent overfitting
-In the most egregious cases, an overfitted model assumes that the feature value combinations seen during training will always result in the exact same output for the target.
+In the most egregious cases, an overfitted model assumes that the feature value combinations seen during training always result in the exact same output for the target.
-The best way to prevent overfitting is to follow ML best-practices including:
+The best way to prevent overfitting is to follow ML best practices, including:
* Using more training data, and eliminating statistical bias * Preventing target leakage
The best way to prevent overfitting is to follow ML best-practices including:
* **Model complexity limitations** * **Cross-validation**
-In the context of automated ML, the first three items above are **best-practices you implement**. The last three bolded items are **best-practices automated ML implements** by default to protect against overfitting. In settings other than automated ML, all six best-practices are worth following to avoid overfitting models.
+In the context of Automated ML, the first three items are best practices that you implement. The last three bolded items are **best practices Automated ML implements** by default to protect against overfitting. In settings other than Automated ML, all six best practices are worth following to avoid overfitting models.
## Best practices you implement ### Use more data
-Using **more data** is the simplest and best possible way to prevent overfitting, and as an added bonus typically increases accuracy. When you use more data, it becomes harder for the model to memorize exact patterns, and it is forced to reach solutions that are more flexible to accommodate more conditions. It's also important to recognize **statistical bias**, to ensure your training data doesn't include isolated patterns that won't exist in live-prediction data. This scenario can be difficult to solve, because there may not be overfitting between your train and test sets, but there may be overfitting present when compared to live test data.
+Using more data is the simplest and best possible way to prevent overfitting, and as an added bonus typically increases accuracy. When you use more data, it becomes harder for the model to memorize exact patterns, and it is forced to reach solutions that are more flexible to accommodate more conditions. It's also important to recognize statistical bias, to ensure your training data doesn't include isolated patterns that don't exist in live-prediction data. This scenario can be difficult to solve, because there may be no overfitting between your train and test sets, yet overfitting can appear when the model is compared to live-prediction data.
### Prevent target leakage
-**Target leakage** is a similar issue, where you may not see overfitting between train/test sets, but rather it appears at prediction-time. Target leakage occurs when your model "cheats" during training by having access to data that it shouldn't normally have at prediction-time. For example, if your problem is to predict on Monday what a commodity price will be on Friday, but one of your features accidentally included data from Thursdays, that would be data the model won't have at prediction-time since it cannot see into the future. Target leakage is an easy mistake to miss, but is often characterized by abnormally high accuracy for your problem. If you are attempting to predict stock price and trained a model at 95% accuracy, there is likely target leakage somewhere in your features.
+Target leakage is a similar issue, where you may not see overfitting between train/test sets, but rather it appears at prediction-time. Target leakage occurs when your model "cheats" during training by having access to data that it shouldn't normally have at prediction-time. For example, if you're predicting on Monday what a commodity's price will be on Friday, but one of your features accidentally includes data from Thursday, the model is using data it won't have at prediction time because it can't see into the future. Target leakage is an easy mistake to miss, but is often characterized by abnormally high accuracy for your problem. If you're attempting to predict stock price and trained a model at 95% accuracy, there's likely target leakage somewhere in your features.
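As an illustration only (not an Automated ML feature), one quick sanity check is to look for features that are almost perfectly correlated with the target, which often signals leakage. The column names in this sketch are hypothetical.

```python
# Minimal sketch: flag features that correlate almost perfectly with the target,
# a common symptom of target leakage. Column names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "monday_price": [10.0, 11.2, 9.8, 10.5, 10.9],
    "thursday_price": [12.1, 13.0, 11.9, 12.4, 12.8],  # leaks future information
    "friday_price": [12.0, 13.1, 11.8, 12.5, 12.9],    # the prediction target
})

target = "friday_price"
correlations = df.corr()[target].drop(target).abs()
print("Possible leakage candidates:\n", correlations[correlations > 0.95])
```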
### Use fewer features
-**Removing features** can also help with overfitting by preventing the model from having too many fields to use to memorize specific patterns, thus causing it to be more flexible. It can be difficult to measure quantitatively, but if you can remove features and retain the same accuracy, you have likely made the model more flexible and have reduced the risk of overfitting.
+Removing features can also help with overfitting by preventing the model from having too many fields to use to memorize specific patterns, thus causing it to be more flexible. It can be difficult to measure quantitatively, but if you can remove features and retain the same accuracy, you have likely made the model more flexible and have reduced the risk of overfitting.
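Outside of Automated ML, one simple way to experiment with fewer features is univariate selection; the following scikit-learn sketch is illustrative, and the dataset and `k` value are placeholders.

```python
# Minimal sketch: keep only the k most informative features and check test accuracy.
# Assumption: scikit-learn is installed; dataset and k are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

selector = SelectKBest(f_classif, k=10).fit(X_train, y_train)
model = LogisticRegression(max_iter=5000).fit(selector.transform(X_train), y_train)

print("test accuracy with 10 features:", model.score(selector.transform(X_test), y_test))
```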
-## Best practices automated ML implements
+## Best practices Automated ML implements
### Regularization and hyperparameter tuning
-**Regularization** is the process of minimizing a cost function to penalize complex and overfitted models. There are different types of regularization functions, but in general they all penalize model coefficient size, variance, and complexity. Automated ML uses L1 (Lasso), L2 (Ridge), and ElasticNet (L1 and L2 simultaneously) in different combinations with different model hyperparameter settings that control overfitting. In simple terms, automated ML will vary how much a model is regulated and choose the best result.
+**Regularization** is the process of minimizing a cost function to penalize complex and overfitted models. There are different types of regularization functions, but in general they all penalize model coefficient size, variance, and complexity. Automated ML uses L1 (Lasso), L2 (Ridge), and ElasticNet (L1 and L2 simultaneously) in different combinations with different model hyperparameter settings that control overfitting. Automated ML varies how much a model is regularized and chooses the best result.
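For intuition only, here's a minimal scikit-learn sketch of the three penalty types mentioned (L1, L2, and ElasticNet) on a synthetic regression problem; it doesn't reflect how Automated ML applies them internally.

```python
# Minimal sketch: L1 (Lasso), L2 (Ridge), and ElasticNet regularization on synthetic data.
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet, Lasso, Ridge
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=50, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = [("L1 (Lasso)", Lasso(alpha=1.0)),
          ("L2 (Ridge)", Ridge(alpha=1.0)),
          ("ElasticNet", ElasticNet(alpha=1.0, l1_ratio=0.5))]

for name, model in models:
    model.fit(X_train, y_train)
    print(f"{name}: test R^2 = {model.score(X_test, y_test):.3f}")
```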
### Model complexity limitations
-Automated ML also implements explicit **model complexity limitations** to prevent overfitting. In most cases this implementation is specifically for decision tree or forest algorithms, where individual tree max-depth is limited, and the total number of trees used in forest or ensemble techniques are limited.
+Automated ML also implements explicit model complexity limitations to prevent overfitting. In most cases, this implementation is specifically for decision tree or forest algorithms, where individual tree max-depth is limited, and the total number of trees used in forest or ensemble techniques is limited.
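As a rough analogy outside of Automated ML, limiting tree depth and forest size in scikit-learn looks like the following sketch; the specific limits are placeholders.

```python
# Minimal sketch: cap individual tree depth and the number of trees to limit complexity.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# max_depth limits each tree; n_estimators limits the size of the forest.
model = RandomForestClassifier(max_depth=4, n_estimators=50, random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", round(model.score(X_test, y_test), 3))
```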
### Cross-validation
-**Cross-validation (CV)** is the process of taking many subsets of your full training data and training a model on each subset. The idea is that a model could get "lucky" and have great accuracy with one subset, but by using many subsets the model won't achieve this high accuracy every time. When doing CV, you provide a validation holdout dataset, specify your CV folds (number of subsets) and automated ML will train your model and tune hyperparameters to minimize error on your validation set. One CV fold could be overfitted, but by using many of them it reduces the probability that your final model is overfitted. The tradeoff is that CV does result in longer training times and thus greater cost, because instead of training a model once, you train it once for each *n* CV subsets.
+Cross-validation (CV) is the process of taking many subsets of your full training data and training a model on each subset. The idea is that a model could get "lucky" and have great accuracy with one subset, but by using many subsets the model won't achieve this high accuracy every time. When doing CV, you provide a validation holdout dataset, specify your CV folds (number of subsets), and Automated ML trains your model and tunes hyperparameters to minimize error on your validation set. One CV fold could be overfitted, but by using many of them, the probability that your final model is overfitted is reduced. The tradeoff is that CV results in longer training times and greater cost, because you train the model once for each of the *n* CV subsets.
> [!NOTE]
-> Cross-validation is not enabled by default; it must be configured in automated ML settings. However, after cross-validation is configured and a validation data set has been provided, the process is automated for you. Learn more about [cross validation configuration in Auto ML (SDK v1)](./v1/how-to-configure-cross-validation-data-splits.md?view=azureml-api-1&preserve-view=true)
+> Cross-validation isn't enabled by default; it must be configured in your automated ML settings. However, after cross-validation is configured and a validation data set has been provided, the process is automated for you.
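Conceptually, k-fold cross-validation looks like the following scikit-learn sketch (illustrative only; with automated ML you only specify the number of folds and the rest is handled for you):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# Train and score the model on five different train/validation splits;
# the spread of the scores shows how sensitive the model is to the split.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"mean={scores.mean():.3f}, std={scores.std():.3f}")
```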
<a name="imbalance"></a>
Automated ML also implements explicit **model complexity limitations** to preven
Imbalanced data is commonly found in data for machine learning classification scenarios, and refers to data that contains a disproportionate ratio of observations in each class. This imbalance can lead to a falsely perceived positive effect of a model's accuracy, because the input data has bias towards one class, which causes the trained model to mimic that bias.
-In addition, automated ML jobs generate the following charts automatically, which can help you understand the correctness of the classifications of your model, and identify models potentially impacted by imbalanced data.
+In addition, Automated ML jobs generate the following charts automatically. These charts help you understand the correctness of the classifications of your model, and identify models potentially impacted by imbalanced data.
Chart| Description |
Chart| Description
## Handle imbalanced data
-As part of its goal of simplifying the machine learning workflow, **automated ML has built in capabilities** to help deal with imbalanced data such as,
+As part of its goal of simplifying the machine learning workflow, Automated ML has built-in capabilities to help deal with imbalanced data, such as:
-- A **weight column**: automated ML will create a column of weights as input to cause rows in the data to be weighted up or down, which can be used to make a class more or less "important".
+- A weight column: Automated ML creates a column of weights as input to cause rows in the data to be weighted up or down, which can be used to make a class more or less "important."
-- The algorithms used by automated ML detect imbalance when the number of samples in the minority class is equal to or fewer than 20% of the number of samples in the majority class, where minority class refers to the one with fewest samples and majority class refers to the one with most samples. Subsequently, AutoML will run an experiment with sub-sampled data to check if using class weights would remedy this problem and improve performance. If it ascertains a better performance through this experiment, then this remedy is applied.
+- The algorithms used by Automated ML detect imbalance when the number of samples in the minority class is equal to or fewer than 20% of the number of samples in the majority class, where minority class refers to the one with fewest samples and majority class refers to the one with most samples. Subsequently, Automated ML runs an experiment with subsampled data to check if using class weights would remedy this problem and improve performance. If the experiment shows better performance, then this remedy is applied.
- Use a performance metric that deals better with imbalanced data. For example, the AUC_weighted is a primary metric that calculates the contribution of every class based on the relative number of samples representing that class, and is therefore more robust against imbalance.
-The following techniques are additional options to handle imbalanced data **outside of automated ML**.
+The following techniques are additional options to handle imbalanced data outside of Automated ML.
- Resampling to even out the class imbalance, either by up-sampling the smaller classes or down-sampling the larger classes. These methods require expertise to process and analyze; see the sketch below.
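For example, a minimal up-sampling sketch with pandas and scikit-learn (illustrative only; the `label` column and the toy data are hypothetical):

```python
import pandas as pd
from sklearn.utils import resample

# Hypothetical dataset where class 1 is the minority class.
df = pd.DataFrame({"feature": range(12), "label": [0] * 10 + [1] * 2})

majority = df[df["label"] == 0]
minority = df[df["label"] == 1]

# Up-sample the minority class with replacement to match the majority class size.
minority_upsampled = resample(minority, replace=True, n_samples=len(majority), random_state=42)
balanced = pd.concat([majority, minority_upsampled])
print(balanced["label"].value_counts())
```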
The following techniques are additional options to handle imbalanced data **outs
## Next steps
-See examples and learn how to build models using automated machine learning:
+See examples and learn how to build models using Automated ML:
-+ Follow the [Tutorial: Train an object detection model with AutoML and Python](tutorial-auto-train-image-models.md).
++ Follow the [Tutorial: Train an object detection model with automated machine learning and Python](tutorial-auto-train-image-models.md). + Configure the settings for automatic training experiment: + In Azure Machine Learning studio, [use these steps](how-to-use-automated-ml-for-ml-models.md).
machine-learning Concept Soft Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-soft-delete.md
Last updated 11/07/2022
-monikerRange: 'azureml-api-2'
+monikerRange: 'azureml-api-2 || azureml-api-1'
#Customer intent: As an IT pro, understand how to enable data protection capabilities, to protect against accidental deletion.
During the retention period, soft deleted workspaces can be recovered or permane
The default deletion behavior when deleting a workspace is soft delete. Optionally, you may override the soft delete behavior by permanently deleting your workspace. Permanently deleting a workspace ensures workspace data is immediately deleted. Use this option to meet related compliance requirements, or whenever you require a workspace name to be reused immediately after deletion. This may be useful in dev/test scenarios where you want to create and later delete a workspace.
-When deleting a workspace from the Azure Portal, check __Delete the workspace permanently__. You can permanently delete only one workspace at a time, and not using a batch operation.
+When deleting a workspace from the Azure portal, check __Delete the workspace permanently__. You can permanently delete only one workspace at a time; batch operations aren't supported.
:::image type="content" source="./media/concept-soft-delete/soft-delete-permanently-delete.png" alt-text="Screenshot of the delete workspace form in the portal.":::
-If you are using the [Azure Machine Learning SDK or CLI](https://learn.microsoft.com/python/api/azure-ai-ml/azure.ai.ml.operations.workspaceoperations#azure-ai-ml-operations-workspaceoperations-begin-delete), you can set the `permanently_delete` flag.
+> [!TIP]
+> The v1 SDK and CLI don't provide functionality to override the default soft-delete behavior. To override the default behavior from the SDK or CLI, use the v2 versions. For more information, see the [CLI & SDK v2](concept-v2.md) article or the [v2 version of this article](concept-soft-delete.md?view=azureml-api-2&preserve-view=true#deleting-a-workspace).
+
+If you are using the [Azure Machine Learning SDK or CLI](/python/api/azure-ai-ml/azure.ai.ml.operations.workspaceoperations#azure-ai-ml-operations-workspaceoperations-begin-delete), you can set the `permanently_delete` flag.
```python
from azure.ai.ml import MLClient

# ml_client is assumed to be an MLClient scoped to your subscription and resource group;
# <workspace-name> is a placeholder. permanently_delete=True skips the soft-delete retention period.
result = ml_client.workspaces.begin_delete(
    name="<workspace-name>", delete_dependent_resources=False, permanently_delete=True
)
print(result)
```

Once permanently deleted, workspace data can no longer be recovered. Permanent deletion of workspace data is also triggered when the soft delete retention period expires.

## Manage soft deleted workspaces
machine-learning Designer Accessibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/designer-accessibility.md
The following keyboard actions help you navigate a pipeline graph:
- Tab: Move to first node > each port of the node > next node. - Up/down arrow keys: Move to next or previous node by its position in the graph. - Ctrl+G when focus is on a port: Go to the connected port. When there's more than one connection from one port, open a list view to select the target. Use the Esc key to go to the selected target.
+- Ctrl+Shift+H: Move focus to the canvas.
## Edit the pipeline graph
machine-learning How To Auto Train Nlp Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-nlp-models.md
Previously updated : 03/15/2022 Last updated : 06/15/2023 #Customer intent: I'm a data scientist with ML knowledge in the natural language processing space, looking to build ML models using language specific data in Azure Machine Learning with full control of the model algorithm, hyperparameters, and training and deployment environments.
Last updated 03/15/2022
In this article, you learn how to train natural language processing (NLP) models with [automated ML](concept-automated-ml.md) in Azure Machine Learning. You can create NLP models with automated ML via the Azure Machine Learning Python SDK v2 or the Azure Machine Learning CLI v2.
-Automated ML supports NLP which allows ML professionals and data scientists to bring their own text data and build custom models for tasks such as, multi-class text classification, multi-label text classification, and named entity recognition (NER).
+Automated ML supports NLP, which allows ML professionals and data scientists to bring their own text data and build custom models for NLP tasks. NLP tasks include multi-class text classification, multi-label text classification, and named entity recognition (NER).
-You can seamlessly integrate with the [Azure Machine Learning data labeling](how-to-create-text-labeling-projects.md) capability to label your text data or bring your existing labeled data. Automated ML provides the option to use distributed training on multi-GPU compute clusters for faster model training. The resulting model can be operationalized at scale by leveraging Azure Machine Learning's MLOps capabilities.
+You can seamlessly integrate with the [Azure Machine Learning data labeling](how-to-create-text-labeling-projects.md) capability to label your text data or bring your existing labeled data. Automated ML provides the option to use distributed training on multi-GPU compute clusters for faster model training. The resulting model can be operationalized at scale using Azure Machine Learning's MLOps capabilities.
## Prerequisites
You can seamlessly integrate with the [Azure Machine Learning data labeling](how
* Azure subscription. If you don't have an Azure subscription, sign up to try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
-* An Azure Machine Learning workspace with a GPU training compute. To create the workspace, see [Create workspace resources](quickstart-create-resources.md). See [GPU optimized virtual machine sizes](../virtual-machines/sizes-gpu.md) for more details of GPU instances provided by Azure.
+* An Azure Machine Learning workspace with a GPU training compute. To create the workspace, see [Create workspace resources](quickstart-create-resources.md). For more information about the GPU instances provided by Azure, see [GPU optimized virtual machine sizes](../virtual-machines/sizes-gpu.md).
> [!WARNING] > Support for multilingual models and the use of models with longer max sequence length is necessary for several NLP use cases, such as non-english datasets and longer range documents. As a result, these scenarios may require higher GPU memory for model training to succeed, such as the NC_v3 series or the ND series.
You can seamlessly integrate with the [Azure Machine Learning data labeling](how
* Azure subscription. If you don't have an Azure subscription, sign up to try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
-* An Azure Machine Learning workspace with a GPU training compute. To create the workspace, see [Create workspace resources](quickstart-create-resources.md). See [GPU optimized virtual machine sizes](../virtual-machines/sizes-gpu.md) for more details of GPU instances provided by Azure.
+* An Azure Machine Learning workspace with a GPU training compute. To create the workspace, see [Create workspace resources](quickstart-create-resources.md). For more information about the GPU instances provided by Azure, see [GPU optimized virtual machine sizes](../virtual-machines/sizes-gpu.md).
> [!WARNING] > Support for multilingual models and the use of models with longer max sequence length is necessary for several NLP use cases, such as non-english datasets and longer range documents. As a result, these scenarios may require higher GPU memory for model training to succeed, such as the NC_v3 series or the ND series.
You can seamlessly integrate with the [Azure Machine Learning data labeling](how
* The Azure Machine Learning Python SDK v2 installed. To install the SDK you can either,
- * Create a compute instance, which automatically installs the SDK and is pre-configured for ML workflows. See [Create and manage an Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md) for more information.
+ * Create a compute instance, which automatically installs the SDK and is preconfigured for ML workflows. See [Create and manage an Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md) for more information.
* [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.
Determine what NLP task you want to accomplish. Currently, automated ML supports
Task |AutoML job syntax| Description -|-|
-Multi-class text classification | CLI v2: `text_classification` <br> SDK v2: `text_classification()`| There are multiple possible classes and each sample can be classified as exactly one class. The task is to predict the correct class for each sample. <br> <br> For example, classifying a movie script as "Comedy" or "Romantic".
-Multi-label text classification | CLI v2: `text_classification_multilabel` <br> SDK v2: `text_classification_multilabel()`| There are multiple possible classes and each sample can be assigned any number of classes. The task is to predict all the classes for each sample<br> <br> For example, classifying a movie script as "Comedy", or "Romantic", or "Comedy and Romantic".
+Multi-class text classification | CLI v2: `text_classification` <br> SDK v2: `text_classification()`| There are multiple possible classes and each sample can be classified as exactly one class. The task is to predict the correct class for each sample. <br> <br> For example, classifying a movie script as "Comedy" or "Romantic".
+Multi-label text classification | CLI v2: `text_classification_multilabel` <br> SDK v2: `text_classification_multilabel()`| There are multiple possible classes and each sample can be assigned any number of classes. The task is to predict all the classes for each sample. <br> <br> For example, classifying a movie script as "Comedy," "Romantic," or "Comedy and Romantic".
Named Entity Recognition (NER)| CLI v2:`text_ner` <br> SDK v2: `text_ner()`| There are multiple possible tags for tokens in sequences. The task is to predict the tags for all the tokens for each sequence. <br> <br> For example, extracting domain-specific entities from unstructured text, such as contracts or financial documents. ## Thresholding
-Thresholding is the multi-label feature that allows users to pick the threshold above which the predicted probabilities will lead to a positive label. Lower values allow for more labels, which is better when users care more about recall, but this option could lead to more false positives. Higher values allow fewer labels and hence better for users who care about precision, but this option could lead to more false negatives.
+Thresholding is the multi-label feature that allows users to pick the threshold above which the predicted probabilities lead to a positive label. Lower values allow for more labels, which is better when users care more about recall, but this option could lead to more false positives. Higher values allow fewer labels, which is better for users who care about precision, but this option could lead to more false negatives.
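As a minimal illustration of the mechanics only (not the AutoML NLP implementation; the probabilities are made up):

```python
import numpy as np

# Hypothetical predicted probabilities for three labels on two samples.
probs = np.array([[0.92, 0.40, 0.05],
                  [0.15, 0.75, 0.60]])

threshold = 0.5
# A label is assigned wherever its probability exceeds the threshold;
# raising the threshold yields fewer labels, lowering it yields more.
predicted_labels = (probs > threshold).astype(int)
print(predicted_labels)  # [[1 0 0] [0 1 1]]
```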
## Preparing data
-For NLP experiments in automated ML, you can bring your data in `.csv` format for multi-class and multi-label classification tasks. For NER tasks, two-column `.txt` files that use a space as the separator and adhere to the CoNLL format are supported. The following sections provide additional detail for the data format accepted for each task.
+For NLP experiments in automated ML, you can bring your data in `.csv` format for multi-class and multi-label classification tasks. For NER tasks, two-column `.txt` files that use a space as the separator and adhere to the CoNLL format are supported. The following sections provide details for the data format accepted for each task.
### Multi-class
rings O
### Data validation
-Before training, automated ML applies data validation checks on the input data to ensure that the data can be preprocessed correctly. If any of these checks fail, the run fails with the relevant error message. The following are the requirements to pass data validation checks for each task.
+Before a model trains, automated ML applies data validation checks on the input data to ensure that the data can be preprocessed correctly. If any of these checks fail, the run fails with the relevant error message. The following are the requirements to pass data validation checks for each task.
> [!Note] > Some data validation checks are applicable to both the training and the validation set, whereas others are applicable only to the training set. If the test dataset could not pass the data validation, that means that automated ML couldn't capture it and there is a possibility of model inference failure, or a decline in model performance.
Task | Data validation check
All tasks | At least 50 training samples are required Multi-class and Multi-label | The training data and validation data must have <br> - The same set of columns <br>- The same order of columns from left to right <br>- The same data type for columns with the same name <br>- At least two unique labels <br> - Unique column names within each dataset (For example, the training set can't have multiple columns named **Age**) Multi-class only | None
-Multi-label only | - The label column format must be in [accepted format](#multi-label) <br> - At least one sample should have 0 or 2+ labels, otherwise it should be a `multiclass` task <br> - All labels should be in `str` or `int` format, with no overlapping. You should not have both label `1` and label `'1'`
-NER only | - The file should not start with an empty line <br> - Each line must be an empty line, or follow format `{token} {label}`, where there is exactly one space between the token and the label and no white space after the label <br> - All labels must start with `I-`, `B-`, or be exactly `O`. Case sensitive <br> - Exactly one empty line between two samples <br> - Exactly one empty line at the end of the file
+Multi-label only | - The label column format must be in [accepted format](#multi-label) <br> - At least one sample should have 0 or 2+ labels, otherwise it should be a `multiclass` task <br> - All labels should be in `str` or `int` format, with no overlapping. You shouldn't have both label `1` and label `'1'`
+NER only | - The file shouldn't start with an empty line <br> - Each line must be an empty line, or follow format `{token} {label}`, where there's exactly one space between the token and the label and no white space after the label <br> - All labels must start with `I-`, `B-`, or be exactly `O`. Case sensitive <br> - Exactly one empty line between two samples <br> - Exactly one empty line at the end of the file
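For instance, a minimal pre-check of the NER file rules listed above might look like the following sketch (illustrative only, not the validation that automated ML runs):

```python
import re

# Each non-empty line must be "{token} {label}" with exactly one space,
# and the label must be O or start with I- or B- (case sensitive).
LINE_PATTERN = re.compile(r"^\S+ (O|I-\S+|B-\S+)$")

def find_invalid_lines(path: str) -> list[int]:
    """Return 1-based line numbers that violate the basic token/label format."""
    bad = []
    with open(path, encoding="utf-8") as f:
        for number, line in enumerate(f, start=1):
            stripped = line.rstrip("\n")
            if stripped and not LINE_PATTERN.match(stripped):
                bad.append(number)
    return bad
```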
## Configure experiment Automated ML's NLP capability is triggered through task specific `automl` type jobs, which is the same workflow for submitting automated ML experiments for classification, regression and forecasting tasks. You would set parameters as you would for those experiments, such as `experiment_name`, `compute_name` and data inputs. However, there are key differences:
-* You can ignore `primary_metric`, as it is only for reporting purposes. Currently, automated ML only trains one model per run for NLP and there is no model selection.
+* You can ignore `primary_metric`, as it's only for reporting purposes. Currently, automated ML only trains one model per run for NLP and there is no model selection.
* The `label_column_name` parameter is only required for multi-class and multi-label text classification tasks. * If more than 10% of the samples in your dataset contain more than 128 tokens, it's considered long range.
- * In order to use the long range text feature, you should use a NC6 or higher/better SKUs for GPU such as: [NCv3](../virtual-machines/ncv3-series.md) series or [ND](../virtual-machines/nd-series.md) series.
+ * In order to use the long range text feature, you should use an NC6 or a higher/better GPU SKU, such as the [NCv3](../virtual-machines/ncv3-series.md) series or the [ND](../virtual-machines/nd-series.md) series.
# [Azure CLI](#tab/cli) [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-For CLI v2 AutoML jobs you configure your experiment in a YAML file like the following.
+For CLI v2 automated ML jobs, you configure your experiment in a YAML file like the following.
For CLI v2 AutoML jobs you configure your experiment in a YAML file like the fol
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-For AutoML jobs via the SDK, you configure the job with the specific NLP task function. The following example demonstrates the configuration for `text_classification`.
+For Automated ML jobs via the SDK, you configure the job with the specific NLP task function. The following example demonstrates the configuration for `text_classification`.
```Python # general job parameters compute_name = "gpu-cluster"
All the pre-trained text DNN models currently available in AutoML NLP for fine-t
* xlnet_base_cased * xlnet_large_cased
-Note that the large models are significantly larger than their base counterparts. They are typically more performant, but they take up more GPU memory and time for training. As such, their SKU requirements are more stringent: we recommend running on ND-series VMs for the best results.
+Note that the large models are larger than their base counterparts. They are typically more performant, but they take up more GPU memory and time for training. As such, their SKU requirements are more stringent: we recommend running on ND-series VMs for the best results.
## Supported hyperparameters
The following table describes the hyperparameters that AutoML NLP supports.
| Parameter name | Description | Syntax | |-|||
-| gradient_accumulation_steps | The number of backward operations whose gradients are to be summed up before performing one step of gradient descent by calling the optimizer's step function. <br><br> This is leveraged to use an effective batch size which is gradient_accumulation_steps times larger than the maximum size that fits the GPU. | Must be a positive integer.
+| gradient_accumulation_steps | The number of backward operations whose gradients are summed up before performing one step of gradient descent by calling the optimizer's step function. <br><br> This setting is used to get an effective batch size that is gradient_accumulation_steps times larger than the maximum size that fits the GPU. | Must be a positive integer.
| learning_rate | Initial learning rate. | Must be a float in the range (0, 1). | | learning_rate_scheduler |Type of learning rate scheduler. | Must choose from `linear, cosine, cosine_with_restarts, polynomial, constant, constant_with_warmup`. | | model_name | Name of one of the supported models. | Must choose from `bert_base_cased, bert_base_uncased, bert_base_multilingual_cased, bert_base_german_cased, bert_large_cased, bert_large_uncased, distilbert_base_cased, distilbert_base_uncased, roberta_base, roberta_large, distilroberta_base, xlm_roberta_base, xlm_roberta_large, xlnet_base_cased, xlnet_large_cased`. |
All discrete hyperparameters only allow choice distributions, such as the intege
## Configure your sweep settings
-You can configure all the sweep-related parameters. Multiple model subspaces can be constructed with hyperparameters conditional to the respective model, as seen below in each example.
+You can configure all the sweep-related parameters. Multiple model subspaces can be constructed with hyperparameters conditional on the respective model, as seen in each hyperparameter tuning example.
The same discrete and continuous distribution options that are available for general HyperDrive jobs are supported here. See all nine options in [Hyperparameter tuning a model](how-to-tune-hyperparameters.md#define-the-search-space)
When sweeping hyperparameters, you need to specify the sampling method to use fo
You can optionally specify the experiment budget for your AutoML NLP training job using the `timeout_minutes` parameter in the `limits` - the amount of time in minutes before the experiment terminates. If none specified, the default experiment timeout is seven days (maximum 60 days).
-AutoML NLP also supports `trial_timeout_minutes`, the maximum amount of time in minutes an individual trial can run before being terminated, and `max_nodes`, the maximum number of nodes from the backing compute cluster to leverage for the job. These parameters also belong to the `limits` section.
+AutoML NLP also supports `trial_timeout_minutes`, the maximum amount of time in minutes an individual trial can run before being terminated, and `max_nodes`, the maximum number of nodes from the backing compute cluster to use for the job. These parameters also belong to the `limits` section.
Parameter | Detail
`max_trials` | Parameter for maximum number of configurations to sweep. Must be an integer between 1 and 1000. When exploring just the default hyperparameters for a given model algorithm, set this parameter to 1. The default value is 1. `max_concurrent_trials`| Maximum number of runs that can run concurrently. If specified, must be an integer between 1 and 100. The default value is 1. <br><br> **NOTE:** <li> The number of concurrent runs is gated on the resources available in the specified compute target. Ensure that the compute target has the available resources for the desired concurrency. <li> `max_concurrent_trials` is capped at `max_trials` internally. For example, if user sets `max_concurrent_trials=4`, `max_trials=2`, values would be internally updated as `max_concurrent_trials=2`, `max_trials=2`.
-You can configure all the sweep related parameters as shown in the example below.
+You can configure all the sweep-related parameters as shown in this example.
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
sweep:
## Known Issues
-Dealing with very low scores, or higher loss values:
+Dealing with low scores or high loss values:
-For certain datasets, regardless of the NLP task, the scores produced may be very low, sometimes even zero. This would be accompanied by higher loss values implying that the neural network failed to converge. This can happen more frequently on certain GPU SKUs.
+For certain datasets, regardless of the NLP task, the scores produced may be very low, sometimes even zero. Low scores are accompanied by high loss values, implying that the neural network failed to converge. This behavior can happen more frequently on certain GPU SKUs.
-While such cases are uncommon, they're possible and the best way to handle it is to leverage hyperparameter tuning and provide a wider range of values, especially for hyperparameters like learning rates. Until our hyperparameter tuning capability is available in production we recommend users, who face such issues, to leverage the NC6 or ND6 compute clusters, where we've found training outcomes to be fairly stable.
+While such cases are uncommon, they're possible, and the best way to handle them is to use hyperparameter tuning and provide a wider range of values, especially for hyperparameters like learning rates. Until our hyperparameter tuning capability is available in production, we recommend that users who experience these issues use the NC6 or ND6 compute clusters. These clusters typically have training outcomes that are fairly stable.
## Next steps + [Deploy AutoML models to an online (real-time inference) endpoint](how-to-deploy-automl-endpoint.md)
-+ [Troubleshoot automated ML experiments (SDK v1)](./v1/how-to-troubleshoot-auto-ml.md?view=azureml-api-1&preserve-view=true)
++ [Hyperparameter tuning a model](how-to-tune-hyperparameters.md)
machine-learning How To Deploy Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-online-endpoints.md
For supported general-purpose and GPU instance types, see [Managed online endpoi
# [ARM template](#tab/arm)
-The preceding registration of the environment specifies a non-GPU docker image `mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1` by passing the value to the `environment-version.json` template using the `dockerImage` parameter. For a GPU compute, provide a value for a GPU docker image to the template (using the `dockerImage` parameter) and provide a GPU compute type SKU to the `online-endpoint-deployment.json` template (using the `skuName` parameter).
+The preceding registration of the environment specifies a non-GPU docker image `mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04` by passing the value to the `environment-version.json` template using the `dockerImage` parameter. For a GPU compute, provide a value for a GPU docker image to the template (using the `dockerImage` parameter) and provide a GPU compute type SKU to the `online-endpoint-deployment.json` template (using the `skuName` parameter).
For supported general-purpose and GPU instance types, see [Managed online endpoints supported VM SKUs](reference-managed-online-endpoints-vm-sku-list.md). For a list of Azure Machine Learning CPU and GPU base images, see [Azure Machine Learning base images](https://github.com/Azure/AzureML-Containers).
machine-learning How To Enable Studio Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-enable-studio-virtual-network.md
In this article, you learn how to:
> [!TIP] > This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series: >
-> * [Virtual network overview](how-to-network-security-overview.md)
:::moniker range="azureml-api-2"
+> * [Virtual network overview](how-to-network-security-overview.md)
> * [Secure the workspace resources](how-to-secure-workspace-vnet.md) > * [Secure the training environment](how-to-secure-training-vnet.md) > * [Secure the inference environment](how-to-secure-inferencing-vnet.md)
+> * [Use custom DNS](how-to-custom-dns.md)
+> * [Use a firewall](how-to-access-azureml-behind-firewall.md)
:::moniker-end :::moniker range="azureml-api-1"
+> * [Virtual network overview](how-to-network-security-overview.md)
> * [Secure the workspace resources](./v1/how-to-secure-workspace-vnet.md) > * [Secure the training environment](./v1/how-to-secure-training-vnet.md) > * [Secure the inference environment](./v1/how-to-secure-inferencing-vnet.md) > * [Use custom DNS](how-to-custom-dns.md) > * [Use a firewall](how-to-access-azureml-behind-firewall.md) > > For a tutorial on creating a secure workspace, see [Tutorial: Create a secure workspace](tutorial-create-secure-workspace.md) or [Tutorial: Create a secure workspace using a template](tutorial-create-secure-workspace-template.md).
Some storage services, such as Azure Storage Account, have firewall settings tha
This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series:
-* [Virtual network overview](how-to-network-security-overview.md)
:::moniker range="azureml-api-2"
+* [Virtual network overview](how-to-network-security-overview.md)
* [Secure the workspace resources](how-to-secure-workspace-vnet.md) * [Secure the training environment](how-to-secure-training-vnet.md) * [Secure the inference environment](how-to-secure-inferencing-vnet.md)
+* [Use custom DNS](how-to-custom-dns.md)
+* [Use a firewall](how-to-access-azureml-behind-firewall.md)
:::moniker-end :::moniker range="azureml-api-1"
+* [Virtual network overview](how-to-network-security-overview.md)
* [Secure the workspace resources](./v1/how-to-secure-workspace-vnet.md) * [Secure the training environment](./v1/how-to-secure-training-vnet.md) * [Secure the inference environment](./v1/how-to-secure-inferencing-vnet.md) * [Use custom DNS](how-to-custom-dns.md) * [Use a firewall](how-to-access-azureml-behind-firewall.md)+
machine-learning How To Inference Server Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-inference-server-http.md
The following table contains the parameters accepted by the server:
| appinsights_instrumentation_key | False | N/A | The instrumentation key to the application insights where the logs will be published. | | access_control_allow_origins | False | N/A | Enable CORS for the specified origins. Separate multiple origins with ",". <br> Example: "microsoft.com, bing.com" |
-> [!TIP]
-> CORS (Cross-origin resource sharing) is a way to allow resources on a webpage to be requested from another domain. CORS works via HTTP headers sent with the client request and returned with the service response. For more information on CORS and valid headers, see [Cross-origin resource sharing](https://en.wikipedia.org/wiki/Cross-origin_resource_sharing) in Wikipedia. See [here](v1/how-to-deploy-advanced-entry-script.md#cross-origin-resource-sharing-cors) for an example of the scoring script.
## Request flow
machine-learning How To Interactive Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-interactive-jobs.md
Previously updated : 03/15/2022 Last updated : 06/15/2023 #Customer intent: I'm a data scientist with ML knowledge in the machine learning space, looking to build ML models using data in Azure Machine Learning with full control of the model training including debugging and monitoring of live jobs.
By specifying interactive applications at job creation, you can connect directly
> [!NOTE] > If you use `sleep infinity`, you will need to manually [cancel the job](./how-to-interactive-jobs.md#end-job) to let go of the compute resource (and stop billing).
-5. Select the training applications you want to use to interact with the job.
+5. Select at least one training application you want to use to interact with the job. If you do not select an application, the debug feature will not be available.
:::image type="content" source="./media/interactive-jobs/select-training-apps.png" alt-text="Screenshot of selecting a training application for the user to use for a job.":::
machine-learning How To Log View Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-view-metrics.md
Logs can help you diagnose errors and warnings, or track performance metrics lik
> [!TIP] > This article shows you how to monitor the model training process. If you're interested in monitoring resource usage and events from Azure Machine Learning, such as quotas, completed training jobs, or completed model deployments, see [Monitoring Azure Machine Learning](monitor-azure-machine-learning.md).
-> [!TIP]
-> For information on logging metrics in Azure Machine Learning designer, see [How to log metrics in the designer](./v1/how-to-track-designer-experiments.md).
- ## Prerequisites * You must have an Azure Machine Learning workspace. [Create one if you don't have any](quickstart-create-resources.md).
machine-learning How To Machine Learning Interpretability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-machine-learning-interpretability.md
You can run the explanation remotely on Azure Machine Learning Compute and log t
* Learn how to generate the Responsible AI dashboard via [CLI v2 and SDK v2](how-to-responsible-ai-dashboard-sdk-cli.md) or the [Azure Machine Learning studio UI](how-to-responsible-ai-dashboard-ui.md). * Explore the [supported interpretability visualizations](how-to-responsible-ai-dashboard.md#feature-importances-model-explanations) of the Responsible AI dashboard. * Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed in the Responsible AI dashboard.
-* Learn how to enable [interpretability for automated machine learning models](./v1/how-to-machine-learning-interpretability-automl.md).
+* Learn how to enable [interpretability for automated machine learning models (SDK v1)](./v1/how-to-machine-learning-interpretability-automl.md).
machine-learning How To Manage Kubernetes Instance Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-kubernetes-instance-types.md
code_configuration:
instance_type: <instance type name> environment: conda_file: file:./model/conda.yml
- image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1
+ image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest
``` #### [Python SDK](#tab/select-instancetype-to-modeldeployment-with-sdk)
from azure.ai.ml import KubernetesOnlineDeployment,Model,Environment,CodeConfigu
model = Model(path="./model/sklearn_mnist_model.pkl") env = Environment( conda_file="./model/conda.yml",
- image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1",
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest",
) # define the deployment
code_configuration:
scoring_script: score.py environment: conda_file: file:./model/conda.yml
- image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1
+ image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest
resources: requests: cpu: "0.1"
from azure.ai.ml import (
model = Model(path="./model/sklearn_mnist_model.pkl") env = Environment( conda_file="./model/conda.yml",
- image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1",
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest",
) requests = ResourceSettings(cpu="0.1", memory="0.2G")
machine-learning How To Manage Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-models.md
Previously updated : 04/15/2022 Last updated : 06/16/2023
machine-learning How To Manage Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace-cli.md
Previously updated : 08/12/2022 Last updated : 06/16/2023
az group delete -g <resource-group-name>
For more information, see the [az ml workspace delete](/cli/azure/ml/workspace#az-ml-workspace-delete) documentation.
-If you accidentally deleted your workspace, are still able to retrieve your notebooks. For more information, see the [workspace deletion](./v1/how-to-high-availability-machine-learning.md#workspace-deletion) section of the disaster recovery article.
+> [!TIP]
+> The default behavior for Azure Machine Learning is to _soft delete_ the workspace. This means that the workspace is not immediately deleted, but instead is marked for deletion. For more information, see [Soft delete](./concept-soft-delete.md).
## Troubleshooting
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace.md
When you no longer need a workspace, delete it.
[!INCLUDE [machine-learning-delete-workspace](../../includes/machine-learning-delete-workspace.md)]
-If you accidentally deleted your workspace, you may still be able to retrieve your notebooks. For details, see [Failover for business continuity and disaster recovery](./v1/how-to-high-availability-machine-learning.md#workspace-deletion).
+> [!TIP]
+> The default behavior for Azure Machine Learning is to _soft delete_ the workspace. This means that the workspace is not immediately deleted, but instead is marked for deletion. For more information, see [Soft delete](./concept-soft-delete.md).
# [Python SDK](#tab/python)
machine-learning How To Use Foundation Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-foundation-models.md
Title: How to use Open Source Foundation Models curated by Azure Machine Learning (preview)
+ Title: How to use Open Source foundation models curated by Azure Machine Learning (preview)
-description: Learn how to discover, evaluate, fine-tune and deploy Open Source Foundation Models in Azure Machine Learning
+description: Learn how to discover, evaluate, fine-tune and deploy Open Source foundation models in Azure Machine Learning
Previously updated : 04/25/2023 Last updated : 06/15/2023
-# How to use Open Source Foundation Models curated by Azure Machine Learning (preview)
+# How to use Open Source foundation models curated by Azure Machine Learning (preview)
> [!IMPORTANT] > Items marked (preview) in this article are currently in public preview. > The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-In this article, you learn how to access and evaluate Foundation Models using Azure Machine Learning automated ML in the [Azure Machine Learning studio](overview-what-is-azure-machine-learning.md#studio). Additionally, you learn how to fine-tune each model and how to deploy the model at scale.
+In this article, you learn how to access and evaluate foundation models using Azure Machine Learning automated ML in the [Azure Machine Learning studio](overview-what-is-azure-machine-learning.md#studio). Additionally, you learn how to fine-tune each model and how to deploy the model at scale.
-Foundation Models are machine learning models that have been pre-trained on vast amounts of data, and that can be fine tuned for specific tasks with relatively small amount of domain specific data. These models serve as a starting point for custom models and accelerate the model building process for a variety of tasks including natural language processing, computer vision, speech and generative AI tasks. Azure Machine Learning provides the capability to easily integrate these pre-trained Foundation Models into your applications. **Foundation Models in Azure Machine Learning** provides Azure Machine Learning native capabilities that enable customers to discover, evaluate, fine tune, deploy and operationalize open-source Foundation Models at scale.
+Foundation models are machine learning models that have been pre-trained on vast amounts of data, and that can be fine-tuned for specific tasks with a relatively small amount of domain-specific data. These models serve as a starting point for custom models and accelerate the model building process for a variety of tasks including natural language processing, computer vision, speech and generative AI tasks. Azure Machine Learning provides the capability to easily integrate these pre-trained foundation models into your applications. **Foundation models in Azure Machine Learning** provides Azure Machine Learning native capabilities that enable customers to discover, evaluate, fine-tune, deploy and operationalize open-source foundation models at scale.
-## How to access Foundation Models in Azure Machine Learning
+## How to access foundation models in Azure Machine Learning
-The 'Model catalog' (preview) in Azure Machine Learning Studio is a hub for discovering Foundation Models. The Open Source Models collection is a repository of the most popular open source Foundation Models curated by Azure Machine Learning. These models are packaged for out of the box usage and are optimized for use in Azure Machine Learning. Currently, it includes the top open source large language models, with support for other tasks coming soon. You can view the complete list of supported open source Foundation Models in the [Model catalog](https://ml.azure.com/model/catalog), under the `Open Source Models` collection.
+The 'Model catalog' (preview) in Azure Machine Learning studio is a hub for discovering foundation models. The Open Source Models collection is a repository of the most popular open source foundation models curated by Azure Machine Learning. These models are packaged for out of the box usage and are optimized for use in Azure Machine Learning. Currently, it includes the top open source large language models, with support for other tasks coming soon. You can view the complete list of supported open source foundation models in the [model catalog](https://ml.azure.com/model/catalog), under the `Open Source Models` collection.
:::image type="content" source="./media/how-to-use-foundation-models/model-catalog.png" lightbox="./media/how-to-use-foundation-models/model-catalog.png" alt-text="Screenshot showing the model catalog section in Azure Machine Learning studio." :::
-You can filter the list of models in the Model catalog by Task, or by license. Select a specific model name and the see a model card for the selected model, which lists detailed information about the model. For example:
+You can filter the list of models in the model catalog by task or by license. Select a specific model name to see a model card for the selected model, which lists detailed information about the model. For example:
:::image type="content" source="./media/how-to-use-foundation-models\model-card.png" lightbox="./media/how-to-use-foundation-models\model-card.png" alt-text="Screenshot showing the model card for gpt2 in Azure Machine Learning studio. The model card shows a description of the model and samples of what the model outputs. ":::
You can filter the list of models in the Model catalog by Task, or by license. S
You can quickly test out any pre-trained model using the Sample Inference widget on the model card, providing your own sample input to test the result. Additionally, the model card for each model includes a brief description of the model and links to samples for code based inferencing, finetuning and evaluation of the model. > [!NOTE]
->If you are using a private workspace, your virtual network needs to allow outbound access in order to use Foundation Models in Azure Machine Learning
+>If you are using a private workspace, your virtual network needs to allow outbound access in order to use foundation models in Azure Machine Learning.
-## How to evaluate Foundation Models using your own test data
+## How to evaluate foundation models using your own test data
You can evaluate a Foundation Model against your test dataset, using either the Evaluate UI wizard or by using the code based samples, linked from the model card.
-### Evaluating using UI wizard
+### Evaluating using the studio
-You can invoke the Evaluate UI wizard by clicking on the 'Evaluate' button on the model card for any foundation model.
+You can invoke the Evaluate model form by selecting the **Evaluate** button on the model card for any foundation model.
-An image of the Evaluation Settings wizard:
+An image of the Evaluation Settings form:
Each model can be evaluated for the specific inference task that the model can be used for.
Each model can be evaluated for the specific inference task that the model can b
1. Pass in the test data you would like to use to evaluate your model. You can choose to either upload a local file (in JSONL format) or select an existing registered dataset from your workspace. 1. Once you've selected the dataset, you need to map the columns from your input data, based on the schema needed for the task. For example, map the column names that correspond to the 'sentence' and 'label' keys for Text Classification **Compute:** 1. Provide the Azure Machine Learning Compute cluster you would like to use for finetuning the model. Evaluation needs to run on GPU compute. Ensure that you have sufficient compute quota for the compute SKUs you wish to use.
-1. Select 'Finish' in the Evaluate wizard to submit your evaluation job. Once the job completes, you can view evaluation metrics for the model. Based on the evaluation metrics, you might decide if you would like to finetune the model using your own training data. Additionally, you can decide if you would like to register the model and deploy it to an endpoint.
+1. Select **Finish** in the Evaluate wizard to submit your evaluation job. Once the job completes, you can view evaluation metrics for the model. Based on the evaluation metrics, you might decide if you would like to finetune the model using your own training data. Additionally, you can decide if you would like to register the model and deploy it to an endpoint.
**Advanced Evaluation Parameters:**
Each model can be evaluated for the specific inference task that the model can b
### Evaluating using code based samples
-To enable users to get started with model evaluation, we have published samples (both Python notebooks and CLI examples) in the [Evaluation samples in azureml-examples git repo](https://github.com/Azure/azureml-examples/tree/main/sdk/python/foundation-models/system/evaluation). Each model card also links to Evaluation samples for corresponding tasks
+To enable users to get started with model evaluation, we have published samples (both Python notebooks and CLI examples) in the [Evaluation samples in azureml-examples git repo](https://github.com/Azure/azureml-examples/tree/main/sdk/python/foundation-models/system/evaluation). Each model card also links to evaluation samples for corresponding tasks
-## How to finetune Foundation Models using your own training data
+## How to finetune foundation models using your own training data
-In order to improve model performance in your workload, you might want to fine tune a foundation model using your own training data. You can easily finetune these Foundation Models by using either the Finetune UI wizard or by using the code based samples linked from the model card.
+In order to improve model performance in your workload, you might want to fine tune a foundation model using your own training data. You can easily finetune these foundation models by using either the finetune settings in the studio or by using the code based samples linked from the model card.
-### Finetuning using the UI wizard
+### Finetune using the studio
+You can invoke the finetune settings form by selecting the **Finetune** button on the model card for any foundation model.
-You can invoke the Finetune UI wizard by clicking on the 'Finetune' button on the model card for any foundation model.
+**Finetune Settings:**
-**Finetuning Settings:**
- **Finetuning task type**
You can invoke the Finetune UI wizard by clicking on the 'Finetune' button on th
1. Once you've selected the dataset, you need to map the columns from your input data, based on the schema needed for the task. For example: map the column names that correspond to the 'sentence' and 'label' keys for Text Classification
-* Validation data: Pass in the data you would like to use to validate your model. Selecting 'Automatic split' reserves an automatic split of training data for validation. Alternatively, you can provide a different validation dataset.
-* Test data: Pass in the test data you would like to use to evaluate your finetuned model. Selecting 'Automatic split' reserves an automatic split of training data for test.
-* Compute: Provide the Azure Machine Learning Compute cluster you would like to use for finetuning the model. Fine tuning needs to run on GPU compute. We recommend using compute SKUs with A100 / V100 GPUs when fine tuning. Ensure that you have sufficient compute quota for the compute SKUs you wish to use.
+* Validation data: Pass in the data you would like to use to validate your model. Selecting **Automatic split** reserves an automatic split of training data for validation. Alternatively, you can provide a different validation dataset.
+* Test data: Pass in the test data you would like to use to evaluate your finetuned model. Selecting **Automatic split** reserves an automatic split of training data for test.
+* Compute: Provide the Azure Machine Learning Compute cluster you would like to use for finetuning the model. Finetuning needs to run on GPU compute. We recommend using compute SKUs with A100 / V100 GPUs when fine tuning. Ensure that you have sufficient compute quota for the compute SKUs you wish to use.
-3. Select 'Finish' in the Finetune Wizard to submit your finetuning job. Once the job completes, you can view evaluation metrics for the finetuned model. You can then go ahead and register the finetuned model output by the finetuning job and deploy this model to an endpoint for inferencing.
+3. Select **Finish** in the finetune form to submit your finetuning job. Once the job completes, you can view evaluation metrics for the finetuned model. You can then register the finetuned model output by the finetuning job and deploy this model to an endpoint for inferencing.
-**Advanced Finetuning Parameters:**
+**Advanced finetuning parameters:**
-The Finetuning UI wizard, allows you to perform basic finetuning by providing your own training data. Additionally, there are several advanced finetuning parameters, such as learning rate, epochs, batch size, etc., described in the Readme file for each task [here](https://github.com/Azure/azureml-assets/tree/main/training/finetune_acft_hf_nlp/components/finetune). Each of these settings has default values, but can be customized via code based samples, if needed.
+The finetuning feature allows you to perform basic finetuning by providing your own training data. Additionally, there are several advanced finetuning parameters, such as learning rate, epochs, batch size, etc., described in the Readme file for each task [here](https://github.com/Azure/azureml-assets/tree/main/training/finetune_acft_hf_nlp/components/finetune). Each of these settings has default values, but can be customized via code based samples, if needed.
### Finetuning using code based samples
Currently, Azure Machine Learning supports finetuning models for the following l
* Summarization * Translation
-To enable users to quickly get started with fine tuning, we have published samples (both Python notebooks and CLI examples) for each task in the [azureml-examples git repo Finetune samples](https://github.com/Azure/azureml-examples/tree/main/sdk/python/foundation-models/system/finetune). Each model card also links to Finetuning samples for supported finetuning tasks.
+To enable users to quickly get started with finetuning, we have published samples (both Python notebooks and CLI examples) for each task in the [azureml-examples git repo Finetune samples](https://github.com/Azure/azureml-examples/tree/main/sdk/python/foundation-models/system/finetune). Each model card also links to Finetuning samples for supported finetuning tasks.
-## Deploying Foundation Models to endpoints for inferencing
+## Deploying foundation models to endpoints for inferencing
-You can deploy Foundation Models (both pre-trained models from the model catalog, and finetuned models, once they're registered to your workspace) to an endpoint that can then be used for inferencing. Deployment to both real time endpoints and batch endpoints is supported. You can deploy these models by using either the Deploy UI wizard or by using the code based samples linked from the model card.
+You can deploy foundation models (both pre-trained models from the model catalog, and finetuned models, once they're registered to your workspace) to an endpoint that can then be used for inferencing. Deployment to both real time endpoints and batch endpoints is supported. You can deploy these models by using either the Deploy UI wizard or by using the code based samples linked from the model card.
-### Deploying using the UI wizard
+### Deploying using the studio
You can invoke the Deploy UI wizard by selecting the **Deploy** button on the model card for any foundation model, and then selecting either **Real-time endpoint** or **Batch endpoint**.
Since the scoring script and environment are automatically included with the fou
To enable users to quickly get started with deployment and inferencing, we have published samples in the [Inference samples in the azureml-examples git repo](https://github.com/Azure/azureml-examples/tree/main/sdk/python/foundation-models/system/inference). The published samples include Python notebooks and CLI examples. Each model card also links to inference samples for real-time and batch inferencing.
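For orientation, the following is a minimal sketch of the code-based path with the Azure Machine Learning Python SDK v2: it creates a managed online endpoint and deploys a registered model to it. The endpoint and deployment names, the model reference format, and the instance type are assumptions; the published samples linked above show the exact asset IDs and SKUs to use for each foundation model.

```python
# Minimal sketch (SDK v2): deploy a registered model to a managed online endpoint.
# Endpoint/deployment names, the model reference, and the instance type are
# placeholder assumptions; see the linked inference samples for exact values.
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineEndpoint, ManagedOnlineDeployment

ml_client = MLClient(DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace-name>")

# Create the endpoint (name must be unique within the region)
endpoint = ManagedOnlineEndpoint(name="fm-demo-endpoint", auth_mode="key")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

# Deploy the registered model to the endpoint
deployment = ManagedOnlineDeployment(
    name="default",
    endpoint_name="fm-demo-endpoint",
    model="azureml:<registered-model-name>:<version>",   # assumed asset reference format
    instance_type="Standard_DS3_v2",                     # pick a SKU sized for the model
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()

# Route all traffic to the new deployment
endpoint.traffic = {"default": 100}
ml_client.online_endpoints.begin_create_or_update(endpoint).result()
```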
-## Import Foundation Models
+## Import foundation models
-If you're looking to use an open source model that isn't included in the Model Catalog, you can import the model from Hugging Face into your Azure Machine Learning workspace. Hugging Face is an open-source library for natural language processing (NLP) that provides pre-trained models for popular NLP tasks. Currently, model import supports importing models for the following tasks, as long as the model meets the requirements listed in the Model Import Notebook:
+If you're looking to use an open source model that isn't included in the model catalog, you can import the model from Hugging Face into your Azure Machine Learning workspace. Hugging Face is an open-source library for natural language processing (NLP) that provides pre-trained models for popular NLP tasks. Currently, model import supports importing models for the following tasks, as long as the model meets the requirements listed in the Model Import Notebook:
* fill-mask * token-classification
If you're looking to use an open source model that isn't included in the Model C
> [!NOTE] >Models from Hugging Face are subject to third-party license terms available on the Hugging Face model details page. It is your responsibility to comply with the model's license terms.
-You can select the "Import" button on the top-right of the Model Catalog to use the Model Import Notebook.
+You can select the "Import" button on the top-right of the model catalog to use the Model Import Notebook.
:::image type="content" source="./media/how-to-use-foundation-models/model-import.png" alt-text="Screenshot showing the model import button as it's displayed in the top right corner on the foundation model catalog.":::
You need to provide compute for the Model import to run. Running the Model Impor
## Next Steps
-To learn about how foundation model compares to other methods of training, visit [Foundation Models.](./concept-foundation-models.md)
+To learn how foundation models compare to other methods of training, visit [foundation models](./concept-foundation-models.md).
machine-learning Reference Machine Learning Cloud Parity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-machine-learning-cloud-parity.md
In the list of global Azure regions, there are several regions that serve specif
Azure Machine Learning is still in development in air-gap Regions. The information in the rest of this document provides information on what features of Azure Machine Learning are available in these regions, along with region-specific information on using these features.
-## Azure Government
+## Azure Government
| Feature | Public cloud status | US-Virginia | US-Arizona| |-|:-:|:--:|:-:|
The information in the rest of this document provides information on what featur
| [Azure Stack Edge with FPGA (SDK/CLI v1)](./v1/how-to-deploy-fpga-web-service.md#deploy-to-a-local-edge-server) | Public Preview | NO | NO | | **Other** | | | | | [Open Datasets](../open-datasets/samples.md) | Public Preview | YES | YES |
-| [Custom Cognitive Search](how-to-deploy-model-cognitive-search.md) | Public Preview | YES | YES |
+| [Custom Cognitive Search](./v1/how-to-deploy-model-cognitive-search.md) | Public Preview | YES | YES |
### Azure Government scenarios
The information in the rest of this document provides information on what featur
| Scenario | US-Virginia | US-Arizona| Limitations | |-|:-:|:--:|-| | **General security setup** | | | |
-| Disable/control internet access (inbound and outbound) and specific VNet | PARTIAL| PARTIAL | |
+| Disable/control internet access (inbound and outbound) and specific VNet | PARTIAL| PARTIAL | |
| Placement for all associated resources/services | YES | YES | | | Encryption at-rest and in-transit. | YES | YES | | | Root and SSH access to compute resources. | YES | YES | |
-| Maintain the security of deployed systems (instances, endpoints, etc.), including endpoint protection, patching, and logging | PARTIAL| PARTIAL |ACI behind VNet currently not available |
-| Control (disable/limit/restrict) the use of ACI/AKS integration | PARTIAL| PARTIAL |ACI behind VNet currently not available|
+| Maintain the security of deployed systems (instances, endpoints, etc.), including endpoint protection, patching, and logging | PARTIAL| PARTIAL |ACI behind VNet currently not available |
+| Control (disable/limit/restrict) the use of ACI/AKS integration | PARTIAL| PARTIAL |ACI behind VNet currently not available|
| Azure role-based access control (Azure RBAC) - Custom Role Creations | YES | YES | |
-| Control access to ACR images used by ML Service (Azure provided/maintained versus custom) |PARTIAL| PARTIAL | |
+| Control access to ACR images used by ML Service (Azure provided/maintained versus custom) |PARTIAL| PARTIAL | |
| **General Machine Learning Service Usage** | | | | | Ability to have a development environment to build a model, train that model, host it as an endpoint, and consume it via a webapp | YES | YES | | | Ability to pull data from ADLS (Data Lake Storage) |YES | YES | |
The information in the rest of this document provides information on what featur
* For both: `graph.windows.net`
-## Azure China 21Vianet
+## Azure China 21Vianet
| Feature | Public cloud status | CH-East-2 | CH-North-3 | |-|:-:|:--:|:-:|
machine-learning How To Deploy Model Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-model-cognitive-search.md
+
+ Title: Deploy a model for use with Cognitive Search
+
+description: Learn how to use Azure Machine Learning to deploy a model for use with Cognitive Search. The model is used as a custom skill to enrich the search experience.
+++++++ Last updated : 03/11/2021+
+monikerRange: 'azureml-api-1'
+++
+# Deploy a model for use with Cognitive Search
++
+This article teaches you how to use Azure Machine Learning to deploy a model for use with [Azure Cognitive Search](/azure/search/search-what-is-azure-search).
+
+Cognitive Search performs content processing over heterogenous content, to make it queryable by humans or applications. This process can be enhanced by using a model deployed from Azure Machine Learning.
+
+Azure Machine Learning can deploy a trained model as a web service. The web service is then embedded in a Cognitive Search _skill_, which becomes part of the processing pipeline.
+
+> [!IMPORTANT]
+> The information in this article is specific to the deployment of the model. It provides information on the supported deployment configurations that allow the model to be used by Cognitive Search.
+>
+> For information on how to configure Cognitive Search to use the deployed model, see the [Build and deploy a custom skill with Azure Machine Learning](/azure/search/cognitive-search-tutorial-aml-custom-skill) tutorial.
+
+When deploying a model for use with Azure Cognitive Search, the deployment must meet the following requirements:
+
+* Use Azure Kubernetes Service to host the model for inference.
+* Enable transport layer security (TLS) for the Azure Kubernetes Service. TLS is used to secure HTTPS communications between Cognitive Search and the deployed model.
+* The entry script must use the `inference_schema` package to generate an OpenAPI (Swagger) schema for the service.
+* The entry script must also accept JSON data as input, and generate JSON as output.
++
+## Prerequisites
+
+* An Azure Machine Learning workspace. For more information, see [Create workspace resources](../quickstart-create-resources.md).
+
+* A Python development environment with the Azure Machine Learning SDK installed. For more information, see [Azure Machine Learning SDK](/python/api/overview/azure/ml/install).
+
+* A registered model.
+
+* A general understanding of [How and where to deploy models](how-to-deploy-and-where.md).
+
+## Connect to your workspace
+
+An Azure Machine Learning workspace provides a centralized place to work with all the artifacts you create when you use Azure Machine Learning. The workspace keeps a history of all training jobs, including logs, metrics, output, and a snapshot of your scripts.
+
+To connect to an existing workspace, use the following code:
+
+> [!IMPORTANT]
+> This code snippet expects the workspace configuration to be saved in the current directory or its parent. For more information, see [Create and manage Azure Machine Learning workspaces](how-to-manage-workspace.md). For more information on saving the configuration to file, see [Create a workspace configuration file](how-to-configure-environment.md).
+
+```python
+from azureml.core import Workspace
+
+try:
+ # Load the workspace configuration from locally cached info
+ ws = Workspace.from_config()
+ print(ws.name, ws.location, ws.resource_group, sep='\t')
+ print('Library configuration succeeded')
+except Exception:
+ print('Workspace not found')
+```
+
+## Create a Kubernetes cluster
+
+**Time estimate**: Approximately 20 minutes.
+
+A Kubernetes cluster is a set of virtual machine instances (called nodes) that are used for running containerized applications.
+
+When you deploy a model from Azure Machine Learning to Azure Kubernetes Service, the model and all the assets needed to host it as a web service are packaged into a Docker container. This container is then deployed onto the cluster.
+
+The following code demonstrates how to create a new Azure Kubernetes Service (AKS) cluster for your workspace:
+
+> [!TIP]
+> You can also attach an existing Azure Kubernetes Service to your Azure Machine Learning workspace. For more information, see [How to deploy models to Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md).
+
+> [!IMPORTANT]
+> Notice that the code uses the `enable_ssl()` method to enable transport layer security (TLS) for the cluster. This is required when you plan on using the deployed model from Cognitive Search.
+
+```python
+from azureml.core.compute import AksCompute, ComputeTarget
+# Create or attach to an AKS inferencing cluster
+
+# Create the provisioning configuration with defaults
+prov_config = AksCompute.provisioning_configuration()
+
+# Enable TLS (sometimes called SSL) communications
+# Leaf domain label generates a name using the formula
+# "<leaf-domain-label>######.<azure-region>.cloudapp.azure.com"
+# where "######" is a random series of characters
+prov_config.enable_ssl(leaf_domain_label = "contoso")
+
+cluster_name = 'amlskills'
+# Try to use an existing compute target by that name.
+# If one doesn't exist, create one.
+try:
+
+ aks_target = ComputeTarget(ws, cluster_name)
+ print("Attaching to existing cluster")
+except Exception as e:
+ print("Creating new cluster")
+ aks_target = ComputeTarget.create(workspace = ws,
+ name = cluster_name,
+ provisioning_configuration = prov_config)
+ # Wait for the create process to complete
+ aks_target.wait_for_completion(show_output = True)
+```
+
+> [!IMPORTANT]
+> Azure will bill you as long as the AKS cluster exists. Make sure to delete your AKS cluster when you're done with it.
+
+For more information on using AKS with Azure Machine Learning, see [How to deploy to Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md).
+
+## Write the entry script
+
+The entry script receives data submitted to the web service, passes it to the model, and returns the scoring results. The following script loads the model on startup, and then uses the model to score data. This file is sometimes called `score.py`.
+
+> [!TIP]
+> The entry script is specific to your model. For example, the script must know the framework to use with your model, data formats, etc.
+
+> [!IMPORTANT]
+> When you plan on using the deployed model from Azure Cognitive Search you must use the `inference_schema` package to enable schema generation for the deployment. This package provides decorators that allow you to define the input and output data format for the web service that performs inference using the model.
+
+```python
+from azureml.core.model import Model
+from nlp_architect.models.absa.inference.inference import SentimentInference
+from spacy.cli.download import download as spacy_download
+import traceback
+import json
+# Inference schema for schema discovery
+from inference_schema.schema_decorators import input_schema, output_schema
+from inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType
+from inference_schema.parameter_types.standard_py_parameter_type import StandardPythonParameterType
+
+def init():
+ """
+ Set up the ABSA model for Inference
+ """
+ global SentInference
+ spacy_download('en')
+ aspect_lex = Model.get_model_path('hotel_aspect_lex')
+ opinion_lex = Model.get_model_path('hotel_opinion_lex')
+ SentInference = SentimentInference(aspect_lex, opinion_lex)
+
+# Use inference schema decorators and sample input/output to
+# build the OpenAPI (Swagger) schema for the deployment
+standard_sample_input = {'text': 'a sample input record containing some text' }
+standard_sample_output = {"sentiment": {"sentence": "This place makes false booking prices, when you get there, they say they do not have the reservation for that day.",
+ "terms": [{"text": "hotels", "type": "AS", "polarity": "POS", "score": 1.0, "start": 300, "len": 6},
+ {"text": "nice", "type": "OP", "polarity": "POS", "score": 1.0, "start": 295, "len": 4}]}}
+@input_schema('raw_data', StandardPythonParameterType(standard_sample_input))
+@output_schema(StandardPythonParameterType(standard_sample_output))
+def run(raw_data):
+ try:
+ # Get the value of the 'text' field from the JSON input and perform inference
+ input_txt = raw_data["text"]
+ doc = SentInference.run(doc=input_txt)
+ if doc is None:
+ return None
+ sentences = doc._sentences
+ result = {"sentence": doc._doc_text}
+ terms = []
+ for sentence in sentences:
+ for event in sentence._events:
+ for x in event:
+ term = {"text": x._text, "type":x._type.value, "polarity": x._polarity.value, "score": x._score,"start": x._start,"len": x._len }
+ terms.append(term)
+ result["terms"] = terms
+ print("Success!")
+ # Return the results to the client as a JSON document
+ return {"sentiment": result}
+ except Exception as e:
+ result = str(e)
+ # return error message back to the client
+ print("Failure!")
+ print(traceback.format_exc())
+ return json.dumps({"error": result, "tb": traceback.format_exc()})
+```
+
+For more information on entry scripts, see [How and where to deploy](how-to-deploy-and-where.md).
+
+## Define the software environment
+
+The environment class is used to define the Python dependencies for the service. It includes dependencies required by both the model and the entry script. In this example, it installs packages from the regular pypi index, as well as from a GitHub repo.
+
+```python
+from azureml.core.conda_dependencies import CondaDependencies
+from azureml.core import Environment
+
+conda = None
+pip = ["azureml-defaults", "azureml-monitoring",
+ "git+https://github.com/NervanaSystems/nlp-architect.git@absa", 'nlp-architect', 'inference-schema',
+ "spacy==2.0.18"]
+
+conda_deps = CondaDependencies.create(conda_packages=None, pip_packages=pip)
+
+myenv = Environment(name='myenv')
+myenv.python.conda_dependencies = conda_deps
+```
+
+For more information on environments, see [Create and manage environments for training and deployment](how-to-use-environments.md).
+
+## Define the deployment configuration
+
+The deployment configuration defines the Azure Kubernetes Service hosting environment used to run the web service.
+
+> [!TIP]
+> If you aren't sure about the memory, CPU, or GPU needs of your deployment, you can use profiling to learn these. For more information, see [How and where to deploy a model](how-to-deploy-and-where.md).
+
+```python
+from azureml.core.model import Model
+from azureml.core.webservice import Webservice
+from azureml.core.image import ContainerImage
+from azureml.core.webservice import AksWebservice, Webservice
+
+# If deploying to a cluster configured for dev/test, ensure that it was created with enough
+# cores and memory to handle this deployment configuration. Note that memory is also used by
+# things such as dependencies and Azure Machine Learning components.
+
+aks_config = AksWebservice.deploy_configuration(autoscale_enabled=True,
+ autoscale_min_replicas=1,
+ autoscale_max_replicas=3,
+ autoscale_refresh_seconds=10,
+ autoscale_target_utilization=70,
+ auth_enabled=True,
+ cpu_cores=1, memory_gb=2,
+ scoring_timeout_ms=5000,
+ replica_max_concurrent_requests=2,
+ max_request_wait_time=5000)
+```
+
+For more information, see the reference documentation for [AksService.deploy_configuration](/python/api/azureml-core/azureml.core.webservice.akswebservice#deploy-configuration-autoscale-enabled-none--autoscale-min-replicas-none--autoscale-max-replicas-none--autoscale-refresh-seconds-none--autoscale-target-utilization-none--collect-model-data-none--auth-enabled-none--cpu-cores-none--memory-gb-none--enable-app-insights-none--scoring-timeout-ms-none--replica-max-concurrent-requests-none--max-request-wait-time-none--num-replicas-none--primary-key-none--secondary-key-none--tags-none--properties-none--description-none--gpu-cores-none--period-seconds-none--initial-delay-seconds-none--timeout-seconds-none--success-threshold-none--failure-threshold-none--namespace-none--token-auth-enabled-none--compute-target-name-none-).
+
+## Define the inference configuration
+
+The inference configuration points to the entry script and the environment object:
+
+```python
+from azureml.core.model import InferenceConfig
+inf_config = InferenceConfig(entry_script='score.py', environment=myenv)
+```
+
+For more information, see the reference documentation for [InferenceConfig](/python/api/azureml-core/azureml.core.model.inferenceconfig).
+
+## Deploy the model
+
+Deploy the model to your AKS cluster and wait for it to create your service. In this example, two registered models are loaded from the registry and deployed to AKS. After deployment, the `score.py` file in the deployment loads these models and uses them to perform inference.
+
+```python
+from azureml.core.webservice import AksWebservice, Webservice
+
+c_aspect_lex = Model(ws, 'hotel_aspect_lex')
+c_opinion_lex = Model(ws, 'hotel_opinion_lex')
+service_name = "hotel-absa-v2"
+
+aks_service = Model.deploy(workspace=ws,
+ name=service_name,
+ models=[c_aspect_lex, c_opinion_lex],
+ inference_config=inf_config,
+ deployment_config=aks_config,
+ deployment_target=aks_target,
+ overwrite=True)
+
+aks_service.wait_for_deployment(show_output = True)
+print(aks_service.state)
+```
+
+For more information, see the reference documentation for [Model](/python/api/azureml-core/azureml.core.model.model).
+
+## Issue a sample query to your service
+
+The following example uses the deployment information stored in the `aks_service` variable by the previous code section. It uses this variable to retrieve the scoring URL and authentication token needed to communicate with the service:
+
+```python
+import requests
+import json
+
+primary, secondary = aks_service.get_keys()
+
+# Test data
+input_data = '{"raw_data": {"text": "This is a nice place for a relaxing evening out with friends. The owners seem pretty nice, too. I have been there a few times including last night. Recommend."}}'
+
+# Since authentication was enabled for the deployment, set the authorization header.
+headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+ primary)}
+
+# Send the request and display the results
+resp = requests.post(aks_service.scoring_uri, input_data, headers=headers)
+print(resp.text)
+```
+
+The result returned from the service is similar to the following JSON:
+
+```json
+{"sentiment": {"sentence": "This is a nice place for a relaxing evening out with friends. The owners seem pretty nice, too. I have been there a few times including last night. Recommend.", "terms": [{"text": "place", "type": "AS", "polarity": "POS", "score": 1.0, "start": 15, "len": 5}, {"text": "nice", "type": "OP", "polarity": "POS", "score": 1.0, "start": 10, "len": 4}]}}
+```
+
+## Connect to Cognitive Search
+
+For information on using this model from Cognitive Search, see the [Build and deploy a custom skill with Azure Machine Learning](/azure/search/cognitive-search-tutorial-aml-custom-skill) tutorial.
+
+## Clean up the resources
+
+If you created the AKS cluster specifically for this example, delete your resources after you're done testing it with Cognitive Search.
+
+> [!IMPORTANT]
+> Azure bills you based on how long the AKS cluster is deployed. Make sure to clean it up after you are done with it.
+
+```python
+aks_service.delete()
+aks_target.delete()
+```
+
+## Next steps
+
+* [Build and deploy a custom skill with Azure Machine Learning](/azure/search/cognitive-search-tutorial-aml-custom-skill)
machine-learning How To Manage Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-manage-workspace-cli.md
You can also delete the resource group, which deletes the workspace and all othe
az group delete -g <resource-group-name> ```
-If you accidentally deleted your workspace, are still able to retrieve your notebooks. For more information, see the [workspace deletion](how-to-high-availability-machine-learning.md#workspace-deletion) section of the disaster recovery article.
+> [!TIP]
+> The default behavior for Azure Machine Learning is to _soft delete_ the workspace. This means that the workspace is not immediately deleted, but instead is marked for deletion. For more information, see [Soft delete](../concept-soft-delete.md).
## Troubleshooting
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-manage-workspace.md
When you no longer need a workspace, delete it.
[!INCLUDE [machine-learning-delete-workspace](../../../includes/machine-learning-delete-workspace.md)]
-If you accidentally deleted your workspace, you may still be able to retrieve your notebooks. For details, see [Failover for business continuity and disaster recovery](how-to-high-availability-machine-learning.md#workspace-deletion).
+> [!TIP]
+> The default behavior for Azure Machine Learning is to _soft delete_ the workspace. This means that the workspace is not immediately deleted, but instead is marked for deletion. For more information, see [Soft delete](../concept-soft-delete.md).
Delete the workspace `ws`:
network-watcher Supported Region Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/supported-region-traffic-analytics.md
Title: Traffic analytics supported regions
-description: This article provides the list of Azure Network Watcher traffic analytics supported regions.
+description: Learn about the regions that support enabling traffic analytics on NSG flow logs and the Log Analytics workspaces that you can use.
- Previously updated : 06/15/2022 Last updated : 06/15/2023
-# Azure Network Watcher traffic analytics supported regions
+# Traffic analytics supported regions
-This article provides the list of regions supported by Traffic Analytics. You can view the list of supported regions of both NSG and Log Analytics Workspaces below.
+In this article, you learn about Azure regions that support enabling [traffic analytics](traffic-analytics.md) for NSG flow logs.
-## Supported regions: NSG
+## Supported regions: network security groups
+
+You can enable traffic analytics for NSG flow logs for network security groups that exist in any of the following Azure regions:
-You can use traffic analytics for NSGs in any of the following supported regions:
:::row::: :::column span=""::: Australia Central
You can use traffic analytics for NSGs in any of the following supported regions
:::column-end::: :::row-end:::
-## Supported regions: Log Analytics Workspaces
+## Supported regions: Log Analytics workspaces
+
+The Log Analytics workspace that you use for traffic analytics must exist in one of the following Azure regions:
-The Log Analytics workspace must exist in the following regions:
:::row::: :::column span=""::: Australia Central
The Log Analytics workspace must exist in the following regions:
:::row-end::: > [!NOTE]
-> If NSGs support a region, but the log analytics workspace does not support that region for traffic analytics as per above lists, then you can use log analytics workspace of any other supported region as a workaround.
+> If a network security group is supported for flow logging in a region, but Log Analytics workspace isn't supported in that region for traffic analytics, you can use a Log Analytics workspace from any other supported region as a workaround.
## Next steps -- Learn how to [enable flow log settings](enable-network-watcher-flow-log-settings.md).-- Learn the ways to [use traffic analytics](usage-scenarios-traffic-analytics.md).
+- Learn more about [Traffic analytics](traffic-analytics.md).
+- Learn about [Usage scenarios of traffic analytics](usage-scenarios-traffic-analytics.md).
operator-nexus Howto Baremetal Bmc Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-bmc-ssh.md
The BMCs support a maximum number of 12 users. Users are defined on a per Cluste
- The users added must be part of an Azure Active Directory (Azure AD) group. For more information, see [How to Manage Groups](../active-directory/fundamentals/how-to-manage-groups.md). - To restrict access for managing keysets, create a custom role. For more information, see [Azure Custom Roles](../role-based-access-control/custom-roles.md). In this instance, add or exclude permissions for `Microsoft.NetworkCloud/clusters/bmcKeySets`. The options are `/read`, `/write`, and `/delete`.
+> [!NOTE]
+> When BMC access is created, modified or deleted via the commands described in this
+> article, a background process delivers those changes to the machines. This process is paused during
+> Operator Nexus software upgrades. If an upgrade is known to be in progress, you can use the `--no-wait`
+> option with the command to prevent the command prompt from waiting for the process to complete.
+ ## Creating a BMC keyset The `bmckeyset create` command creates SSH access to the bare metal machine in a Cluster for a group of users.
operator-nexus Howto Baremetal Bmm Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-bmm-ssh.md
There's no limit to the number of users in a group.
- The added users must be part of an Azure Active Directory (Azure AD) group. For more information, see [How to Manage Groups](../active-directory/fundamentals/how-to-manage-groups.md). - To restrict access for managing keysets, create a custom role. For more information, see [Azure Custom Roles](../role-based-access-control/custom-roles.md). In this instance, add or exclude permissions for `Microsoft.NetworkCloud/clusters/bareMetalMachineKeySets`. The options are `/read`, `/write`, and `/delete`.
+> [!NOTE]
+> When bare metal machine access is created, modified or deleted via the commands described in this
+> article, a background process delivers those changes to the machines. This process is paused during
+> Operator Nexus software upgrades. If an upgrade is known to be in progress, you can use the `--no-wait`
+> option with the command to prevent the command prompt from waiting for the process to complete.
+ ## Creating a bare metal machine keyset The `baremetalmachinekeyset create` command creates SSH access to the bare metal machine in a Cluster for a group of users.
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
One advantage of running your workload in Azure is global reach. The flexible se
| South Central US | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | South India | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :heavy_check_mark: | | Southeast Asia | :heavy_check_mark:(v3/v4 only) | :x: $ | :heavy_check_mark: | :heavy_check_mark: |
-| Sweden Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| Sweden Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Switzerland North | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Switzerland West | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :heavy_check_mark: | | UAE North | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :x: |
private-multi-access-edge-compute-mec Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-multi-access-edge-compute-mec/overview.md
For more information, see [Azure Private 5G Core](../private-5g-core/private-5g-
**Azure Digital Twins**: Azure Digital Twins enables device sensors to be modeled in their business context considering spatial relationships, usage patterns, and other business context that turns a fleet of devices into a digital replica of a physical asset or environment. For more information, see [Azure Digital Twins](https://azure.microsoft.com/services/digital-twins/). ## Next steps
+- Learn more about [Azure Private 5G Core](/azure/private-5g-core/private-5g-core-overview)
+- Learn more about [Azure Network Function Manager](/azure/network-function-manager/overview)
+- Learn more about [Azure Kubernetes Service (AKS) hybrid deployment](/azure/aks/hybrid/)
+- Learn more about [Azure Stack Edge](/azure/databox-online/)
- Learn more about [Affirmed Private Network Service](affirmed-private-network-service-overview.md)
purview Catalog Private Link Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-private-link-faqs.md
Previously updated : 03/13/2023 Last updated : 06/16/2023 # Customer intent: As a Microsoft Purview admin, I want to set up private endpoints and managed vnets for my Microsoft Purview account for secure access or ingestion. # FAQ about Microsoft Purview private endpoints and Managed VNets
Use a Managed IR if:
Use a self-hosted integration runtime if: - You are planning to scan data sources in Azure IaaS, SaaS services behind private network or in your on-premises network. - Managed VNet is not available in the region where your Microsoft Purview account is deployed.
+- You are planning to scan any sources that are not listed under [Managed VNet IR supported sources](catalog-managed-vnet.md#supported-data-sources).
### Can I use both self-hosted integration runtime and Managed IR inside a Microsoft Purview account?
sap Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/providers.md
# What are providers in Azure Monitor for SAP solutions?
-In the context of *Azure Monitor for SAP solutions*, a *provider* contains the connection information for a corresponding component and helps to collect data from there. There are multiple provider types. For example, an SAP HANA provider is configured for a specific component within the SAP landscape, like an SAP HANA database. You can configure an Azure Monitor for SAP solutions resource (also known as SAP monitor resource) with multiple providers of the same type or multiple providers of multiple types.
+In the context of Azure Monitor for SAP solutions, a *provider* contains the connection information for a corresponding component and helps to collect data from there. There are multiple provider types. For example, an SAP HANA provider is configured for a specific component within the SAP landscape, like an SAP HANA database. You can configure an Azure Monitor for SAP solutions resource (also known as an SAP monitor resource) with multiple providers of the same type or multiple providers of multiple types.
-You can choose to configure different provider types for data collection from the corresponding component in their SAP landscape. For example, you can configure one provider for the SAP HANA provider type, another provider for high availability cluster provider type, and so on.
+You can choose to configure different provider types for data collection from the corresponding component in their SAP landscape. For example, you can configure one provider for the SAP HANA provider type, another provider for the high-availability cluster provider type, and so on.
You can also configure multiple providers of a specific provider type to reuse the same SAP monitor resource and associated managed group. For more information, see [Manage Azure Resource Manager resource groups by using the Azure portal](../../azure-resource-manager/management/manage-resource-groups-portal.md).
-![Diagram showing Azure Monitor for SAP solutions connection to available providers.](./media/providers/providers.png)
+![Diagram that shows Azure Monitor for SAP solutions connection to available providers.](./media/providers/providers.png)
-It's recommended to configure at least one provider when you deploy an Azure Monitor for SAP solutions resource. By configuring a provider, you start data collection from the corresponding component for which the provider is configured.
+We recommend that you configure at least one provider when you deploy an Azure Monitor for SAP solutions resource. By configuring a provider, you start data collection from the corresponding component for which the provider is configured.
-If you don't configure any providers at the time of deployment, the Azure Monitor for SAP solutions resource is still deployed, but no data is collected. You can add providers after deployment through the SAP monitor resource within the Azure portal. You can add or delete providers from the SAP monitor resource at any time.
+If you don't configure any providers at the time of deployment, the Azure Monitor for SAP solutions resource is still deployed, but no data is collected. You can add providers after deployment through the SAP monitor resource in the Azure portal. You can add or delete providers from the SAP monitor resource at any time.
## Provider type: SAP NetWeaver
-You can configure one or more providers of provider type SAP NetWeaver to enable data collection from SAP NetWeaver layer. Azure Monitor for SAP solutions NetWeaver provider uses the existing
-- [**SAPControl** Web service](https://www.sap.com/documents/2016/09/0a40e60d-8b7c-0010-82c7-eda71af511fa.html) interface to retrieve the appropriate information.-- SAP RFC - ability to collect additional information from the SAP system using Standard SAP RFC.-
-You can get the following data with the SAP NetWeaver provider:
--- SAP system and application server availability (e.g Instance process availability of dispatcher,ICM,Gateway,Message server,Enqueue Server,IGS Watchdog) (SAPOsControl)-- Work process usage statistics and trends (SAPOsControl)-- Enqueue Lock statistics and trends (SAPOsControl)-- Queue usage statistics and trends (SAPOsControl)-- SMON Metrics (**Tcode - /SDF/SMON**) (RFC)-- SWNC Workload, Memory, Transaction, User, RFC Usage (**Tcode - St03n**) (RFC)-- Short Dumps (**Tcode - ST22**) (RFC)-- Object Lock (**Tcode - SM12**) (RFC)-- Failed Updates (**Tcode - SM13**) (RFC)-- System Logs Analysis (**Tcode - SM21**) (RFC)-- Batch Jobs Statistics (**Tcode - SM37**) (RFC)-- Outbound Queues (**Tcode - SMQ1**) (RFC)-- Inbound Queues (**Tcode - SMQ2**) (RFC)-- Transactional RFC (**Tcode - SM59**) (RFC)-- STMS Change Transport System Metrics (**Tcode - STMS**) (RFC)
+You can configure one or more providers of the provider type SAP NetWeaver to enable data collection from the SAP NetWeaver layer. The Azure Monitor for SAP solutions NetWeaver provider uses the existing:
+
+- [SAPControl Web service](https://www.sap.com/documents/2016/09/0a40e60d-8b7c-0010-82c7-eda71af511fa.html) interface to retrieve the appropriate information.
+- SAP RFC ability to collect more information from the SAP system by using Standard SAP RFC.
+
+With the SAP NetWeaver provider, you can get the:
+
+- SAP system and application server availability (for example, instance process availability of Dispatcher, ICM, Gateway, Message Server, Enqueue Server, IGS Watchdog) (SAPOsControl).
+- Work process usage statistics and trends (SAPOsControl).
+- Enqueue lock statistics and trends (SAPOsControl).
+- Queue usage statistics and trends (SAPOsControl).
+- SMON metrics (**Tcode - /SDF/SMON**) (RFC).
+- SWNC workload, memory, transaction, user, RFC usage (**Tcode - St03n**) (RFC).
+- Short dumps (**Tcode - ST22**) (RFC).
+- Object lock (**Tcode - SM12**) (RFC).
+- Failed updates (**Tcode - SM13**) (RFC).
+- System logs analysis (**Tcode - SM21**) (RFC).
+- Batch jobs statistics (**Tcode - SM37**) (RFC).
+- Outbound queues (**Tcode - SMQ1**) (RFC).
+- Inbound queues (**Tcode - SMQ2**) (RFC).
+- Transactional RFC (**Tcode - SM59**) (RFC).
+- STMS Change Transport System metrics (**Tcode - STMS**) (RFC).
Configuring the SAP NetWeaver provider requires:
-For SOAP Web Methods:
- - Fully Qualified Domain Name of SAP Web dispatcher OR SAP Application server.
- - SAP System ID, Instance no.
- - Host file entries of all SAP application servers that get listed via SAPcontrol "GetSystemInstanceList" web method.
+For SOAP web methods:
+ - Fully qualified domain name (FQDN) of the SAP Web Dispatcher or the SAP application server.
+ - SAP system ID and instance number.
+ - Host file entries of all SAP application servers that get listed via the SAPcontrol `GetSystemInstanceList` web method.
For SOAP+RFC:
- - Fully Qualified Domain Name of SAP Web dispatcher OR SAP Application server.
- - SAP System ID, Instance no.
- - SAP Client ID, HTTP port, SAP Username and Password for login.
- - Host file entries of all SAP application servers that get listed via SAPcontrol "GetSystemInstanceList" web method.
+ - FQDN of the SAP Web Dispatcher or the SAP application server.
+ - SAP system ID and instance number.
+ - SAP client ID, HTTP port, SAP username and password for login.
+ - Host file entries of all SAP application servers that get listed via the SAPcontrol `GetSystemInstanceList` web method.
-Check [SAP NetWeaver provider](provider-netweaver.md) creation for more detail steps.
+For more information, see [Configure SAP NetWeaver for Azure Monitor for SAP solutions](provider-netweaver.md).
-![Diagram showing the NetWeaver provider architecture.](./media/providers/netweaver-architecture.png)
+![Diagram that shows the NetWeaver provider architecture.](./media/providers/netweaver-architecture.png)
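Before you add the provider, it can help to confirm that the SAPControl web service is reachable from the Azure Monitor for SAP solutions subnet. The following is a minimal sketch that fetches the service WSDL, assuming the standard sapstartsrv HTTP port (5&lt;instance number&gt;13) and that unauthenticated access to the WSDL is allowed; the host and instance number are placeholders.

```python
# Minimal reachability check for the SAPControl web service.
# Assumption: default sapstartsrv HTTP port 5<NN>13, where <NN> is the instance number.
import requests

host = "<sap-app-server-fqdn>"   # placeholder
instance_no = "00"               # placeholder
url = f"http://{host}:5{instance_no}13/?wsdl"

response = requests.get(url, timeout=10)
print(response.status_code)      # 200 means the SAPControl endpoint is reachable
```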
## Provider type: SAP HANA
-You can configure one or more providers of provider type *SAP HANA* to enable data collection from SAP HANA database. The SAP HANA provider connects to the SAP HANA database over SQL port, pulls data from the database, and pushes it to the Log Analytics workspace in your subscription. The SAP HANA provider collects data every 1 minute from the SAP HANA database.
+You can configure one or more providers of the provider type **SAP HANA** to enable data collection from the SAP HANA database. The SAP HANA provider connects to the SAP HANA database over the SQL port. The provider pulls data from the database and pushes it to the Log Analytics workspace in your subscription. The SAP HANA provider collects data every minute from the SAP HANA database.
-You can see the following data with the SAP HANA provider:
+With the SAP HANA provider, you can see the:
-- Underlying infrastructure usage-- SAP HANA host status-- SAP HANA system replication-- SAP HANA Backup data-- Fetching Services-- Network throughput between the nodes in a scaleout system-- SAP HANA Long Idling Cursors-- SAP HANA Long Running Transactions-- Checks for configuration parameter values-- SAP HANA Uncommitted Write Transactions-- SAP HANA Disk Fragmentation-- SAP HANA Statistics Server Health-- SAP HANA High Memory Usage Service-- SAP HANA Blocking Transactions
+- Underlying infrastructure usage.
+- SAP HANA host status.
+- SAP HANA system replication.
+- SAP HANA backup data.
+- Fetching services.
+- Network throughput between the nodes in a scaleout system.
+- SAP HANA long-idling cursors.
+- SAP HANA long-running transactions.
+- Checks for configuration parameter values.
+- SAP HANA uncommitted write transactions.
+- SAP HANA disk fragmentation.
+- SAP HANA statistics server health.
+- SAP HANA high memory usage service.
+- SAP HANA blocking transactions.
+Configuring the SAP HANA provider requires the:
+- Host IP address.
+- HANA SQL port number.
+- SYSTEMDB username and password.
-Configuring the SAP HANA provider requires:
-- The host IP address,-- HANA SQL port number-- **SYSTEMDB** username and password
+We recommend that you configure the SAP HANA provider against SYSTEMDB. However, you can configure more providers against other database tenants.
-It's recommended to configure the SAP HANA provider against **SYSTEMDB**. However, more providers can be configured against other database tenants.
+For more information, see [Configure SAP HANA provider for Azure Monitor for SAP solutions](provider-hana.md).
-Check [SAP HANA provider](provider-hana.md) creation for more detail steps.
+![Diagram that shows Azure Monitor for SAP solutions providers - SAP HANA architecture.](./media/providers/azure-monitor-providers-hana.png)
-![Diagram shows Azure Monitor for SAP solutions providers - SAP HANA architecture.](./media/providers/azure-monitor-providers-hana.png)
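To verify the SYSTEMDB connection details before you create the SAP HANA provider, you can run a quick check with the SAP HANA Python client (`hdbcli`). This is a minimal sketch; the host, port, and credentials are placeholders, and the port is assumed to be the SYSTEMDB SQL port (3&lt;instance number&gt;13).

```python
# Minimal sketch: verify SYSTEMDB connectivity with the hdbcli driver.
# Host, port (3<NN>13 for SYSTEMDB), and credentials are placeholders.
from hdbcli import dbapi

connection = dbapi.connect(
    address="<hana-host-ip>",
    port=30013,                      # instance 00 SYSTEMDB SQL port (assumption)
    user="<systemdb-user>",
    password="<password>",
)

cursor = connection.cursor()
cursor.execute("SELECT DATABASE_NAME, ACTIVE_STATUS FROM M_DATABASES")
for row in cursor.fetchall():
    print(row)
connection.close()
```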
+## Provider type: SQL Server
-## Provider type: Microsoft SQL server
+You can configure one or more SQL Server providers to enable data collection from [SQL Server on virtual machines](https://azure.microsoft.com/services/virtual-machines/sql-server/). The SQL Server provider connects to SQL Server over the SQL port. It then pulls data from the database and pushes it to the Log Analytics workspace in your subscription. Configure SQL Server for SQL authentication and for signing in with the SQL Server username and password. Set the SAP database as the default database for the provider. The SQL Server provider collects data every 60 seconds up to every hour from the SQL Server.
-You can configure one or more Microsoft SQL Server providers to enable data collection from [SQL Server on Virtual Machines](https://azure.microsoft.com/services/virtual-machines/sql-server/). The SQL Server provider connects to Microsoft SQL Server over the SQL port. It then pulls data from the database and pushes it to the Log Analytics workspace in your subscription. Configure SQL Server for SQL authentication and for signing in with the SQL Server username and password. Set the SAP database as the default database for the provider. The SQL Server provider collects data from every 60 seconds up to every hour from the SQL server.
+With the SQL Server provider, you can get the:
+- Underlying infrastructure usage.
+- Top SQL statements.
+- Top largest table.
+- Problems recorded in the SQL Server error log.
+- Blocking processes and others.
-You can get the following data with the SQL Server provider:
-- Underlying infrastructure usage-- Top SQL statements-- Top largest table-- Problems recorded in the SQL Server error log-- Blocking processes and others
+Configuring the SQL Server provider requires the:
+- SAP system ID.
+- Host IP address.
+- SQL Server port number.
+- SQL Server username and password.
-Configuring Microsoft SQL Server provider requires:
-- The SAP System ID-- The Host IP address-- The SQL Server port number-- The SQL Server username and password
+ For more information, see [Configure SQL Server for Azure Monitor for SAP solutions](provider-sql-server.md).
-Check [SQL Database provider](provider-sql-server.md) creation for more detail steps.
-
-![Diagram shows Azure Monitor for SAP solutions providers - SQL architecture.](./media/providers/azure-monitor-providers-sql.png)
+![Diagram that shows Azure Monitor for SAP solutions providers - SQL architecture.](./media/providers/azure-monitor-providers-sql.png)
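As with the other providers, it can be useful to confirm that the SQL authentication login and port work before you register the SQL Server provider. The following is a minimal sketch with `pyodbc`; the driver name, server, port, database, and credentials are placeholder assumptions that you should adjust to your environment.

```python
# Minimal sketch: verify SQL authentication connectivity for the SQL Server provider.
# Server, port, database, and credentials are placeholders.
import pyodbc

connection = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<sql-host-ip>,<port>;"
    "DATABASE=<sap-database>;"
    "UID=<sql-user>;PWD=<password>;"
    "Encrypt=yes;TrustServerCertificate=yes;"
)

cursor = connection.cursor()
cursor.execute("SELECT @@VERSION")
print(cursor.fetchone()[0])
connection.close()
```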
## Provider type: High-availability cluster
-You can configure one or more providers of provider type *High-availability cluster* to enable data collection from Pacemaker cluster within the SAP landscape. The High-availability cluster provider connects to Pacemaker using the [ha_cluster_exporter](https://github.com/ClusterLabs/ha_cluster_exporter) for **SUSE** based clusters and by using [Performance co-pilot](https://access.redhat.com/articles/6139852) for **RHEL** based clusters. Azure Monitor for SAP solutions then pulls data from cluster and pushes it to Log Analytics workspace in your subscription. The High-availability cluster provider collects data every 60 seconds from Pacemaker.
+You can configure one or more providers of the provider type *high-availability cluster* to enable data collection from the Pacemaker cluster within the SAP landscape. The high-availability cluster provider connects to Pacemaker by using the [ha_cluster_exporter](https://github.com/ClusterLabs/ha_cluster_exporter) for **SUSE**-based clusters and by using [Performance co-pilot](https://access.redhat.com/articles/6139852) for **RHEL**-based clusters. Azure Monitor for SAP solutions then pulls data from the cluster and pushes it to the Log Analytics workspace in your subscription. The high-availability cluster provider collects data every 60 seconds from Pacemaker.
-You can get the following data with the High-availability cluster provider:
+With the high-availability cluster provider, you can get the:
+ - Cluster status represented as a roll-up of node and resource status.
+ - Location constraints.
+ - Trends.
+ - [Others](https://github.com/ClusterLabs/ha_cluster_exporter/blob/master/doc/metrics.md).
-![Diagram shows Azure Monitor for SAP solutions providers - High-availability cluster architecture.](./media/providers/azure-monitor-providers-pacemaker-cluster.png)
+![Diagram that shows Azure Monitor for SAP solutions providers - High-availability cluster architecture.](./media/providers/azure-monitor-providers-pacemaker-cluster.png)
-To configure a High-availability cluster provider, two primary steps are involved:
+To configure a high-availability cluster provider, two primary steps are involved:
1. Install [ha_cluster_exporter](provider-ha-pacemaker-cluster.md) in *each* node within the Pacemaker cluster.
- You have two options for installing ha_cluster_exporter:
+ You have two options for installing `ha_cluster_exporter`:
- Use Azure Automation scripts to deploy a high-availability cluster. The scripts install [ha_cluster_exporter](https://github.com/ClusterLabs/ha_cluster_exporter) on each cluster node. - Do a [manual installation](https://github.com/ClusterLabs/ha_cluster_exporter#manual-clone--build).
-2. Configure a High-availability cluster provider for *each* node within the Pacemaker cluster.
+1. Configure a high-availability cluster provider for *each* node within the Pacemaker cluster.
- To configure the High-availability cluster provider, the following information is required:
+ To configure the high-availability cluster provider, the following information is required:
- - **Name**. A name for this provider. It should be unique for this Azure Monitor for SAP solutions instance.
- - **Prometheus Endpoint**. `http://<servername or ip address>:9664/metrics`.
- - **SID**. For SAP systems, use the SAP SID. For other systems (for example, NFS clusters), use a three-character name for the cluster. The SID must be distinct from other clusters that are monitored.
- - **Cluster name**. The cluster name used when creating the cluster. The cluster name can be found in the cluster property `cluster-name`.
- - **Hostname**. The Linux hostname of the virtual machine (VM).
+ - **Name**: A name for this provider. It should be unique for this Azure Monitor for SAP solutions instance.
+ - **Prometheus endpoint**: `http://<servername or ip address>:9664/metrics`.
+ - **SID**: For SAP systems, use the SAP SID. For other systems (for example, NFS clusters), use a three-character name for the cluster. The SID must be distinct from other clusters that are monitored.
+ - **Cluster name**: The cluster name used when you're creating the cluster. You can find the cluster name in the cluster property `cluster-name`.
+ - **Hostname**: The Linux hostname of the virtual machine (VM).
- Check [High Availability Cluster provider](provider-ha-pacemaker-cluster.md) creation for more detail steps.
+ For more information, see [Create a high-availability cluster provider for Azure Monitor for SAP solutions](provider-ha-pacemaker-cluster.md).
## Provider type: OS (Linux)
-You can configure one or more providers of provider type OS (Linux) to enable data collection from a BareMetal or VM node. The OS (Linux) provider connects to BareMetal or VM nodes using the [Node_Exporter](https://github.com/prometheus/node_exporter) endpoint. It then pulls data from the nodes and pushes it to Log Analytics workspace in your subscription. The OS (Linux) provider collects data every 60 seconds for most of the metrics from the nodes.
+You can configure one or more providers of the provider type OS (Linux) to enable data collection from a BareMetal or VM node. The OS (Linux) provider connects to BareMetal or VM nodes by using the [Node_Exporter](https://github.com/prometheus/node_exporter) endpoint. It then pulls data from the nodes and pushes it to the Log Analytics workspace in your subscription. The OS (Linux) provider collects data every 60 seconds for most of the metrics from the nodes.
-You can get the following data with the OS (Linux) provider:
+With the OS (Linux) provider, you can get the:
- - CPU usage, CPU usage by process
- - Disk usage, I/O read & write
- - Memory distribution, memory usage, swap memory usage
- - Network usage, network inbound & outbound traffic details
+ - CPU usage and CPU usage by process.
+ - Disk usage and I/O read and write.
+ - Memory distribution, memory usage, and swap memory usage.
+ - Network usage and the network inbound and outbound traffic details.
To configure an OS (Linux) provider, two primary steps are involved: 1. Install [Node_Exporter](https://github.com/prometheus/node_exporter) on each BareMetal or VM node.
- You have two options for installing [Node_exporter](https://github.com/prometheus/node_exporter):
- - For automated installation with Ansible, use [Node_Exporter](https://github.com/prometheus/node_exporter) on each BareMetal or VM node to install the OS (Linux) Provider.
+ You have two options for installing [Node_Exporter](https://github.com/prometheus/node_exporter):
+ - For automated installation with Ansible, use [Node_Exporter](https://github.com/prometheus/node_exporter) on each BareMetal or VM node to install the OS (Linux) provider.
- Do a [manual installation](https://prometheus.io/docs/guides/node-exporter/). 1. Configure an OS (Linux) provider for each BareMetal or VM node instance in your environment. To configure the OS (Linux) provider, the following information is required:
- - **Name**: a name for this provider, unique to the Azure Monitor for SAP solutions instance.
- - **Node Exporter endpoint**: usually `http://<servername or ip address>:9100/metrics`.
+ - **Name**: A name for this provider that's unique to the Azure Monitor for SAP solutions instance.
+ - **Node Exporter endpoint**: Usually `http://<servername or ip address>:9100/metrics`.
-Port 9100 is exposed for the **Node_Exporter** endpoint.
+Port 9100 is exposed for the `Node_Exporter` endpoint.
-Check [Operating System provider](provider-linux.md) creation for more detail steps.
+For more information, see [Configure Linux provider for Azure Monitor for SAP solutions](provider-linux.md).
> [!Warning]
-> Make sure **Node-Exporter** keeps running after the node reboot.
+> Make sure `Node-Exporter` keeps running after the node reboot.
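A quick way to confirm that an exporter endpoint is up (port 9664 for `ha_cluster_exporter`, port 9100 for `Node_Exporter`) is to request the metrics path and check for a 200 response. The following is a minimal sketch with a placeholder host; it probes both ports, so drop whichever one doesn't apply to the node.

```python
# Minimal sketch: check that a Prometheus-style exporter endpoint responds.
# Port 9664 is used by ha_cluster_exporter; port 9100 by Node_Exporter.
import requests

host = "<node-hostname-or-ip>"   # placeholder
for port in (9664, 9100):
    try:
        response = requests.get(f"http://{host}:{port}/metrics", timeout=5)
        print(port, response.status_code)
    except requests.RequestException as error:
        print(port, "unreachable:", error)
```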
## Provider type: IBM Db2
-You can configure one or more IBM Db2 providers to enable data collection from IBM Db2 servers. The Db2 Server provider connects to database over given port. It then pulls data from the database and pushes it to the Log Analytics workspace in your subscription. The Db2 Server provider collects data from every 60 seconds up to every hour from the DB2 server.
+You can configure one or more IBM Db2 providers to enable data collection from IBM Db2 servers. The Db2 Server provider connects to the database over a specific port. It then pulls data from the database and pushes it to the Log Analytics workspace in your subscription. The Db2 Server provider collects data every 60 seconds up to every hour from the Db2 Server.
-You can get the following data with the IBM Db2 provider:
+With the IBM Db2 provider, you can get the:
-- Database availability-- Number of connections-- Logical and physical reads-- Waits and current locks-- Top 20 runtime and executions
+- Database availability.
+- Number of connections.
+- Logical and physical reads.
+- Waits and current locks.
+- Top 20 runtime and executions.
-Configuring IBM Db2 provider requires:
-- The SAP System ID-- The Host IP address-- The Database Name-- The Port number of the DB2 Server to connect to-- The Db2 Server username and password
+Configuring the IBM Db2 provider requires the:
+- SAP system ID.
+- Host IP address.
+- Database name.
+- Port number of the Db2 Server to connect to.
+- Db2 Server username and password.
-Check [IBM Db2 provider](provider-ibm-db2.md) creation for more detail steps.
+For more information, see [Create IBM Db2 provider for Azure Monitor for SAP solutions](provider-ibm-db2.md).
-![Diagram shows Azure Monitor for SAP solutions providers - IBM Db2 architecture.](./media/providers/azure-monitor-providers-db2.png)
+![Diagram that shows Azure Monitor for SAP solutions providers - IBM Db2 architecture.](./media/providers/azure-monitor-providers-db2.png)
## Next steps
sap Set Up Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/set-up-network.md
Title: Set up network for Azure Monitor for SAP solutions
+ Title: Set up a network for Azure Monitor for SAP solutions
description: Learn how to set up an Azure virtual network for use with Azure Monitor for SAP solutions.
Last updated 10/27/2022
#Customer intent: As a developer, I want to set up an Azure virtual network so that I can use Azure Monitor for SAP solutions.
-# Set up network for Azure Monitor for SAP solutions
+# Set up a network for Azure Monitor for SAP solutions
-In this how-to guide, you'll learn how to configure an Azure virtual network so that you can deploy *Azure Monitor for SAP solutions*.
-- You'll learn to [create a new subnet](#create-new-subnet) for use with Azure Functions.-- You'll learn to [set up outbound internet access](#configure-outbound-internet-access) to the SAP environment that you want to monitor.
+In this how-to guide, you learn how to configure an Azure virtual network so that you can deploy Azure Monitor for SAP solutions. You learn how to:
-## Create new subnet
+- [Create a new subnet](#create-a-new-subnet) for use with Azure Functions.
+- [Set up outbound internet access](#configure-outbound-internet-access) to the SAP environment that you want to monitor.
-Azure Functions is the data collection engine for Azure Monitor for SAP solutions. You'll need to create a new subnet to host Azure Functions.
+## Create a new subnet
-[Craete a new subnet](../../azure-functions/functions-networking-options.md#subnets) with an **IPv4/25** block or larger. Since we need atleast 100 IP addresses for monitoring resources.
-After subnet creation is successful, verify the below steps to ensure connectivity between Azure Monitor for SAP solutions subnet to your SAP environment subnet.
+Azure Functions is the data collection engine for Azure Monitor for SAP solutions. You must create a new subnet to host Azure Functions.
+
+[Create a new subnet](../../azure-functions/functions-networking-options.md#subnets) with an **IPv4/25** block or larger because you need at least 100 IP addresses for monitoring resources.
+After you successfully create a subnet, verify the following steps to ensure connectivity between the Azure Monitor for SAP solutions subnet and your SAP environment subnet:
- If both the subnets are in different virtual networks, do a virtual network peering between the virtual networks.
-- If the subnets are associated with user defined routes, make sure the routes are configured to allow traffic between the subnets.
-- If the SAP Environment subnets have NSG rules, make sure the rules are configured to allow inbound traffic from Azure Monitor for SAP solutions subnet.
-- If you have a firewall in your SAP environment, make sure the firewall is configured to allow inbound traffic from Azure Monitor for SAP solutions subnet.
+- If the subnets are associated with user-defined routes, make sure the routes are configured to allow traffic between the subnets.
+- If the SAP environment subnets have network security group (NSG) rules, make sure the rules are configured to allow inbound traffic from the Azure Monitor for SAP solutions subnet.
+- If you have a firewall in your SAP environment, make sure the firewall is configured to allow inbound traffic from the Azure Monitor for SAP solutions subnet.
For more information, see how to [integrate your app with an Azure virtual network](../../app-service/overview-vnet-integration.md).
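As a rough sketch, the monitoring subnet and the peering could be set up with the Azure CLI. The resource group, virtual network names, and address range below are placeholders, not values from this article.

```bash
# Create a /25 subnet for the Azure Monitor for SAP solutions (Azure Functions) resources.
az network vnet subnet create \
  --resource-group contoso-monitoring-rg \
  --vnet-name contoso-monitoring-vnet \
  --name ams-subnet \
  --address-prefixes 10.10.0.0/25

# If the SAP environment is in a different virtual network, peer the two networks.
# Repeat the peering in the opposite direction from the SAP virtual network.
az network vnet peering create \
  --resource-group contoso-monitoring-rg \
  --vnet-name contoso-monitoring-vnet \
  --name ams-to-sap \
  --remote-vnet /subscriptions/<subscription-id>/resourceGroups/contoso-sap-rg/providers/Microsoft.Network/virtualNetworks/contoso-sap-vnet \
  --allow-vnet-access
```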
-## Using Custom DNS for Virtual Network
+## Use Custom DNS for your virtual network
-This section only applies to if you are using Custom DNS for your Virtual Network. Add the IP Address 168.63.129.16 which points to Azure DNS Server. This will resolve the storage account and other resource urls which are required for proper functioning of Azure Monitor for SAP Solutions. see below reference image.
+This section only applies if you're using Custom DNS for your virtual network. Add the IP address 168.63.129.16, which points to Azure DNS Server. This arrangement resolves the storage account and other resource URLs that are required for proper functioning of Azure Monitor for SAP solutions.
> [!div class="mx-imgBorder"]
-> ![Screenshot of Custom DNS Setting.]([../../media/set-up-network/adding-custom-dns.png)
+> ![Screenshot that shows the Custom DNS setting.](../../media/set-up-network/adding-custom-dns.png)
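If you manage the virtual network's DNS settings with the Azure CLI, adding 168.63.129.16 could look like the following sketch. The resource group, virtual network name, and the existing DNS server IP are placeholders.

```bash
# --dns-servers replaces the whole list, so include your existing custom DNS servers too.
az network vnet update \
  --resource-group contoso-monitoring-rg \
  --name contoso-monitoring-vnet \
  --dns-servers 10.10.0.10 168.63.129.16
```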
## Configure outbound internet access
-In many use cases, you might choose to restrict or block outbound internet access to your SAP network environment. However, Azure Monitor for SAP solutions requires network connectivity between the [subnet that you configured](#create-new-subnet) and the systems that you want to monitor. Before you deploy an Azure Monitor for SAP solutions resource, you need to configure outbound internet access, or the deployment will fail.
+In many use cases, you might choose to restrict or block outbound internet access to your SAP network environment. However, Azure Monitor for SAP solutions requires network connectivity between the [subnet that you configured](#create-a-new-subnet) and the systems that you want to monitor. Before you deploy an Azure Monitor for SAP solutions resource, you must configure outbound internet access or the deployment fails.
There are multiple methods to address restricted or blocked outbound internet access. Choose the method that works best for your use case:

-- [Use the **Route All** feature in Azure functions](#use-route-all)
-- [Use service tags with a network security group (NSG) in your virtual network](#use-service-tags)
-- [Use a private endpoint for your subnet](#use-private-endpoint)
+- [Use the Route All feature in Azure Functions](#use-route-all)
+- [Use service tags with an NSG in your virtual network](#use-service-tags)
+- [Use a private endpoint for your subnet](#use-a-private-endpoint)
### Use Route All
You can configure the **Route All** setting when you create an Azure Monitor for
You can only use this option before you deploy an Azure Monitor for SAP solutions resource. It's not possible to change the **Route All** setting after you create the Azure Monitor for SAP solutions resource.
-### Allow Inbound Traffic
+### Allow inbound traffic
-In case you have NSG or User Defined Route rules that block inbound traffic to your SAP Environment, then you need to modify the rules to allow the inbound traffic, also depending on the types of providers you are trying to onboard you have to unblock a few ports as mentioned below.
+If you have NSG or User-Defined Route rules that block inbound traffic to your SAP environment, you must modify the rules to allow the inbound traffic. Also, depending on the types of providers you're trying to add, you must unblock a few ports, as shown in the following table.
-| **Provider Type** | **Port Number** |
+| Provider type | Port number |
|||
| Prometheus OS | 9100 |
| Prometheus HA Cluster on RHEL | 44322 |
| Prometheus HA Cluster on SUSE | 9100 |
-| SQL Server | 1433 (can be different if you are not using the default port) |
-| DB2 Server | 25000 (can be different if you are not using the default port) |
+| SQL Server | 1433 (can be different if you aren't using the default port) |
+| DB2 Server | 25000 (can be different if you aren't using the default port) |
| SAP HANA DB | 3\<instance number\>13, 3\<instance number\>15 |
| SAP NetWeaver | 5\<instance number\>13, 5\<instance number\>15 |

### Use service tags
-If you use NSGs, you can create Azure Monitor for SAP solutions-related [virtual network service tags](../../virtual-network/service-tags-overview.md) to allow appropriate traffic flow for your deployment. A service tag represents a group of IP address prefixes from a given Azure service.
+If you use NSGs, you can create Azure Monitor for SAP solutions-related [virtual network service tags](../../virtual-network/service-tags-overview.md) to allow appropriate traffic flow for your deployment. A service tag represents a group of IP address prefixes from a specific Azure service.
-You can use this option after you've deployed an Azure Monitor for SAP solutions resource.
+You can use this option after you deploy an Azure Monitor for SAP solutions resource.
1. Find the subnet associated with your Azure Monitor for SAP solutions managed resource group:
 1. Sign in to the [Azure portal](https://portal.azure.com).
 1. Search for or select the Azure Monitor for SAP solutions service.
 1. On the **Overview** page for Azure Monitor for SAP solutions, select your Azure Monitor for SAP solutions resource.
 1. On the managed resource group's page, select the Azure Functions app.
- 1. On the app's page, select the **Networking** tab. Then, select **VNET Integration**.
- 1. Review and note the subnet details. You'll need the subnet's IP address to create rules in the next step.
+ 1. On the app's page, select the **Networking** tab. Then select **VNET Integration**.
+ 1. Review and note the subnet details. You need the subnet's IP address to create rules in the next step.
1. Select the subnet's name to find the associated NSG. Note the NSG's information.
-3. Set new NSG rules for outbound network traffic:
+1. Set new NSG rules for outbound network traffic:
 1. Go to the NSG resource in the Azure portal.
 1. On the NSG's menu, under **Settings**, select **Outbound security rules**.
- 1. Select the **Add** button to add the following new rules:
-
-| **Priority** | **Name** | **Port** | **Protocol** | **Source** | **Destination** | **Action** |
-|--|--|-|--||-||
-| 450 | allow_monitor | 443 | TCP | Azure Function subnet | Azure Monitor | Allow |
-| 501 | allow_keyVault | 443 | TCP | Azure Function subnet | Azure Key Vault | Allow |
-| 550 | allow_storage | 443 | TCP | Azure Function subnet | Storage | Allow |
-| 600 | allow_azure_controlplane | 443 | Any | Azure Function subnet | Azure Resource Manager | Allow |
-| 650 | allow_ams_to_source_system | Any | Any | Azure Function subnet | Virtual Network or comma separated IP addresses of the source system. | Allow |
-| 660 | deny_internet | Any | Any | Any | Internet | Deny |
-
+ 1. Select **Add** to add the following new rules:
+
+ | Priority | Name | Port | Protocol | Source | Destination | Action |
+ |--|--|-|--||-||
+ | 450 | allow_monitor | 443 | TCP | Azure Functions subnet | Azure Monitor | Allow |
+ | 501 | allow_keyVault | 443 | TCP | Azure Functions subnet | Azure Key Vault | Allow |
+ | 550 | allow_storage | 443 | TCP | Azure Functions subnet | Storage | Allow |
+ | 600 | allow_azure_controlplane | 443 | Any | Azure Functions subnet | Azure Resource Manager | Allow |
+ | 650 | allow_ams_to_source_system | Any | Any | Azure Functions subnet | Virtual network or comma-separated IP addresses of the source system | Allow |
+ | 660 | deny_internet | Any | Any | Any | Internet | Deny |
+
The Azure Monitor for SAP solution's subnet IP address refers to the IP of the subnet associated with your Azure Monitor for SAP solutions resource. To find the subnet, go to the Azure Monitor for SAP solutions resource in the Azure portal. On the **Overview** page, review the **vNet/subnet** value.
-For the rules that you create, **allow_vnet** must have a lower priority than **deny_internet**. All other rules also need to have a lower priority than **allow_vnet**. However, the remaining order of these other rules is interchangeable.
+For the rules that you create, **allow_vnet** must have a lower priority than **deny_internet**. All other rules also need to have a lower priority than **allow_vnet**. The remaining order of these other rules is interchangeable.
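For reference, one rule from the preceding table could be created with the Azure CLI roughly as follows. The resource group, NSG name, and subnet prefix are placeholders; `AzureMonitor` is the service tag used as the destination.

```bash
# Allow the Azure Functions subnet to reach Azure Monitor over HTTPS (priority 450 in the table above).
az network nsg rule create \
  --resource-group contoso-monitoring-rg \
  --nsg-name ams-nsg \
  --name allow_monitor \
  --priority 450 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 10.10.0.0/25 \
  --source-port-ranges '*' \
  --destination-address-prefixes AzureMonitor \
  --destination-port-ranges 443
```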
-### Use private endpoint
+### Use a private endpoint
You can enable a private endpoint by creating a new subnet in the same virtual network as the system that you want to monitor. No other resources can use this subnet. It's not possible to use the same subnet as Azure Functions for your private endpoint.

To create a private endpoint for Azure Monitor for SAP solutions:
-1. Create a Azure Private DNS zone which will contain the private endpoint records. You can follow the steps in [Create a private DNS zone](../../dns/private-dns-getstarted-portal.md) to create a private DNS zone. Make sure to link the private DNS zone to the virtual networks that contain you SAP System and Azure Monitor for SAP solutions resources.
+1. Create an Azure Private DNS zone to contain the private endpoint records. Follow the steps in [Create a private DNS zone](../../dns/private-dns-getstarted-portal.md) to create a private DNS zone. Make sure to link the private DNS zone to the virtual networks that contain your SAP system and Azure Monitor for SAP solutions resources.
> [!div class="mx-imgBorder"]
- > ![Screenshot of Adding Virtual Network Link to Private DNS Zone.]([../../media/set-up-network/dns-add-private-link.png)
+ > ![Screenshot that shows adding a virtual network link to a private DNS zone.](../../media/set-up-network/dns-add-private-link.png)
-1. Create a subnet in the virtual network, that will be used for the private endpoint. Note down the subnet ID and Private IP Address for these subnets.
-2. To find the resources in the Azure portal, go to your Azure Monitor for SAP solutions resource.
-3. On the **Overview** page for the Azure Monitor for SAP solutions resource, select the **Managed resource group**.
+1. Create a subnet in the virtual network that will be used for the private endpoint. Note the subnet ID and private IP address for these subnets.
+1. To find the resources in the Azure portal, go to your Azure Monitor for SAP solutions resource.
+1. On the **Overview** page for the Azure Monitor for SAP solutions resource, select the **Managed resource group**.
-#### Create key vault endpoint
+#### Create a key vault endpoint
-You can follow the steps in [Create a private endpoint for Azure Key Vault](../../key-vault/general/private-link-service.md) to configure the endpoint and test the connectivity to key vault.
+Follow the steps in [Create a private endpoint for Azure Key Vault](../../key-vault/general/private-link-service.md) to configure the endpoint and test the connectivity to a key vault.
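If you prefer the Azure CLI over the portal, the key vault private endpoint could be sketched as follows. The key vault, virtual network, and subnet names are placeholders; `vault` is the sub-resource (group ID) for Key Vault.

```bash
# Look up the key vault's resource ID.
KV_ID=$(az keyvault show --name contoso-ams-kv --query id --output tsv)

# Create the private endpoint in the subnet reserved for private endpoints.
az network private-endpoint create \
  --resource-group contoso-monitoring-rg \
  --name pe-ams-keyvault \
  --vnet-name contoso-sap-vnet \
  --subnet pe-subnet \
  --private-connection-resource-id "$KV_ID" \
  --group-id vault \
  --connection-name ams-keyvault-connection
```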
-#### Create storage endpoint
+#### Create a storage endpoint
-It's necessary to create a separate private endpoint for each Azure Storage account resource, including the queue, table, storage blob, and file. If you create a private endpoint for the storage queue, it's not possible to access the resource from systems outside of the virtual networking, including the Azure portal. However, other resources in the same storage account are accessible.
+It's necessary to create a separate private endpoint for each Azure Storage account resource, including the queue, table, storage blob, and file. If you create a private endpoint for the storage queue, it's not possible to access the resource from systems outside the virtual networking, including the Azure portal. Other resources in the same storage account are accessible.
Repeat the following process for each type of storage subresource (table, queue, blob, and file):
Repeat the following process for each type of storage subresource (table, queue,
1. On the **Virtual Network** tab, select the virtual network and the subnet that you created specifically for the endpoint. It's not possible to use the same subnet as the Azure Functions app.

 > [!div class="mx-imgBorder"]
- > ![Screenshot of Creating Private Endpoint - Virtual Network.]([../../media/set-up-network/private-endpoint-vnet-step.png)
+ > ![Screenshot that shows creating a private endpoint on the Virtual Network tab.](../../media/set-up-network/private-endpoint-vnet-step.png)
1. On the **DNS** tab, for **Integrate with private DNS zone**, select **Yes**.
1. On the **Tags** tab, add tags if necessary.
1. Select **Review + create** to create the private endpoint.
-1. After the deployment is complete, Navigate back to Storage Account. On the **Networking** page, select the **Firewalls and virtual networks** tab.
+1. After the deployment is complete, go back to your storage account. On the **Networking** page, select the **Firewalls and virtual networks** tab.
1. For **Public network access**, select **Enable from all networks**.
1. Select **Apply** to save the changes.
-1. Make sure to create private endpoints for all storage sub-resources (table, queue, blob, and file)
+1. Make sure to create private endpoints for all storage subresources (table, queue, blob, and file).
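Because each storage sub-resource needs its own private endpoint, a small loop over the sub-resource (group ID) names saves repetition. This is only a sketch; the storage account, resource group, virtual network, and subnet names are placeholders.

```bash
# Resource ID of the storage account in the Azure Monitor for SAP solutions managed resource group.
STORAGE_ID=$(az storage account show --name contosoamsstorage --resource-group contoso-ams-managed-rg --query id --output tsv)

# Create one private endpoint per storage sub-resource: blob, file, queue, and table.
for SUBRESOURCE in blob file queue table; do
  az network private-endpoint create \
    --resource-group contoso-monitoring-rg \
    --name "pe-ams-storage-${SUBRESOURCE}" \
    --vnet-name contoso-sap-vnet \
    --subnet pe-subnet \
    --private-connection-resource-id "$STORAGE_ID" \
    --group-id "$SUBRESOURCE" \
    --connection-name "ams-storage-${SUBRESOURCE}-connection"
done
```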
-#### Create log analytics endpoint
+#### Create a Log Analytics endpoint
-It's not possible to create a private endpoint directly for a Log Analytics workspace. To enable a private endpoint for this resource, you can connect the resource to an [Azure Monitor Private Link Scope (AMPLS)](../../azure-monitor/logs/private-link-security.md). Then, you can create a private endpoint for the AMPLS resource.
+It's not possible to create a private endpoint directly for a Log Analytics workspace. To enable a private endpoint for this resource, connect the resource to an [Azure Monitor Private Link Scope (AMPLS)](../../azure-monitor/logs/private-link-security.md). Then, you can create a private endpoint for the AMPLS resource.
-If possible, create the private endpoint before you allow any system to access the Log Analytics workspace through a public endpoint. Otherwise, you'll need to restart the Function App before you can access the Log Analytics workspace through the private endpoint.
+If possible, create the private endpoint before you allow any system to access the Log Analytics workspace through a public endpoint. Otherwise, you must restart the function app before you can access the Log Analytics workspace through the private endpoint.
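As a sketch, the AMPLS and the workspace connection can also be created with the Azure CLI before you wire up the private endpoint. All names are placeholders, and the `az monitor private-link-scope` commands may require a recent CLI version.

```bash
# Create the Azure Monitor Private Link Scope (AMPLS).
az monitor private-link-scope create \
  --resource-group contoso-monitoring-rg \
  --name contoso-ampls

# Connect the Log Analytics workspace to the AMPLS as a scoped resource.
LAW_ID=$(az monitor log-analytics workspace show \
  --resource-group contoso-ams-managed-rg \
  --workspace-name contoso-ams-law \
  --query id --output tsv)

az monitor private-link-scope scoped-resource create \
  --resource-group contoso-monitoring-rg \
  --scope-name contoso-ampls \
  --name contoso-law-scoped \
  --linked-resource "$LAW_ID"
```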
Select a scope for the private endpoint:

1. Go to the Log Analytics workspace in the Azure portal.
1. In the resource menu, under **Settings**, select **Network isolation**.
1. Select **Add** to create a new AMPLS setting.
-1. Select the appropriate scope for the endpoint. Then, select **Apply**.
+1. Select the appropriate scope for the endpoint. Then select **Apply**.
Create the private endpoint:

1. Go to the AMPLS resource in the Azure portal.
-1. In the resource menu, under **Configure**, select **Private Endpoint connections**.
+1. On the resource menu, under **Configure**, select **Private Endpoint connections**.
1. Select **Private Endpoint** to create a new endpoint.
1. On the **Basics** tab, enter or select all required information.
1. On the **Resource** tab, enter or select all required information.
-1. On the **Virtual Network** tab, select the virtual network and the subnet that you created specifically for the endpoint. It's not possible to use the same subnet as the Azure Functions app.
+1. On the **Virtual Network** tab, select the virtual network and the subnet that you created specifically for the endpoint. It's not possible to use the same subnet as the function app.
1. On the **DNS** tab, for **Integrate with private DNS zone**, select **Yes**. If necessary, add tags.
1. Select **Review + create** to create the private endpoint.

Configure the scope:

1. Go to the Log Analytics workspace in the Azure portal.
-1. In the resource's menu, under **Settings**, select **Network Isolation**.
+1. On the resource's menu, under **Settings**, select **Network Isolation**.
1. Under **Virtual networks access configuration**:
 1. Set **Accept data ingestion from public networks not connected through a Private Link Scope** to **No**. This setting disables data ingestion from any system outside the virtual network.
 1. Set **Accept queries from public networks not connected through a Private Link Scope** to **Yes**. This setting allows workbooks to display data.
1. Select **Save**.
-If you enable a private endpoint after any system accessed the Log Analytics workspace through a public endpoint, restart the Function App before moving forward. Otherwise, you can't access the Log Analytics workspace through the private endpoint.
+If you enable a private endpoint after any system accessed the Log Analytics workspace through a public endpoint, restart the function app before you move forward. Otherwise, you can't access the Log Analytics workspace through the private endpoint.
1. Go to the Azure Monitor for SAP solutions resource in the Azure portal.
1. On the **Overview** page, select the name of the **Managed resource group**.
-1. On the managed resource group's page, select the **Function App**.
-1. On the Function App's **Overview** page, select **Restart**.
+1. On the managed resource group's page, select the function app.
+1. On the function app's **Overview** page, select **Restart**.
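If you prefer to restart from the command line, something like the following works, assuming you know the function app's name inside the managed resource group (both names here are placeholders):

```bash
# Restart the Azure Functions app in the Azure Monitor for SAP solutions managed resource group.
az functionapp restart \
  --resource-group contoso-ams-managed-rg \
  --name contoso-ams-functionapp
```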
Find and note important IP address ranges:
Find and note important IP address ranges:
1. Find the IP address range for the key vault and storage account.
 1. Go to the resource group that contains the Azure Monitor for SAP solutions resource in the Azure portal.
 1. On the **Overview** page, note the **Private endpoint** in the resource group.
- 1. In the resource group's menu, under **Settings**, select **DNS configuration**.
+ 1. On the resource group's menu, under **Settings**, select **DNS configuration**.
1. On the **DNS configuration** page, note the **IP addresses** for the private endpoint.
-1. Find the subnet for the log analytics private endpoint.
+1. Find the subnet for the Log Analytics private endpoint.
1. Go to the private endpoint created for the AMPLS resource.
- 2. On the private endpoint's menu, under **Settings**, select **DNS configuration**.
- 3. On the **DNS configuration** page, note the associated IP addresses.
- 4. Go to the Azure Monitor for SAP solutions resource in the Azure portal.
- 5. On the **Overview** page, select the **vNet/subnet** to go to that resource.
- 6. On the virtual network page, select the subnet that you used to create the Azure Monitor for SAP solutions resource.
+ 1. On the private endpoint's menu, under **Settings**, select **DNS configuration**.
+ 1. On the **DNS configuration** page, note the associated IP addresses.
+ 1. Go to the Azure Monitor for SAP solutions resource in the Azure portal.
+ 1. On the **Overview** page, select the **vNet/subnet** to go to that resource.
+ 1. On the virtual network page, select the subnet that you used to create the Azure Monitor for SAP solutions resource.
Add outbound security rules:

1. Go to the NSG resource in the Azure portal.
-1. In the NSG menu, under **Settings**, select **Outbound security rules**.
+1. On the NSG menu, under **Settings**, select **Outbound security rules**.
1. Add the following required security rules.

 | Priority | Description |
 | -- | - |
- | 550 | Allow the source IP for making calls to source system to be monitored. |
- | 600 | Allow the source IP for making calls Azure Resource Manager service tag. |
- | 650 | Allow the source IP to access key-vault resource using private endpoint IP. |
- | 700 | Allow the source IP to access storage-account resources using private endpoint IP. (Include IPs for each of storage account sub resources: table, queue, file, and blob) |
- | 800 | Allow the source IP to access log-analytics workspace resource using private endpoint IP. |
+ | 550 | Allow the source IP for making calls to a source system to be monitored. |
+ | 600 | Allow the source IP for making calls to an Azure Resource Manager service tag. |
+ | 650 | Allow the source IP to access the key vault resource by using a private endpoint IP. |
+ | 700 | Allow the source IP to access storage account resources by using a private endpoint IP. (Include IPs for each of the storage account subresources: table, queue, file, and blob.) |
+ | 800 | Allow the source IP to access a Log Analytics workspace resource by using a private endpoint IP. |
-### DNS Configuration for Private Endpoints
+### DNS configuration for private endpoints
-After creating the private endpoints, you need to configure DNS to resolve the private endpoint IP addresses. You can use either Azure Private DNS or custom DNS servers. Refer to [Configure DNS for private endpoints](../../private-link/private-endpoint-dns.md) for more information.
+After you create the private endpoints, you need to configure DNS to resolve the private endpoint IP addresses. You can use either Azure Private DNS or custom DNS servers. For more information, see [Configure DNS for private endpoints](../../private-link/private-endpoint-dns.md).
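For example, if you use Azure Private DNS, a zone for the Key Vault private link and its virtual network link could be created like this. The zone name is the standard `privatelink.vaultcore.azure.net`; the resource group, link, and virtual network names are placeholders.

```bash
# Create the private DNS zone for Key Vault private endpoints.
az network private-dns zone create \
  --resource-group contoso-monitoring-rg \
  --name privatelink.vaultcore.azure.net

# Link the zone to the virtual network that hosts the SAP and monitoring subnets.
az network private-dns link vnet create \
  --resource-group contoso-monitoring-rg \
  --zone-name privatelink.vaultcore.azure.net \
  --name ams-keyvault-dns-link \
  --virtual-network contoso-sap-vnet \
  --registration-enabled false
```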
## Next steps

-- [Quickstart: set up Azure Monitor for SAP solutions through the Azure portal](quickstart-portal.md)
-- [Quickstart: set up Azure Monitor for SAP solutions with PowerShell](quickstart-powershell.md)
+- [Quickstart: Set up Azure Monitor for SAP solutions through the Azure portal](quickstart-portal.md)
+- [Quickstart: Set up Azure Monitor for SAP solutions with PowerShell](quickstart-powershell.md)
sap Businessobjects Deployment Guide Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/businessobjects-deployment-guide-linux.md
Title: SAP BusinessObjects BI platform deployment on Azure for Linux | Microsoft Docs description: Deploy and configure SAP BusinessObjects BI platform on Azure for Linux
-tags: azure-resource-manager
-keywords: ''
- Previously updated : 10/05/2020 Last updated : 06/15/2023 - # SAP BusinessObjects BI platform deployment guide for Linux on Azure
Here's the product version and file system layout for this example:
| /usr/sap/frsinput | The mount directory is for the shared files across all BOBI hosts that will be used as the input file repository directory. | Business need | bl1adm | sapsys | Azure NetApp Files |
| /usr/sap/frsoutput | The mount directory is for the shared files across all BOBI hosts that will be used as the output file repository directory. | Business need | bl1adm | sapsys | Azure NetApp Files |
+> [!IMPORTANT]
+>
+> While the setup of the SAP BusinessObjects platform is explained using Azure NetApp Files, you could use NFS on Azure Files as the input and output file repository.
+
## Deploy Linux virtual machine via Azure portal

In this section, you create two virtual machines with the Linux operating system image for the SAP BOBI platform. The high-level steps to create the virtual machines are as follows:
In this section, you create two virtual machines with the Linux operating system
- Don't use a single subnet for all Azure services in the SAP BI platform deployment. Based on SAP BI platform architecture, you need to create multiple subnets. In this deployment, you create three subnets: one each for the application, the file repository store, and Application Gateway.
- In Azure, Application Gateway and Azure NetApp Files must always be on a separate subnet. For more information, see [Azure Application Gateway](../../application-gateway/configuration-overview.md) and [Guidelines for Azure NetApp Files network planning](../../azure-netapp-files/azure-netapp-files-network-topologies.md).
-3. Create an availability set. To achieve redundancy for each tier in a multi-instance deployment, place virtual machines for each tier in an availability set. Make sure you separate the availability sets for each tier based on your architecture.
+3. Select the suitable [availability options](./sap-high-availability-architecture-scenarios.md#comparison-of-different-deployment-types-for-sap-workload) for your preferred system configuration within an Azure region: spanning across availability zones, residing within a single zone, or operating in a region without zones.
4. Create virtual machine 1, called **(azusbosl1)**.
The following instructions assume that you've already deployed your [Azure virtu
3. [Delegate a subnet to Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-delegate-subnet.md).
-5. Deploy Azure NetApp Files volumes by following the instructions in [Create an NFS volume for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-create-volumes.md).
+4. Deploy Azure NetApp Files volumes by following the instructions in [Create an NFS volume for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-create-volumes.md).
You can deploy the volumes as NFSv3 and NFSv4.1, because both protocols are supported for the SAP BOBI platform. Deploy the volumes in their respective Azure NetApp Files subnets. The IP addresses of the Azure NetApp Files volumes are assigned automatically.
Keep in mind that the Azure NetApp Files resources and the Azure VMs must be in
As you're creating your Azure NetApp Files for SAP BOBI platform file repository server, be aware of the following considerations:

-- The minimum capacity pool is 4 tebibytes (TiB).
+- The minimum capacity pool is 4 tebibytes (TiB). The capacity pool size can be increased in 1 TiB increments.
- The minimum volume size is 100 gibibytes (GiB).
- Azure NetApp Files and all virtual machines where the Azure NetApp Files volumes will be mounted must be in the same Azure virtual network, or in [peered virtual networks](../../virtual-network/virtual-network-peering-overview.md) in the same region. Azure NetApp Files access over virtual network peering in the same region is supported. Azure NetApp Files access over global peering isn't currently supported.
- The selected virtual network must have a subnet that is delegated to Azure NetApp Files.
+- The throughput and performance characteristics of an Azure NetApp Files volume are a function of the volume quota and service level, as documented in [Service level for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-service-levels.md). While sizing the SAP Azure NetApp Files volumes, make sure that the resulting throughput meets the application requirements.
- With the Azure NetApp Files [export policy](../../azure-netapp-files/azure-netapp-files-configure-export-policy.md), you can control the allowed clients, the access type (for example, read-write or read only).
- The Azure NetApp Files feature isn't zone-aware yet. Currently, the feature isn't deployed in all availability zones in an Azure region. Be aware of the potential latency implications in some Azure regions.
- Azure NetApp Files volumes can be deployed as NFSv3 or NFSv4.1 volumes. Both protocols are supported for the SAP BI platform applications.
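To illustrate these sizing rules, here's a rough Azure CLI sketch that creates a 4-TiB Premium capacity pool and a single NFSv4.1 volume for the input file repository. Every name, the region, the volume quota, and the delegated subnet are placeholders; size the volume quota so that the resulting throughput meets your requirements.

```bash
# Create the Azure NetApp Files account and a 4-TiB Premium capacity pool.
az netappfiles account create \
  --resource-group bobi-rg --name bobianf --location westeurope

az netappfiles pool create \
  --resource-group bobi-rg --account-name bobianf --name bobipool \
  --location westeurope --size 4 --service-level Premium

# Create an NFSv4.1 volume for the input file repository in the delegated subnet.
az netappfiles volume create \
  --resource-group bobi-rg --account-name bobianf --pool-name bobipool \
  --name frsinput --location westeurope --service-level Premium \
  --usage-threshold 500 --file-path frsinput \
  --vnet bobi-vnet --subnet anf-delegated-subnet \
  --protocol-types NFSv4.1
```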
The steps in this section use the following prefix:
```bash sudo mount -a-
+
sudo df -h-
+
Filesystem Size Used Avail Use% Mounted on devtmpfs 7.9G 8.0K 7.9G 1% /dev tmpfs 7.9G 82M 7.8G 2% /run
The steps in this section use the following prefix:
```bash sudo mount -a-
+
sudo df -h-
+
Filesystem Size Used Avail Use% Mounted on devtmpfs 7.9G 8.0K 7.9G 1% /dev tmpfs 7.9G 82M 7.8G 2% /run
In this section, you create a private link that allows SAP BOBI virtual machines
 - Resource: MySQL database created in the previous section
 - Target sub-resource: mysqlServer
7. In the **Networking** section, select the **Virtual network** and **Subnet** on which the SAP BOBI application is deployed.
- >[!NOTE]
- >If you have a network security group (NSG) enabled for the subnet, it will be disabled for private endpoints on this subnet only. Other resources on the subnet will still have NSG enforcement.
+ > [!NOTE]
+ > If you have a network security group (NSG) enabled for the subnet, it will be disabled for private endpoints on this subnet only. Other resources on the subnet will still have NSG enforcement.
8. For **Integrate with private DNS zone**, accept the **default (yes)**.
-9. Select your **private DNS zone** from the dropdown list.
+9. Select your **private DNS zone** from the dropdown list.
10. Select **Review+Create**, and create a private endpoint.

For more information, see [Private Link for Azure Database for MySQL](../../mysql/concepts-data-access-security-private-link.md).
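The same private endpoint could be created with the Azure CLI along these lines. The server, resource group, virtual network, and subnet names are placeholders; `mysqlServer` is the sub-resource (group ID) for Azure Database for MySQL single server.

```bash
# Resource ID of the Azure Database for MySQL server that hosts the CMS and audit databases.
MYSQL_ID=$(az mysql server show --resource-group bobi-rg --name bobicmsdb --query id --output tsv)

# Create the private endpoint in the subnet where the SAP BOBI application is deployed.
az network private-endpoint create \
  --resource-group bobi-rg \
  --name pe-bobi-mysql \
  --vnet-name bobi-vnet \
  --subnet bobi-app-subnet \
  --private-connection-resource-id "$MYSQL_ID" \
  --group-id mysqlServer \
  --connection-name bobi-mysql-connection
```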
For more information, see [Private Link for Azure Database for MySQL](../../mysq
| GRANT USAGE ON *.* TO `cmsadmin`@`%` | | GRANT ALL PRIVILEGES ON `cmsbl1`.* TO `cmsadmin`@`%` WITH GRANT OPTION | ++-
+
USE sys; SHOW GRANTS FOR 'auditadmin'@'%'; +-+
For the SAP BOBI application server to access a database, it requires database c
```bash # This configuration is for bash shell. If you are using any other shell for sidadm, kindly set environment variable accordingly. vi /home/bl1adm/.bashrc-
+
export LD_LIBRARY_PATH=/usr/lib64 ```
The following sections describe how to achieve high availability on each compone
You can achieve high availability for application servers by employing redundancy. To do this, configure multiple instances of BI and web servers in various Azure VMs.
-To reduce the impact of downtime due to one or more events, it's a good idea to:
--- Use availability zones to protect datacenter failures.-- Configure multiple VMs in an availability set for redundancy.-- Use managed disks for VMs in an availability set.-- Configure each application tier into separate availability sets.
+To reduce the impact of downtime due to [planned and unplanned events](./sap-high-availability-architecture-scenarios.md#planned-and-unplanned-maintenance-of-virtual-machines), it's a good idea to follow the [high availability architecture guidance](./sap-high-availability-architecture-scenarios.md).
For more information, see [Manage the availability of Linux virtual machines](../../virtual-machines/availability.md).
->[!Important]
->The concepts of Azure availability zones and Azure availability sets are mutually exclusive. You can deploy a pair or multiple VMs into either a specific availability zone or an availability set, but you can't do both.
+> [!IMPORTANT]
+>
+> - The concepts of Azure availability zones and Azure availability sets are mutually exclusive. You can deploy a pair or multiple VMs into either a specific availability zone or an availability set, but you can't do both.
+> - If you plan to deploy across availability zones, we recommend using a [flexible scale set with FD=1](./virtual-machine-scale-set-sap-deployment-guide.md) instead of a standard availability zone deployment.
### High availability for a CMS database
Filestore refers to the disk directories where contents like reports, universes,
For SAP BOBI platform running on Linux, you can choose [Azure Premium Files](../../storage/files/storage-files-introduction.md) or [Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-introduction.md) for file shares that are designed to be highly available and highly durable in nature. For more information, see [Redundancy](../../storage/files/storage-files-planning.md#redundancy) for Azure Files.
-> [!Important]
-> SMB Protocol for Azure Files is generally available, but NFS Protocol support for Azure Files is currently in preview. For more information, see [NFS 4.1 support for Azure Files is now in preview](https://azure.microsoft.com/blog/nfs-41-support-for-azure-files-is-now-in-preview/).
- Note that this file share service isn't available in all regions. See [Products available by region](https://azure.microsoft.com/global-infrastructure/services/) to find up-to-date information. If the service isn't available in your region, you can create an NFS server from which you can share the file system to the SAP BOBI application. But you'll also need to consider its high availability.

### High availability for Load Balancer
To distribute traffic across a web server, you can either use Azure Load Balance
- For Azure Load Balancer, redundancy can be achieved by configuring Standard Load Balancer as zone-redundant. For more information, see [Standard Load Balancer and Availability Zones](../../load-balancer/load-balancer-standard-availability-zones.md).
- For Application Gateway, high availability can be achieved based on the type of tier selected during deployment.
- - v1 SKU supports high-availability scenarios when you've deployed two or more instances. Azure distributes these instances across update and fault domains to ensure that instances don't all fail at the same time. You achieve redundancy within the zone.
- - v2 SKU automatically ensures that new instances are spread across fault domains and update domains. If you choose zone redundancy, the newest instances are also spread across availability zones to offer zonal failure resiliency. For more details, see [Autoscaling and Zone-redundant Application Gateway v2](../../application-gateway/application-gateway-autoscaling-zone-redundant.md).
+ - v1 SKU supports high-availability scenarios when you've deployed two or more instances. Azure distributes these instances across update and fault domains to ensure that instances don't all fail at the same time. You achieve redundancy within the zone.
+ - v2 SKU automatically ensures that new instances are spread across fault domains and update domains. If you choose zone redundancy, the newest instances are also spread across availability zones to offer zonal failure resiliency. For more details, see [Autoscaling and Zone-redundant Application Gateway v2](../../application-gateway/application-gateway-autoscaling-zone-redundant.md).
### Reference high availability architecture for SAP BOBI platform
-The following diagram shows the setup of SAP BOBI platform when you're using an availability set running on Linux server. The architecture showcases the use of different services, like Azure Application Gateway, Azure NetApp Files, and Azure Database for MySQL. These services offer built-in redundancy, which reduces the complexity of managing different high availability solutions.
+The following diagram shows the setup of SAP BOBI platform running on Linux server. The architecture showcases the use of different services, like Azure Application Gateway, Azure NetApp Files, and Azure Database for MySQL. These services offer built-in redundancy, which reduces the complexity of managing different high availability solutions.
-Notice that the incoming traffic (HTTPS - TCP/443) is load-balanced by using Azure Application Gateway v1 SKU, which is highly available when deployed on two or more instances. Multiple instances of the web server, management servers, and processing servers are deployed in separate VMs to achieve redundancy, and each tier is deployed in separate availability sets. Azure NetApp Files has built-in redundancy within the datacenter, so your Azure NetApp Files volumes for the file repository server will be highly available. The CMS database is provisioned on Azure Database for MySQL, which has inherent high availability. For more information, see [High availability in Azure Database for MySQL](../../mysql/concepts-high-availability.md).
+Notice that the incoming traffic (HTTPS) is load-balanced by using Azure Application Gateway v1/v2 SKU, which is highly available when deployed on two or more instances. Multiple instances of the web server, management servers, and processing servers are deployed in separate VMs to achieve redundancy. Azure NetApp Files has built-in redundancy within the datacenter, so your Azure NetApp Files volumes for the file repository server will be highly available. The CMS database is provisioned on Azure Database for MySQL, which has inherent high availability. For more information, see [High availability in Azure Database for MySQL](../../mysql/concepts-high-availability.md).
![Diagram that shows SAP BusinessObjects BI platform redundancy with availability sets.](media/businessobjects-deployment-guide/businessobjects-deployment-high-availability.png)
This section explains the strategy to provide disaster recovery protection for a
This guide focuses on the second option. It won't cover all possible configuration options for disaster recovery, but does cover a solution that features native Azure services in combination with an SAP BOBI platform configuration.
->[!Important]
->The availability of each component in the SAP BOBI platform should be factored in the secondary region, and you must thoroughly test the entire disaster recovery strategy.
+> [!IMPORTANT]
+>
+> - The availability of each component in the SAP BOBI platform should be factored in the secondary region, and you must thoroughly test the entire disaster recovery strategy.
+> - If your SAP BI platform is configured with a [flexible scale set](./virtual-machine-scale-set-sap-deployment-guide.md) with FD=1, you need to use [PowerShell](../../site-recovery/azure-to-azure-powershell.md) to set up Azure Site Recovery for disaster recovery. Currently, it's the only method available to configure disaster recovery for VMs deployed in a scale set.
### Reference disaster recovery architecture for SAP BOBI platform
sap Businessobjects Deployment Guide Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/businessobjects-deployment-guide-windows.md
documentationcenter: saponazure
-tags: azure-resource-manager
-keywords: ''
Previously updated : 04/08/2021 Last updated : 06/16/2023
In this section, we'll create two VMs with a Windows operating system (OS) image
- In Azure, Application Gateway must be on a separate subnet. For more information, see [Application Gateway configuration overview](../../application-gateway/configuration-overview.md).
- If you're using Azure NetApp Files for a file store instead of Azure Files, create a separate subnet for Azure NetApp Files. For more information, see [Guidelines for Azure NetApp Files network planning](../../azure-netapp-files/azure-netapp-files-network-topologies.md).
-1. Create an availability set:
-
- - To achieve redundancy for each tier in a multi-instance deployment, place VMs for each tier in an availability set. Make sure you separate the availability sets for each tier based on your architecture.
+1. Select the suitable [availability options](./sap-high-availability-architecture-scenarios.md#comparison-of-different-deployment-types-for-sap-workload) for your preferred system configuration within an Azure region: spanning across availability zones, residing within a single zone, or operating in a region without zones.
1. Create virtual machine 1 (azuswinboap1):
If you need to access the storage account from a different virtual network, you
1. For **Replication** label, choose a redundancy level. Select **Locally redundant storage (LRS)**.
- For Premium FileStorage, LRS and ZRS are the only options available. Based on your deployment strategy (availability set or availability zone), choose the appropriate redundancy level. For more information, see [Azure Storage redundancy](../../storage/common/storage-redundancy.md).
+ For Premium FileStorage, ZRS and LRS are the only options available. Based on your VM deployment strategy (flexible scale set, availability zone or availability set), choose the appropriate redundancy level. For more information, see [Azure Storage redundancy](../../storage/common/storage-redundancy.md).
1. Select **Next**.
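As a sketch, an equivalent Premium FileStorage account and SMB file share could be created with the Azure CLI. The account and share names and the region are placeholders; pick `Premium_ZRS` or `Premium_LRS` to match your VM deployment.

```bash
# Create a Premium FileStorage account for the BI filestore.
az storage account create \
  --resource-group bobi-win-rg \
  --name bobiwinfilestore \
  --location eastus2 \
  --kind FileStorage \
  --sku Premium_ZRS

# Create an SMB file share with a 512-GiB quota for the file repository.
az storage share-rm create \
  --resource-group bobi-win-rg \
  --storage-account bobiwinfilestore \
  --name frsinput \
  --quota 512 \
  --enabled-protocols SMB
```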
Similarly, you can create the audit database. For example, enter **boaudit**.
### Download and install an ODBC driver
-SAP BOBI application servers require database client/drivers to access the CMS or audit database. A Microsoft ODBC driver is used to access CMS and audit databases running on SQL Database. This section provides instructions on how to download and set up an ODBC driver on Windows.
+SAP BOBI application servers require database client/drivers to access the CMS or audit database. A Microsoft ODBC driver is used to access CMS and audit databases running on SQL Database. This section provides instructions on how to download and set up an ODBC driver on Windows.
1. See the **CMS + Audit repository support by OS** section in the [Product Availability Matrix (PAM) for SAP BusinessObjects BI platform](https://support.sap.com/pam) to find out the database connectors that are compatible with SQL Database.
1. Download the ODBC driver version from the [link](/sql/connect/odbc/windows/release-notes-odbc-sql-server-windows?preserve-view=true&view=sql-server-ver15). In this example, we're downloading ODBC driver [13.1](/sql/connect/odbc/windows/release-notes-odbc-sql-server-windows?preserve-view=true&view=sql-server-ver15#131).
1. Install the ODBC driver on all BI servers (azuswinboap1 and azuswinboap2).
1. After you install the driver in **azuswinboap1**, go to **Start** > **Windows Administrative Tools** > **ODBC Data Sources (64-bit)**.
-1. Go to the **System DSN** tab.
+1. Go to the **System DSN** tab.
1. Select **Add** to create a connection to the CMS database.
1. Select **ODBC Driver 13 for SQL Server**, and select **Finish**.
1. Enter the information of your CMS database like the following, and select **Next**:
SAP BOBI application servers require database client/drivers to access the CMS o
>[!Note]
>SQL Database communicates over port 1433. Outbound traffic over port 1433 should be allowed from your SAP BOBI application servers.
-Repeat the preceding steps to create a connection for the audit database on the server azuswinboap1. Similarly, install and configure both ODBC data sources (bocms and boaudit) on all BI application servers (azuswinboap2).
+Repeat the preceding steps to create a connection for the audit database on the server azuswinboap1. Similarly, install and configure both ODBC data sources (bocms and boaudit) on all BI application servers (azuswinboap2).
## Server preparation
Go to the media of the SAP BusinessObjects BI platform, and run `setup.exe`.
Follow the instructions in the [SAP Business Intelligence Platform Installation Guide for Windows](https://help.sap.com/viewer/df8899896b364f6c880112f52e4d06c8/4.3.1/en-US/46ae62456e041014910aba7db0e91070.html) that are specific to your version. Here are a few points to note while you install the SAP BOBI platform on Windows:

-- On the **Configure Destination Folder** screen, provide the destination folder where you want to install the BI platform. For example, enter **F:\SAP BusinessObjects\***.
+- On the **Configure Destination Folder** screen, provide the destination folder where you want to install the BI platform. For example, enter **F:\SAP BusinessObjects\***.
- On the **Configure Product Registration** screen, you can either use a temporary license key for SAP BusinessObjects Solutions from SAP Note [1288121](https://launchpad.support.sap.com/#/notes/1288121) or generate a license key in SAP Service Marketplace.
- On the **Select Install Type** screen, select **Full** installation on the first server (azuswinboap1). For the other server (azuswinboap2), select **Custom / Expand**, which expands the existing SAP BOBI setup.
- On the **Select Default or Existing Database** screen, select **configure an existing database**, which prompts you to select the CMS and the audit database. Select **Microsoft SQL Server using ODBC** for the **CMS Database** type and the **Audit Database** type.
After a multi-instance installation of the SAP BOBI platform, more post-configur
### Configure a cluster name
-In a multi-instance deployment of the SAP BOBI platform, you want to run several CMS servers together in a cluster. A cluster consists of two or more CMS servers working together against a common CMS system database. If a node that's running on CMS fails, a node with another CMS will continue to service BI platform requests. By default in an SAP BOBI platform, a cluster name reflects the hostname of the first CMS that you install.
+In a multi-instance deployment of the SAP BOBI platform, you want to run several CMS servers together in a cluster. A cluster consists of two or more CMS servers working together against a common CMS system database. If a node that's running on CMS fails, a node with another CMS will continue to service BI platform requests. By default in an SAP BOBI platform, a cluster name reflects the hostname of the first CMS that you install.
-To configure the cluster name on Windows, follow the instructions in the [SAP Business Intelligence Platform Administrator Guide](https://help.sap.com/viewer/2e167338c1b24da9b2a94e68efd79c42/4.3.1/en-US). After you configure the cluster name, follow SAP Note [1660440](https://launchpad.support.sap.com/#/notes/1660440) to set the default system entry on the CMC or BI Launchpad sign-in page.
+To configure the cluster name on Windows, follow the instructions in the [SAP Business Intelligence Platform Administrator Guide](https://help.sap.com/viewer/2e167338c1b24da9b2a94e68efd79c42/4.3.1/en-US). After you configure the cluster name, follow SAP Note [1660440](https://launchpad.support.sap.com/#/notes/1660440) to set the default system entry on the CMC or BI Launchpad sign-in page.
### Configure the input and output filestore location to Azure Premium Files
Filestore refers to the disk directories where the actual SAP BusinessObjects BI
1. If not created, follow the instructions provided in the preceding section, "Provision Azure Premium Files," to create and mount Azure Premium Files.

 > [!Tip]
- > Choose the storage redundancy for Azure Premium Files (LRS or ZRS) based on your VM's deployment (availability set or availability zone).
+ > Choose the storage redundancy for Azure Premium Files (ZRS or LRS) based on whether the virtual machine is deployed zonally or regionally.
1. Follow SAP Note [2512660](https://launchpad.support.sap.com/#/notes/0002512660) to change the path of the file repository (Input and Output).
In SAP Note [2808640](https://launchpad.support.sap.com/#/notes/2808640), steps
In an SAP BOBI multi-instance deployment, Java web application servers (web tier) are running on two or more hosts. To distribute the user load evenly across web servers, you can use a load balancer between end users and web servers. You can use Azure Load Balancer or Application Gateway to manage traffic to your web application servers. The offerings are explained in the following sections:
-* [Load Balancer](../../load-balancer/load-balancer-overview.md) is a high-performance, low-latency, layer 4 (TCP, UDP) load balancer that distributes traffic among healthy VMs. A load balancer health probe monitors a given port on each VM and only distributes traffic to operational VMs. You can choose either a public load balancer or an internal load balancer depending on whether you want the SAP BI platform accessible from the internet or not. It's zone redundant, which ensures high availability across availability zones.
+- [Load Balancer](../../load-balancer/load-balancer-overview.md) is a high-performance, low-latency, layer 4 (TCP, UDP) load balancer that distributes traffic among healthy VMs. A load balancer health probe monitors a given port on each VM and only distributes traffic to operational VMs. You can choose either a public load balancer or an internal load balancer depending on whether you want the SAP BI platform accessible from the internet or not. It's zone redundant, which ensures high availability across availability zones.
In the following figure, see the "Internal Load Balancer" section where the web application server runs on port 8080 (default Tomcat HTTP port), which will be monitored by a health probe. Any incoming request that comes from users will get redirected to the web application servers (azuswinboap1 or azuswinboap2) in the back-end pool. Load Balancer doesn't support TLS/SSL termination, which is also known as TLS/SSL offloading. If you're using Load Balancer to distribute traffic across web servers, we recommend using Standard Load Balancer.
In an SAP BOBI multi-instance deployment, Java web application servers (web tier
![Screenshot that shows Load Balancer used to balance traffic across web servers.](media/businessobjects-deployment-guide/businessobjects-deployment-windows-load-balancer.png)
-* [Application Gateway](../../application-gateway/overview.md) provides an application delivery controller as a service, which is used to help applications direct user traffic to one or more web application servers. It offers various layer 7 load-balancing capabilities like TLS/SSL offloading, Web Application Firewall, and cookie-based session affinity for your applications.
+- [Application Gateway](../../application-gateway/overview.md) provides an application delivery controller as a service, which is used to help applications direct user traffic to one or more web application servers. It offers various layer 7 load-balancing capabilities like TLS/SSL offloading, Web Application Firewall, and cookie-based session affinity for your applications.
In an SAP BI platform, Application Gateway directs application web traffic to the specified resources in a back-end pool. In this case, it's either azuswinboap1 or azuswinboap2. You assign a listener to the port, create rules, and add resources to a back-end pool. In the following figure, Application Gateway with a private front-end IP address (10.31.3.25) acts as an entry point for users, handles incoming TLS/SSL (HTTPS - TCP/443) connections, decrypts the TLS/SSL, and passes the request (HTTP - TCP/8080) to the servers in the back-end pool. With the built-in TLS/SSL termination feature, you need to maintain only one TLS/SSL certificate on the application gateway, which simplifies operations.
As part of the backup process, a snapshot is taken. The data is transferred to t
Based on your deployment, filestore of an SAP BOBI platform can be on Azure NetApp Files or Azure Files. Choose from the following options for backup and restore based on the storage you use for filestore:
-* **Azure NetApp Files**: For Azure NetApp Files, you can create on-demand snapshots and schedule an automatic snapshot by using snapshot policies. Snapshot copies provide a point-in-time copy of your Azure NetApp Files volume. For more information, see [Manage snapshots by using Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-manage-snapshots.md).
-* **Azure Files**: Azure Files backup is integrated with a native instance of [Backup](../../backup/backup-overview.md), which centralizes the backup and restore function along with VM backup and simplifies operation work. For more information, see [Azure file share backup](../../backup/azure-file-share-backup-overview.md) and [FAQs: Back up Azure Files](../../backup/backup-azure-files-faq.yml).
+- **Azure NetApp Files**: For Azure NetApp Files, you can create on-demand snapshots and schedule an automatic snapshot by using snapshot policies. Snapshot copies provide a point-in-time copy of your Azure NetApp Files volume. For more information, see [Manage snapshots by using Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-manage-snapshots.md).
+- **Azure Files**: Azure Files backup is integrated with a native instance of [Backup](../../backup/backup-overview.md), which centralizes the backup and restore function along with VM backup and simplifies operation work. For more information, see [Azure file share backup](../../backup/azure-file-share-backup-overview.md) and [FAQs: Back up Azure Files](../../backup/backup-azure-files-faq.yml).
If you've created a separate NFS server, make sure you implement the backup and restore strategy for the same.
If you've created a separate NFS server, make sure you implement the backup and
For an SAP BOBI platform running on Windows VMs, the CMS and audit database can run on any of the supported databases as described in the [support matrix](businessobjects-deployment-guide.md#support-matrix) of the SAP BOBI platform planning and implementation guide on Azure. So it's important that you adopt the backup and restore strategy based on the database you used for CMS and audit data storage.
-* **SQL Database** uses SQL Server technology to create [full backups](/sql/relational-databases/backup-restore/full-database-backups-sql-server?preserve-view=true&view=sql-server-ver15) every week, [differential backups](/sql/relational-databases/backup-restore/differential-backups-sql-server?preserve-view=true&view=sql-server-ver15) every 12 to 24 hours, and [transaction log](/sql/relational-databases/backup-restore/transaction-log-backups-sql-server?preserve-view=true&view=sql-server-ver15) backups every 5 to 10 minutes. The frequency of transaction log backups is based on the compute size and the amount of database activity.
-
+- **SQL Database** uses SQL Server technology to create [full backups](/sql/relational-databases/backup-restore/full-database-backups-sql-server?preserve-view=true&view=sql-server-ver15) every week, [differential backups](/sql/relational-databases/backup-restore/differential-backups-sql-server?preserve-view=true&view=sql-server-ver15) every 12 to 24 hours, and [transaction log](/sql/relational-databases/backup-restore/transaction-log-backups-sql-server?preserve-view=true&view=sql-server-ver15) backups every 5 to 10 minutes. The frequency of transaction log backups is based on the compute size and the amount of database activity.
+ Users can choose an option to configure backup storage redundancy between LRS, ZRS, or GRS blobs. Storage redundancy mechanisms store multiple copies of your data to protect it from planned and unplanned events, which includes transient hardware failure, network or power outages, or massive natural disasters. By default, SQL Database stores backup in [GRS blobs](../../storage/common/storage-redundancy.md) that are replicated to a [paired region](../../availability-zones/cross-region-replication-azure.md). It can be changed based on the business requirement to either LRS or ZRS blobs. For more up-to-date information on SQL Database backup scheduling, retention, and storage consumption, see [Automated backups: Azure SQL Database and Azure SQL Managed Instance](/azure/azure-sql/database/automated-backups-overview).
-* **Azure Database for MySQL** automatically creates server backups and stores in user-configured LRS or GRS. Azure Database for MySQL takes backups of the data files and the transaction log. Depending on the supported maximum storage size, it either takes full and differential backups (4-TB max storage servers) or snapshot backups (up to 16-TB max storage servers). These backups allow you to restore a server at any point in time within your configured backup retention period. The default backup retention period is 7 days, which you can [optionally configure](../../mysql/howto-restore-server-portal.md#set-backup-configuration) up to 35 days. All backups are encrypted by using AES 256-bit encryption. These backup files aren't user exposed and can't be exported. These backups can only be used for restore operations in Azure Database for MySQL. You can use [mysqldump](../../mysql/concepts-migrate-dump-restore.md) to copy a database. For more information, see [Backup and restore in Azure Database for MySQL](../../mysql/concepts-backup.md).
+- **Azure Database for MySQL** automatically creates server backups and stores in user-configured LRS or GRS. Azure Database for MySQL takes backups of the data files and the transaction log. Depending on the supported maximum storage size, it either takes full and differential backups (4-TB max storage servers) or snapshot backups (up to 16-TB max storage servers). These backups allow you to restore a server at any point in time within your configured backup retention period. The default backup retention period is 7 days, which you can [optionally configure](../../mysql/howto-restore-server-portal.md#set-backup-configuration) up to 35 days. All backups are encrypted by using AES 256-bit encryption. These backup files aren't user exposed and can't be exported. These backups can only be used for restore operations in Azure Database for MySQL. You can use [mysqldump](../../mysql/concepts-migrate-dump-restore.md) to copy a database. For more information, see [Backup and restore in Azure Database for MySQL](../../mysql/concepts-backup.md).
-* **For a database installed on an Azure VM**, you can use standard backup tools or [Backup](../../backup/sap-hana-db-about.md) for supported databases. Also, if the Azure services and tools don't meet your requirements, you can use supported third-party backup tools that provide an agent for backup and recovery of all SAP BOBI platform components.
+- **For a database installed on an Azure VM**, you can use standard backup tools or [Backup](../../backup/sap-hana-db-about.md) for supported databases. Also, if the Azure services and tools don't meet your requirements, you can use supported third-party backup tools that provide an agent for backup and recovery of all SAP BOBI platform components.
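The list above mentions configurable backup storage redundancy for SQL Database and a configurable retention period for Azure Database for MySQL. The following Azure CLI sketch shows one way to set both; the resource group, server, and database names are placeholders, and the commands assume the Azure SQL Database and Azure Database for MySQL single server CLI surfaces described in the linked articles.

```azurecli
# Hypothetical names; adjust to your environment.
RG=bobi-rg
SQL_SERVER=bobi-sqlsrv
MYSQL_SERVER=bobi-mysqlsrv

# Switch the CMS database backups from the default GRS to zone-redundant (ZRS) storage.
# The change applies to future backups only.
az sql db update \
  --resource-group $RG \
  --server $SQL_SERVER \
  --name CMS \
  --backup-storage-redundancy Zone

# Create an Azure Database for MySQL server with 35-day backup retention and
# geo-redundant backup storage.
az mysql server create \
  --resource-group $RG \
  --name $MYSQL_SERVER \
  --location westeurope \
  --admin-user bobiadmin \
  --admin-password '<secure-password>' \
  --sku-name GP_Gen5_4 \
  --backup-retention 35 \
  --geo-redundant-backup Enabled
```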
## High availability
The following section describes how to achieve high availability on each compone
### High availability for application servers
-BI and web application servers don't need a specific high-availability solution, no matter whether they're installed separately or together. You can achieve high availability by redundancy, that is, by configuring multiple instances of BI and web servers in various Azure VMs. You can deploy the VMs in either [availability sets](sap-high-availability-architecture-scenarios.md#multiple-instances-of-virtual-machines-in-the-same-availability-set) or [availability zones](sap-high-availability-architecture-scenarios.md#azure-availability-zones) based on business-required RTO. For deployment across availability zones, make sure all other components in the SAP BOBI platform are designed to be zone redundant too.
+BI and web application servers don't need a specific high-availability solution, no matter whether they're installed separately or together. You can achieve high availability by redundancy, that is, by configuring multiple instances of BI and web servers in various Azure VMs. You can deploy the VMs in a [flexible scale set](./sap-high-availability-architecture-scenarios.md#virtual-machine-scale-set-with-flexible-orchestration), [availability sets](sap-high-availability-architecture-scenarios.md#multiple-instances-of-virtual-machines-in-the-same-availability-set), or [availability zones](sap-high-availability-architecture-scenarios.md#azure-availability-zones) based on the business-required RTO. For deployment across availability zones, make sure all other components in the SAP BOBI platform are designed to be zone redundant too.
Currently, not all Azure regions offer availability zones, so you need to adopt the deployment strategy based on your region. The Azure regions that offer zones are listed in [Azure availability zones](../../availability-zones/az-overview.md).
-> [!Important]
-> The concepts of Azure availability zones and Azure availability sets are mutually exclusive. That means you can either deploy a pair or multiple VMs into a specific availability zone or an Azure availability set, but not both.
+> [!IMPORTANT]
+>
+> - The concepts of Azure availability zones and Azure availability sets are mutually exclusive. You can deploy a pair or multiple VMs into either a specific availability zone or an availability set, but you can't do both.
+> - If you're planning to deploy across availability zones, we recommend that you use a [flexible scale set with FD=1](./virtual-machine-scale-set-sap-deployment-guide.md) instead of the standard availability zone deployment.
### High availability for the CMS database
-If you're using an Azure database as a solution for your CMS and audit database, a locally redundant high-availability framework is provided by default. Select the region and service inherent high-availability, redundancy, and resiliency capabilities without requiring you to configure any more components. If the deployment strategy for an SAP BOBI platform is across an availability zone, make sure you achieve zone redundancy for your CMS and audit database. For more information on high availability for supported database offerings in Azure, see [High availability for Azure SQL Database](/azure/azure-sql/database/high-availability-sla) and [High availability in Azure Database for MySQL](../../mysql/concepts-high-availability.md).
+If you're using an Azure database as a solution for your CMS and audit database, a locally redundant high-availability framework is provided by default. The selected region and service provide inherent high-availability, redundancy, and resiliency capabilities without requiring you to configure any more components. If the deployment strategy for an SAP BOBI platform is across availability zones, make sure you achieve zone redundancy for your CMS and audit database. For more information on high availability for supported database offerings in Azure, see [High availability for Azure SQL Database](/azure/azure-sql/database/high-availability-sla) and [High availability in Azure Database for MySQL](../../mysql/concepts-high-availability.md).
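If the CMS and audit databases run on Azure SQL Database and the deployment spans availability zones, zone redundancy can also be enabled on the databases themselves. A minimal sketch, assuming hypothetical resource names and a service tier that supports zone redundancy:

```azurecli
# Enable zone redundancy on the CMS and audit databases.
for DB in CMS Audit; do
  az sql db update \
    --resource-group bobi-rg \
    --server bobi-sqlsrv \
    --name $DB \
    --zone-redundant true
done
```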
For other database management system (DBMS) deployment for a CMS database, see [DBMS deployment guides for SAP workload](dbms-guide-general.md) for insight on a different DBMS deployment and its approach to achieving high availability.
Because the file share service isn't available in all regions, make sure you see
To distribute traffic across a web server, you can use Load Balancer or Application Gateway. The redundancy for either of the load balancers can be achieved based on the SKU you choose for deployment:
-* **Load Balancer**: Redundancy can be achieved by configuring the Standard Load Balancer front end as zone redundant. For more information, see [Standard Load Balancer and availability zones](../../load-balancer/load-balancer-standard-availability-zones.md).
-* **Application Gateway**: High availability can be achieved based on the type of tier selected during deployment:
- * The v1 SKU supports high-availability scenarios when you've deployed two or more instances. Azure distributes these instances across update and fault domains to ensure that instances don't all fail at the same time. With this SKU, redundancy can be achieved within the zone.
- * The v2 SKU automatically ensures that new instances are spread across fault domains and update domains. If you choose zone redundancy, the newest instances are also spread across availability zones to offer zonal failure resiliency. For more information, see [Autoscaling and zone-redundant Application Gateway v2](../../application-gateway/application-gateway-autoscaling-zone-redundant.md).
+- **Load Balancer**: Redundancy can be achieved by configuring the Standard Load Balancer front end as zone redundant (a CLI sketch follows this list). For more information, see [Standard Load Balancer and availability zones](../../load-balancer/load-balancer-standard-availability-zones.md).
+- **Application Gateway**: High availability can be achieved based on the type of tier selected during deployment:
+ - The v1 SKU supports high-availability scenarios when you've deployed two or more instances. Azure distributes these instances across update and fault domains to ensure that instances don't all fail at the same time. With this SKU, redundancy can be achieved within the zone.
+ - The v2 SKU automatically ensures that new instances are spread across fault domains and update domains. If you choose zone redundancy, the newest instances are also spread across availability zones to offer zonal failure resiliency. For more information, see [Autoscaling and zone-redundant Application Gateway v2](../../application-gateway/application-gateway-autoscaling-zone-redundant.md).
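The following sketch illustrates the zone-redundant Load Balancer option from the list above. The resource names are placeholders; the internal front end is made zone redundant by spanning zones 1, 2, and 3.

```azurecli
# Create an internal Standard Load Balancer for the BI web tier.
az network lb create \
  --resource-group bobi-rg \
  --name bobi-web-ilb \
  --sku Standard \
  --vnet-name bobi-vnet \
  --subnet web-subnet \
  --backend-pool-name web-be

# Add a zone-redundant front-end IP configuration that spans zones 1, 2, and 3.
az network lb frontend-ip create \
  --resource-group bobi-rg \
  --lb-name bobi-web-ilb \
  --name web-fe-zr \
  --vnet-name bobi-vnet \
  --subnet web-subnet \
  --zone 1 2 3
```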
### Reference high-availability architecture for the SAP BusinessObjects BI platform
If availability zones aren't available in your selected region, you can deploy A
This section explains the strategy to provide DR protection for an SAP BOBI platform. It complements the [Disaster recovery for SAP](../../site-recovery/site-recovery-sap.md) document, which represents the primary resources for an overall SAP DR approach. For the SAP BOBI platform, see SAP Note [2056228](https://launchpad.support.sap.com/#/notes/2056228), which describes the following methods to implement a DR environment safely:
- * Fully or selectively use Lifecycle Management or federation to promote or distribute the content from the primary system.
- * Periodically copy over the CMS and FRS contents.
+- Fully or selectively use Lifecycle Management or federation to promote or distribute the content from the primary system.
+- Periodically copy over the CMS and FRS contents.
In this guide, we'll talk about the second option to implement a DR environment. We won't cover an exhaustive list of all possible configuration options for DR. We'll cover a solution that features native Azure services in combination with SAP BOBI platform configuration.
->[!Important]
->Availability of each component in the SAP BOBI platform should be factored in to the secondary region. The entire DR strategy must be thoroughly tested.
+> [!IMPORTANT]
+>
+> - Availability of each component in the SAP BOBI platform should be factored in to the secondary region. The entire DR strategy must be thoroughly tested.
+> - If your SAP BI platform is configured with a [flexible scale set](./virtual-machine-scale-set-sap-deployment-guide.md) with FD=1, you need to use [PowerShell](../../site-recovery/azure-to-azure-powershell.md) to set up Azure Site Recovery for disaster recovery. Currently, it's the only method available to configure disaster recovery for VMs deployed in a scale set.
### Reference DR architecture for an SAP BusinessObjects BI platform
Option 1: [Geo-redundant database backup restore](/azure/azure-sql/database/reco
By default, SQL Database stores data in [GRS blobs](../../storage/common/storage-redundancy.md) that are replicated to a [paired region](../../availability-zones/cross-region-replication-azure.md). For a SQL database, the backup storage redundancy can be configured at the time of CMS and audit database creation, or it can be updated for an existing database. The changes made to an existing database apply to future backups only. You can restore a database on any SQL database in any Azure region from the most recent geo-replicated backups. Geo-restore uses a geo-replicated backup as its source. There's a delay between when a backup is taken and when it's geo-replicated to an Azure blob in a different region. As a result, the restored database can be up to one hour behind the original database.
- >[!Important]
- >Geo-restore is available for SQL databases configured with geo-redundant [backup storage](/azure/azure-sql/database/automated-backups-overview#backup-storage-redundancy).
+ > [!IMPORTANT]
+ > Geo-restore is available for SQL databases configured with geo-redundant [backup storage](/azure/azure-sql/database/automated-backups-overview#backup-storage-redundancy).
-Option 2: [Geo-replication](/azure/azure-sql/database/active-geo-replication-overview) or an [autofailover group](/azure/azure-sql/database/auto-failover-group-overview)
+Option 2: [Geo-replication](/azure/azure-sql/database/active-geo-replication-overview) or an [auto-failover group](/azure/azure-sql/database/auto-failover-group-overview)
- Geo-replication is a SQL Database feature that allows you to create readable secondary databases of individual databases on a server in the same or different region. If geo-replication is enabled for the CMS and audit database, the application can initiate failover to a secondary database in a different Azure region. Geo-replication is enabled for individual databases, but to enable transparent and coordinated failover of multiple databases (CMS and audit) for an SAP BOBI application, it's advisable to use an autofailover group. It provides the group semantics on top of active geo-replication, which means the entire SQL server (all databases) is replicated to another region instead of individual databases. Check the capabilities table that [compares geo-replication with failover groups](/azure/azure-sql/database/business-continuity-high-availability-disaster-recover-hadr-overview#compare-geo-replication-with-failover-groups).
+ Geo-replication is a SQL Database feature that allows you to create readable secondary databases of individual databases on a server in the same or different region. If geo-replication is enabled for the CMS and audit database, the application can initiate failover to a secondary database in a different Azure region. Geo-replication is enabled for individual databases, but to enable transparent and coordinated failover of multiple databases (CMS and audit) for an SAP BOBI application, it's advisable to use an auto-failover group. It provides the group semantics on top of active geo-replication, which means the entire SQL server (all databases) is replicated to another region instead of individual databases. Check the capabilities table that [compares geo-replication with failover groups](/azure/azure-sql/database/business-continuity-high-availability-disaster-recover-hadr-overview#compare-geo-replication-with-failover-groups).
- Autofailover groups provide read/write and read-only listener endpoints that remain unchanged during failover. The read/write endpoint can be maintained as a listener in the ODBC connection entry for the CMS and audit database. So whether you use manual or automatic failover activation, failover switches all secondary databases in the group to primary. After the database failover is completed, the DNS record is automatically updated to redirect the endpoints to the new region. The application is automatically connected to the CMS database as the read/write endpoint is maintained as a listener in the ODBC connection.
+ Auto-failover groups provide read/write and read-only listener endpoints that remain unchanged during failover. The read/write endpoint can be maintained as a listener in the ODBC connection entry for the CMS and audit database. So whether you use manual or automatic failover activation, failover switches all secondary databases in the group to primary. After the database failover is completed, the DNS record is automatically updated to redirect the endpoints to the new region. The application is automatically connected to the CMS database as the read/write endpoint is maintained as a listener in the ODBC connection.
- In the following image, an autofailover group for the SQL server (azussqlbodb) running on the East US 2 region is replicated to the East US secondary region (DR site). The read/write listener endpoint is maintained as a listener in an ODBC connection for the BI application server running on Windows. After failover, the endpoint will remain the same. No manual intervention is required to connect the BI application to the SQL database on the secondary region.
+ In the following image, an auto-failover group for the SQL server (azussqlbodb) running on the East US 2 region is replicated to the East US secondary region (DR site). The read/write listener endpoint is maintained as a listener in an ODBC connection for the BI application server running on Windows. After failover, the endpoint will remain the same. No manual intervention is required to connect the BI application to the SQL database on the secondary region.
- ![Screenshot that shows SQL Database autofailover groups.](media\businessobjects-deployment-guide\businessobjects-deployment-windows-sql-failover-group.png)
+ ![Screenshot that shows SQL Database auto-failover groups.](media\businessobjects-deployment-guide\businessobjects-deployment-windows-sql-failover-group.png)
- This option provides a lower RTO and RPO than option 1. For more information about this option, see [Use autofailover groups to enable transparent and coordinated failover of multiple databases](/azure/azure-sql/database/auto-failover-group-overview).
+ This option provides a lower RTO and RPO than option 1. For more information about this option, see [Use auto-failover groups to enable transparent and coordinated failover of multiple databases](/azure/azure-sql/database/auto-failover-group-overview).
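A minimal Azure CLI sketch of option 2, assuming the primary server `azussqlbodb` from the example above plus a hypothetical secondary server, resource group, and database names:

```azurecli
# Create a secondary logical server in the DR region, then add the CMS and audit
# databases to an auto-failover group with automatic failover.
az sql server create \
  --resource-group bobi-rg \
  --name azussqlbodb-dr \
  --location eastus \
  --admin-user sqladmin \
  --admin-password '<secure-password>'

az sql failover-group create \
  --resource-group bobi-rg \
  --server azussqlbodb \
  --name bobi-fog \
  --partner-server azussqlbodb-dr \
  --add-db CMS Audit \
  --failover-policy Automatic

# Listener endpoints to use in the ODBC connection:
#   read/write: bobi-fog.database.windows.net
#   read-only:  bobi-fog.secondary.database.windows.net
```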
#### Azure Database for MySQL Azure Database for MySQL provides options to recover a database if there's a disaster. Choose the appropriate option that works for your business:
-* Enable cross-region read replicas to enhance your business continuity and DR planning. You can replicate from a source server up to five replicas. Read replicas are updated asynchronously by using the Azure Database for MySQL binary log replication technology. Replicas are new servers that you manage similar to regular Azure Database for MySQL servers. To learn more about read replicas, available regions, restrictions, and how to fail over, see [Read replicas in Azure Database for MySQL](../../mysql/concepts-read-replicas.md).
+- Enable cross-region read replicas to enhance your business continuity and DR planning. You can replicate from a source server to up to five replicas. Read replicas are updated asynchronously by using the Azure Database for MySQL binary log replication technology. Replicas are new servers that you manage similarly to regular Azure Database for MySQL servers. To learn more about read replicas, available regions, restrictions, and how to fail over, see [Read replicas in Azure Database for MySQL](../../mysql/concepts-read-replicas.md).
-* Use the Azure Database for MySQL geo-restore feature that restores the server by using geo-redundant backups. These backups are accessible even when the region on which your server is hosted is offline. You can restore from these backups to any other region and bring your server back online.
+- Use the Azure Database for MySQL geo-restore feature that restores the server by using geo-redundant backups. These backups are accessible even when the region on which your server is hosted is offline. You can restore from these backups to any other region and bring your server back online.
- > [!Important]
+ > [!IMPORTANT]
> Geo-restore is only possible if you provisioned the server with geo-redundant backup storage. Changing the backup redundancy options after server creation isn't supported. For more information, see [Backup redundancy](../../mysql/concepts-backup.md#backup-redundancy-options). The following table lists the recommendations for DR for each tier used in this example.
The following table lists the recommendations for DR for each tier used in this
| BI application servers | Replicate by using Site Recovery | | Azure Premium Files | AzCopy *or* Azure PowerShell | | Azure NetApp Files | File-based copy tool to replicate data to a secondary region *or* Azure NetApp Files Cross-Region Replication Preview |
-| Azure SQL Database | Geo-replication/autofailover groups *or* geo-restore |
+| Azure SQL Database | Geo-replication/auto-failover groups *or* geo-restore |
| Azure Database for MySQL | Cross-region read replicas *or* restore backup from geo-redundant backups | ## Next steps
sap Businessobjects Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/businessobjects-deployment-guide.md
Previously updated : 04/13/2023 Last updated : 06/15/2023 - # SAP BusinessObjects BI platform planning and implementation guide on Azure
-## Overview
- The purpose of this guide is to provide guidelines for planning, deploying, and configuring SAP BusinessObjects BI Platform, also known as SAP BOBI Platform on Azure. This guide is intended to cover common Azure services and features that are relevant for SAP BOBI Platform. This guide isn't an exhaustive list of all possible configuration options. It covers solutions common to typical deployment scenarios. This guide isn't intended to replace the standard SAP BOBI Platform installation and administration guides, operating system, or any database documentation.
SAP BusinessObjects BI Platform is a self-contained system that can exist on a s
The SAP BI Platform consists of a collection of servers running on one or more hosts. It's essential that you choose the correct deployment strategy based on sizing, business need, and type of environment. For small installations like development or test, you can use a single Azure virtual machine for the web application server, database server, and all BI Platform servers. If you're using a Database-as-a-Service (DBaaS) offering from Azure, the database server runs separately from the other components. For medium and large installations, you can have servers running on multiple Azure virtual machines.
-In below figure, architecture of large-scale deployment of SAP BOBI Platform on Azure virtual machines is shown, where each component is distributed and placed in availability sets that can sustain failover if there's service disruption.
+The diagram below illustrates the architecture of a large-scale deployment of the SAP BOBI Platform on Azure virtual machines, with each component distributed. To ensure infrastructure resilience against service disruption, VMs can be deployed by using a [flexible scale set](./sap-high-availability-architecture-scenarios.md#virtual-machine-scale-set-with-flexible-orchestration), [availability sets](sap-high-availability-architecture-scenarios.md#multiple-instances-of-virtual-machines-in-the-same-availability-set), or [availability zones](sap-high-availability-architecture-scenarios.md#azure-availability-zones).
![SAP BusinessObjects BI Platform Architecture on Azure](./media/businessobjects-deployment-guide/businessobjects-architecture-on-azure.png)
In below figure, architecture of large-scale deployment of SAP BOBI Platform on
- Web application servers
- The web server hosts the web applications of SAP BOBI Platform like CMC and BI Launch Pad. To achieve high availability for web server, you must deploy at least two web application servers to manage redundancy and load balancing. In Azure, these web application servers can be placed either in availability sets or availability zones for better availability.
+ The web server hosts the web applications of SAP BOBI Platform like CMC and BI Launch Pad. To achieve high availability for the web server, you must deploy at least two web application servers to manage redundancy and load balancing. In Azure, these web application servers can be placed in a flexible scale set, availability zones, or availability sets for better availability.
Tomcat is the default web application server for SAP BI Platform. To achieve high availability for Tomcat, enable session replication by using the [Static Membership Interceptor](https://tomcat.apache.org/tomcat-9.0-doc/config/cluster-membership.html#Static_Membership_Attributes) in Azure. It ensures that users can access the SAP BI web application even when the Tomcat service is disrupted.
For storage need for SAP BOBI Platform, Azure offers different types of [Managed
Azure supports two DBaaS offering for SAP BOBI Platform data tier - [Azure SQL Database](https://azure.microsoft.com/services/sql-database) (BI Application running on Windows) and [Azure Database for MySQL](https://azure.microsoft.com/services/mysql) (BI Application running on Linux and Windows). So based on the sizing result, you can choose purchasing model that best fits your need.
-> [!Tip]
+> [!TIP]
> For quick sizing reference, consider 800 SAPS = 1 vCPU while mapping the SAPS result of SAP BOBI Platform database tier to Azure Database-as-a-Service (Azure SQL Database or Azure Database for MySQL). ### Sizing models for Azure SQL database
Azure SQL Database offers the following three purchasing models:
It's more suitable for intermittent, unpredictable usage with low average compute utilization over time. So this model can be used for nonproduction SAP BOBI deployment.
-> [!Note]
+> [!NOTE]
> For SAP BOBI, it's convenient to use vCore based model and choose either General Purpose or Business Critical service tier based on the business need. ### Sizing models for Azure database for MySQL
Azure Database for MySQL comes with three different pricing tiers. They're diffe
For high-performance database workloads that require in-memory performance for faster transaction processing and higher concurrency.
-> [!Note]
+> [!NOTE]
> For SAP BOBI, it is convenient to use General Purpose or Memory Optimized pricing tier based on the business workload. ## Azure resources
Azure region is one or a collection of data-centers that contains the infrastruc
SAP BI Platform contains different components that might require specific VM types, Storage like Azure Files or Azure NetApp Files or Database as a Service (DBaaS) for its data tier that might not be available in certain regions. You can find out the exact information on VM types, Azure Storage types or, other Azure Services in [Products available by region](https://azure.microsoft.com/global-infrastructure/services/) site. If you're already running your SAP systems on Azure, probably you have your region identified. In that case, you need to first investigate that the necessary services are available in those regions to decide the architecture of SAP BI Platform.
+### Virtual machine scale sets with flexible orchestration
+
+[Virtual machine scale sets](../../virtual-machine-scale-sets/overview.md) with flexible orchestration provide a logical grouping of platform-managed virtual machines. You can create a scale set within a region or span it across availability zones. When you create a flexible scale set within a region with platformFaultDomainCount>1 (FD>1), the VMs deployed in the scale set are distributed across the specified number of fault domains in the same region. When you create a flexible scale set across availability zones with platformFaultDomainCount=1 (FD=1), the VMs are distributed across the specified zones, and the scale set also distributes VMs across different fault domains within each zone on a best-effort basis.
+
+**For SAP workloads, only a flexible scale set with FD=1 is supported.** The advantage of using flexible scale sets with FD=1 for cross-zonal deployment, instead of a traditional availability zone deployment, is that the VMs deployed with the scale set are distributed across different fault domains within the zone in a best-effort manner. To learn more about SAP workload deployment with scale sets, see the [flexible virtual machine scale set deployment guide](./sap-high-availability-architecture-scenarios.md).
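As a sketch of this pattern, the commands below create a flexible scale set that spans three zones with FD=1 and then place a VM into it. The names, image reference, and zone list are placeholders; validate the exact parameters against the linked deployment guide.

```azurecli
# Create a flexible-orchestration scale set spanning three availability zones
# with platformFaultDomainCount=1 (FD=1) and no VM profile.
az vmss create \
  --resource-group sap-bobi-rg \
  --name bobi-vmss-flex \
  --orchestration-mode Flexible \
  --platform-fault-domain-count 1 \
  --zones 1 2 3

# Create a BI application server VM and assign it to the scale set.
az vm create \
  --resource-group sap-bobi-rg \
  --name bobi-app-vm1 \
  --image <SLES-for-SAP-image-URN> \
  --vmss bobi-vmss-flex \
  --zone 1
```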
+ ### Availability zones Availability Zones are physically separate locations within an Azure region. Each Availability Zone is made of one or more datacenters equipped with independent power, cooling, and networking.
Also the number of update and fault domains that can be used by an Azure Availab
To understand the concept of Azure availability sets and the way availability sets relate to Fault and Upgrade Domains, read [manage availability](../../virtual-machines/availability.md) article.
-> [!Important]
-> The concepts of Azure Availability Zones and Azure availability sets are mutually exclusive. That means, you can either deploy a pair or multiple VMs into a specific Availability Zone or an Azure availability set. But not both.
+> [!IMPORTANT]
+>
+> - The concepts of Azure availability zones and Azure availability sets are mutually exclusive. You can deploy a pair or multiple VMs into either a specific availability zone or an availability set, but you can't do both.
+> - If you're planning to deploy across availability zones, we recommend that you use a [flexible scale set with FD=1](./virtual-machine-scale-set-sap-deployment-guide.md) instead of the standard availability zone deployment.
### Virtual machines
sap Dbms Guide General https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/dbms-guide-general.md
When you plan your disk layout, find the best balance between these items:
* The number of additional data disks possible per VM size. * The overall storage or network throughput a VM can provide. * The latency different Azure Storage types can provide.-- VM storage IOPS and throughput quota.-- VM network quota in case you're using NFS - traffic to NFS shares is counting against the VM's network quota and **NOT** the storage quota.
+* VM storage IOPS and throughput quota.
+* VM network quota in case you're using NFS - traffic to NFS shares counts against the VM's network quota and **NOT** the storage quota.
* VM SLAs. Azure enforces an IOPS quota per data disk or NFS share. These quotas are different for disks hosted on the different Azure block storage solutions or shares. I/O latency is also different between these different storage types as well.
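One way to check the per-VM storage and network limits mentioned above is to query the published SKU capabilities with the Azure CLI. The VM size and capability names below are examples to adapt; treat them as assumptions to verify for your region and chosen size.

```azurecli
# Show uncached disk IOPS/throughput and the maximum NIC count for a candidate DBMS VM size.
az vm list-skus \
  --location westeurope \
  --size Standard_E32ds_v5 \
  --query "[0].capabilities[?name=='UncachedDiskIOPS' || name=='UncachedDiskBytesPerSecond' || name=='MaxNetworkInterfaces']" \
  --output table
```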
There are other redundancy methods. For more information, see [Azure Storage rep
## VM node resiliency
-Azure offers several different SLAs for VMs. For more information, see the most recent release of [SLA for Virtual Machines](https://azure.microsoft.com/support/legal/sl).
+Azure offers several different SLAs for VMs. For more information, see the most recent release of [SLA for Virtual Machines](https://azure.microsoft.com/support/legal/sl).
The minimum recommendation for production DBMS scenarios with an SAP workload is to: -- Deploy two VMs in a separate availability set in the same Azure region.
+- Deploy two VMs using the [chosen deployment type](./sap-high-availability-architecture-scenarios.md#comparison-of-different-deployment-types-for-sap-workload) in the same Azure region.
- Run these two VMs in the same Azure virtual network and have NICs attached out of the same subnets. - Use database methods to keep a hot standby with the second VM. Methods can be SQL Server Always On, Oracle Data Guard, or HANA System Replication. You also can deploy a third VM in another Azure region and use the same database methods to supply an asynchronous replica in another Azure region.
-For information on how to set up Azure availability sets, see [this tutorial](../../virtual-machines/windows/tutorial-availability-sets.md).
--- ## Azure network considerations In large-scale SAP deployments, use the blueprint of [Azure Virtual Datacenter](/azure/architecture/vdc/networking-virtual-datacenter). Use it for your virtual network configuration and permissions and role assignments to different parts of your organization.
These best practices are the result of thousands of customer deployments:
> [!WARNING]- > Configuring [network virtual appliances](https://azure.microsoft.com/solutions/network-appliances/) in the communication path between the SAP application and the DBMS layer of a SAP NetWeaver-, Hybris-, or S/4HANA-based SAP system isn't supported. This restriction is for functionality and performance reasons. The communication path between the SAP application layer and the DBMS layer must be a direct one. The restriction doesn't include [application security group (ASG) and NSG rules](../../virtual-network/network-security-groups-overview.md) if those ASG and NSG rules allow a direct communication path. This also includes traffic to NFS shares that host DBMS data and redo log files. > > Other scenarios where network virtual appliances aren't supported are in:
These best practices are the result of thousands of customer deployments:
> > Be aware that network traffic between two [peered](../../virtual-network/virtual-network-peering-overview.md) Azure virtual networks is subject to transfer costs. Huge data volume that consists of many terabytes is exchanged between the SAP application layer and the DBMS layer. You can accumulate substantial costs if the SAP application layer and DBMS layer are segregated between two peered Azure virtual networks.
-Use two VMs for your production DBMS deployment within an Azure availability set or between two Azure Availability Zones. Also use separate routing for the SAP application layer and the management and operations traffic to the two DBMS VMs. See the following image:
-
-![Diagram of two VMs in two subnets](./media/virtual-machines-shared-sap-deployment-guide/general_two_dbms_two_subnets.PNG)
- ### Use Azure Load Balancer to redirect traffic The use of private virtual IP addresses used in functionalities like SQL Server Always On or HANA System Replication requires the configuration of an Azure load balancer. The load balancer uses probe ports to determine the active DBMS node and route the traffic exclusively to that active database node.
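A minimal sketch of such a load balancer with the Azure CLI, assuming hypothetical resource names, an example probe port, and the HA-ports pattern used in the related high-availability guides:

```azurecli
# Internal Standard Load Balancer that fronts the DBMS cluster with a floating (virtual) IP.
az network lb create \
  --resource-group sap-db-rg \
  --name hana-ilb \
  --sku Standard \
  --vnet-name sap-vnet \
  --subnet db-subnet \
  --frontend-ip-name hana-fe \
  --backend-pool-name hana-be \
  --private-ip-address 10.0.0.13

# Health probe on the port that only the active database node answers.
az network lb probe create \
  --resource-group sap-db-rg \
  --lb-name hana-ilb \
  --name hana-hp \
  --protocol Tcp \
  --port 62503

# HA-ports rule with floating IP, so traffic is forwarded only to the probed-healthy node.
az network lb rule create \
  --resource-group sap-db-rg \
  --lb-name hana-ilb \
  --name hana-lbrule \
  --protocol All \
  --frontend-port 0 \
  --backend-port 0 \
  --frontend-ip-name hana-fe \
  --backend-pool-name hana-be \
  --probe-name hana-hp \
  --floating-ip true \
  --idle-timeout 30
```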
sap Dbms Guide Ha Ibm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/dbms-guide-ha-ibm.md
Before you begin an installation, see the following SAP notes and documentation:
| [IBM Db2 HADR R 10.5][db2-hadr-10.5] | ## Overview
-To achieve high availability, IBM Db2 LUW with HADR is installed on at least two Azure virtual machines, which are deployed in an [Azure availability set](../../virtual-machines/windows/tutorial-availability-sets.md) or across [Azure Availability Zones](./high-availability-zones.md).
+To achieve high availability, IBM Db2 LUW with HADR is installed on at least two Azure virtual machines, which are deployed in a [virtual machine scale set](./virtual-machine-scale-set-sap-deployment-guide.md) with flexible orchestration across [availability zones](./high-availability-zones.md) or in an [availability set](../../virtual-machines/windows/tutorial-availability-sets.md).
The following graphics display a setup of two database server Azure VMs. Both database server Azure VMs have their own storage attached and are up and running. In HADR, one database instance in one of the Azure VMs has the role of the primary instance. All clients are connected to this primary instance. All changes in database transactions are persisted locally in the Db2 transaction log. As the transaction log records are persisted locally, the records are transferred via TCP/IP to the database instance on the second database server, the standby server, or standby instance. The standby instance updates the local database by rolling forward the transferred transaction log records. In this way, the standby server is kept in sync with the primary server.
Make sure that the selected OS is supported by IBM/SAP for IBM Db2 LUW. The list
1. Create or select a resource group. 1. Create or select a virtual network and subnet.
-1. Create an Azure availability set or deploy an availability zone.
- + For the availability set, set the maximum update domains to 2.
+1. Choose a [suitable deployment type](./sap-high-availability-architecture-scenarios.md#comparison-of-different-deployment-types-for-sap-workload) for SAP virtual machines. Typically, this is a virtual machine scale set with flexible orchestration.
1. Create Virtual Machine 1. + Use SLES for SAP image in the Azure Marketplace.
- + Select the Azure availability set you created in step 3, or select Availability Zone.
+ + Select the scale set, availability zone or availability set created in step 3.
1. Create Virtual Machine 2. + Use SLES for SAP image in the Azure Marketplace.
- + Select the Azure availability set you in created in step 3, or select Availability Zone (not the same zone as in step 3).
+ + Select the scale set, availability zone or availability set created in step 3 (not the same zone as in step 4).
1. Add data disks to the VMs, and then check the recommendation of a file system setup in the article [IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload][dbms-db2]. ## Create the Pacemaker cluster
Cluster node *azibmdb01* should be rebooted. The IBM Db2 primary HADR role is go
If the Pacemaker service doesn't start automatically on the rebooted former primary, be sure to start it manually with:
-<code><pre>sudo service pacemaker start</code></pre>
+<pre><code>sudo service pacemaker start</code></pre>
### Test a manual takeover
sap Dbms Guide Sqlserver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/dbms-guide-sqlserver.md
Latin1-General, binary code point comparison sort for Unicode Data, SQL Server S
If the result is different, STOP any deployment and investigate why the setup command didn't work as expected. Deployment of SAP NetWeaver applications onto SQL Server instance with different SQL Server codepages than the one mentioned is **NOT** supported for NetWeaver deployments. ## SQL Server High-Availability for SAP in Azure
-Using SQL Server in Azure IaaS deployments for SAP, you have several different possibilities to add to deploy the DBMS layer highly available. Azure provides different up-time SLAs for a single VM using different Azure block storages, a pair of VMs deployed in an Azure availability set, or a pair of VMs deployed across Azure Availability Zones. For production systems, we expect you to deploy a pair of VMs within an availability set or across two Availability Zones. One VM will run the active SQL Server Instance. The other VM will run the passive Instance
+Using SQL Server in Azure IaaS deployments for SAP, you have several possibilities for deploying the DBMS layer in a highly available way. Azure provides different up-time SLAs for a single VM using different Azure block storage types, a pair of VMs deployed in an Azure availability set, or a pair of VMs deployed across Azure Availability Zones. For production systems, we expect you to deploy a pair of VMs within a virtual machine scale set with flexible orchestration across two availability zones. See [comparison of different deployment types for SAP workload](./sap-high-availability-architecture-scenarios.md#comparison-of-different-deployment-types-for-sap-workload) for more information. One VM runs the active SQL Server instance. The other VM runs the passive instance.
### SQL Server Clustering using Windows Scale-out File Server or Azure shared disk With Windows Server 2016, Microsoft introduced [Storage Spaces Direct](/windows-server/storage/storage-spaces/storage-spaces-direct-overview). Based on Storage Spaces, Direct Deployment, SQL Server FCI clustering is supported in general. Azure also offers [Azure shared disks](../../virtual-machines/disks-shared-enable.md?tabs=azure-cli) that could be used for Windows clustering. **For SAP workload, we aren't supporting these HA options.**
sap High Availability Guide Rhel Glusterfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-glusterfs.md
You first need to create the virtual machines for this cluster.
1. Create a Resource Group 1. Create a Virtual Network
-1. Create an Availability Set
- Set max update domain
+1. Choose a [suitable deployment type](./sap-high-availability-architecture-scenarios.md#comparison-of-different-deployment-types-for-sap-workload) for SAP virtual machines. Typically, this is a virtual machine scale set with flexible orchestration.
1. Create Virtual Machine 1 Use at least RHEL 7, in this example the [Red Hat Enterprise Linux 7.4 image](https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux74-ARM).
- Select Availability Set created earlier
+ Select the scale set, availability zone or availability set created in step 3.
1. Create Virtual Machine 2 Use at least RHEL 7, in this example the [Red Hat Enterprise Linux 7.4 image](https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux74-ARM).
- Select Availability Set created earlier
+ Select the scale set, availability zone or availability set created in step 3 (not the same zone as in step 4).
1. Add one data disk for each SAP system to both virtual machines. ### Configure GlusterFS
sap High Availability Guide Rhel Ibm Db2 Luw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-ibm-db2-luw.md
Before you begin an installation, see the following SAP notes and documentation:
## Overview
-To achieve high availability, IBM Db2 LUW with HADR is installed on at least two Azure virtual machines, which are deployed in an [Azure availability set](../../virtual-machines/windows/tutorial-availability-sets.md) or across [Azure Availability Zones](./high-availability-zones.md).
+To achieve high availability, IBM Db2 LUW with HADR is installed on at least two Azure virtual machines, which are deployed in a [virtual machine scale set](./virtual-machine-scale-set-sap-deployment-guide.md) with flexible orchestration across [availability zones](./high-availability-zones.md) or in an [availability set](../../virtual-machines/windows/tutorial-availability-sets.md).
The following graphics display a setup of two database server Azure VMs. Both database server Azure VMs have their own storage attached and are up and running. In HADR, one database instance in one of the Azure VMs has the role of the primary instance. All clients are connected to primary instance. All changes in database transactions are persisted locally in the Db2 transaction log. As the transaction log records are persisted locally, the records are transferred via TCP/IP to the database instance on the second database server, the standby server, or standby instance. The standby instance updates the local database by rolling forward the transferred transaction log records. In this way, the standby server is kept in sync with the primary server.
Make sure that the selected OS is supported by IBM/SAP for IBM Db2 LUW. The list
1. Create or select a resource group. 1. Create or select a virtual network and subnet.
-1. Create an Azure availability set or deploy an availability zone.
- + For the availability set, set the maximum update domains to 2.
+1. Choose a [suitable deployment type](./sap-high-availability-architecture-scenarios.md#comparison-of-different-deployment-types-for-sap-workload) for SAP virtual machines. Typically, this is a virtual machine scale set with flexible orchestration.
1. Create Virtual Machine 1. + Use Red Hat Enterprise Linux for SAP image in the Azure Marketplace.
- + Select the Azure availability set you created in step 3, or select Availability Zone.
+ + Select the scale set, availability zone or availability set created in step 3.
1. Create Virtual Machine 2. + Use Red Hat Enterprise Linux for SAP image in the Azure Marketplace.
- + Select the Azure availability set you in created in step 3, or select Availability Zone (not the same zone as in step 3).
+ + Select the scale set, availability zone or availability set created in step 3 (not the same zone as in step 4).
1. Add data disks to the VMs, and then check the recommendation of a file system setup in the article [IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload][dbms-db2]. ## Install the IBM Db2 LUW and SAP environment
sap High Availability Guide Rhel Nfs Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-nfs-azure-files.md
The example configurations and installation commands use the following instance
This document assumes that you've already deployed an [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md), subnet and resource group.
-1. Deploy your VMs. You can deploy VMs in availability sets, or in availability zones, if the Azure region supports these options. If you need additional IP addresses for your VMs, deploy and attach a second NIC. DonΓÇÖt add secondary IP addresses to the primary NIC. [Azure Load Balancer Floating IP doesn't support this scenario](../../load-balancer/load-balancer-multivip-overview.md#limitations).
+1. Deploy your VMs. Choose a [suitable deployment type](./sap-high-availability-architecture-scenarios.md#comparison-of-different-deployment-types-for-sap-workload). You can deploy VMs in availability zones, if the Azure region supports zones, or in availability sets. If you need additional IP addresses for your VMs, deploy and attach a second NIC. Don't add secondary IP addresses to the primary NIC. [Azure Load Balancer Floating IP doesn't support this scenario](../../load-balancer/load-balancer-multivip-overview.md#limitations).
2. For your virtual IPs, deploy and configure an Azure [load balancer](../../load-balancer/load-balancer-overview.md). It's recommended to use a [Standard load balancer](../../load-balancer/quickstart-load-balancer-standard-public-portal.md).
sap High Availability Guide Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel.md
You first need to create the virtual machines for this cluster. Afterwards, you
1. Create a Resource Group 1. Create a Virtual Network
-1. Create an Availability Set
- Set max update domain
+1. Choose a [suitable deployment type](./sap-high-availability-architecture-scenarios.md#comparison-of-different-deployment-types-for-sap-workload) for SAP virtual machines. Typically, this is a virtual machine scale set with flexible orchestration.
1. Create Virtual Machine 1 Use at least RHEL 7, in this example the [Red Hat Enterprise Linux 7.4 image](https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux74-ARM).
- Select Availability Set created earlier
+ Select the scale set, availability zone or availability set created in step 3.
1. Create Virtual Machine 2 Use at least RHEL 7, in this example the [Red Hat Enterprise Linux 7.4 image](https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux74-ARM).
- Select Availability Set created earlier
+ Select the scale set, availability zone or availability set created in step 3 (not the same zone as in step 4).
1. Add at least one data disk to both virtual machines The data disks are used for the /usr/sap/`<SAPSID`> directory 1. Create load balancer (internal, standard):
The following items are prefixed with either **[A]** - applicable to all nodes,
# LD_LIBRARY_PATH=/usr/sap/<b>NW1</b>/ERS<b>02</b>/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/<b>NW1</b>/ERS<b>02</b>/exe/sapstartsrv pf=/usr/sap/<b>NW1</b>/ERS<b>02</b>/profile/<b>NW1</b>_ERS<b>02</b>_<b>nw1-aers</b> -D -u <b>nw1</b>adm </code></pre>
-2. **[1]** Create the SAP cluster resources
+1. **[1]** Create the SAP cluster resources
- If using enqueue server 1 architecture (ENSA1), define the resources as follows:
+ If using enqueue server 1 architecture (ENSA1), define the resources as follows:
<pre><code>sudo pcs property set maintenance-mode=true
The following items are prefixed with either **[A]** - applicable to all nodes,
SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP Platform 1809, enqueue server 2 is installed by default. See SAP note [2630416](https://launchpad.support.sap.com/#/notes/2630416) for enqueue server 2 support. If using enqueue server 2 architecture ([ENSA2](https://help.sap.com/viewer/cff8531bc1d9416d91bb6781e628d4e0/1709%20001/en-US/6d655c383abf4c129b0e5c8683e7ecd8.html)), install resource agent resource-agents-sap-4.1.1-12.el7.x86_64 or newer and define the resources as follows:
-<pre><code>sudo pcs property set maintenance-mode=true
+ <pre><code>sudo pcs property set maintenance-mode=true
sudo pcs resource create rsc_sap_<b>NW1</b>_ASCS00 SAPInstance \ InstanceName=<b>NW1</b>_ASCS00_<b>nw1-ascs</b> START_PROFILE="/sapmnt/<b>NW1</b>/profile/<b>NW1</b>_ASCS00_<b>nw1-ascs</b>" \
sap High Availability Guide Suse Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-pacemaker.md
You can configure the SBD device by using either of two options:
- An Azure shared disk with Premium SSD is supported as an SBD device. - SBD devices that use an Azure shared disk are supported on SLES High Availability 15 SP01 and later. - SBD devices that use an Azure premium shared disk are supported on [locally redundant storage (LRS)](../../virtual-machines/disks-redundancy.md#locally-redundant-storage-for-managed-disks) and [zone-redundant storage (ZRS)](../../virtual-machines/disks-redundancy.md#zone-redundant-storage-for-managed-disks).
- - Depending on the type of your deployment (availability set or availability zones), choose the appropriate redundant storage for an Azure shared disk as your SBD device.
+ - Depending on the [type of your deployment](./sap-high-availability-architecture-scenarios.md#comparison-of-different-deployment-types-for-sap-workload), choose the appropriate redundant storage for an Azure shared disk as your SBD device.
- An SBD device using LRS for Azure premium shared disk (skuName - Premium_LRS) is only supported with deployment in availability set. - An SBD device using ZRS for an Azure premium shared disk (skuName - Premium_ZRS) is recommended with deployment in availability zones. - A ZRS for managed disk is currently unavailable in all regions with availability zones. For more information, review the ZRS "Limitations" section in [Redundancy options for managed disks](../../virtual-machines/disks-redundancy.md#limitations).
To create a service principal, do the following:
### **[1]** Create a custom role for the fence agent
-By default, neither managed identity norservice principal have permissions to access your Azure resources. You need to give the managed identity or service principal permissions to start and stop (deallocate) all virtual machines in the cluster. If you didn't already create the custom role, you can do so by using [PowerShell](../../role-based-access-control/custom-roles-powershell.md#create-a-custom-role) or the [Azure CLI](../../role-based-access-control/custom-roles-cli.md).
+By default, neither managed identity nor service principal have permissions to access your Azure resources. You need to give the managed identity or service principal permissions to start and stop (deallocate) all virtual machines in the cluster. If you didn't already create the custom role, you can do so by using [PowerShell](../../role-based-access-control/custom-roles-powershell.md#create-a-custom-role) or the [Azure CLI](../../role-based-access-control/custom-roles-cli.md).
Use the following content for the input file. You need to adapt the content to your subscriptions. That is, replace *xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx* and *yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy* with your own subscription IDs. If you have only one subscription, remove the second entry under AssignableScopes.
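Once the role definition file is prepared, the custom role can be created and assigned with the Azure CLI. The file name, role name, assignee ID, and subscription ID below are placeholders; the role name must match the `Name` property in your definition file.

```azurecli
# Create the custom role from the prepared definition file.
az role definition create --role-definition @fence-agent-role.json

# Assign the role to the managed identity or service principal used by the fence agent,
# scoped to the subscription that contains the cluster VMs.
az role assignment create \
  --assignee <object-id-or-app-id> \
  --role "Linux Fence Agent Role" \
  --scope "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
```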
sap Sap Hana High Availability Netapp Files Red Hat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-netapp-files-red-hat.md
First you need to create the Azure NetApp Files volumes. Then do the following s
1. Create a resource group. 2. Create a virtual network.
-3. Create an availability set. Set the max update domain.
+3. Choose a [suitable deployment type](./sap-high-availability-architecture-scenarios.md#comparison-of-different-deployment-types-for-sap-workload) for SAP virtual machines. Typically, this is a virtual machine scale set with flexible orchestration.
4. Create a load balancer (internal). We recommend standard load balancer. Select the virtual network created in step 2. 5. Create Virtual Machine 1 (**hanadb1**).
sap Sap Hana High Availability Netapp Files Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-netapp-files-suse.md
First you need to create the Azure NetApp Files volumes. Then do the following s
1. Create a resource group. 2. Create a virtual network.
-3. Create an availability set. Set the max update domain.
+3. Choose a [suitable deployment type](./sap-high-availability-architecture-scenarios.md#comparison-of-different-deployment-types-for-sap-workload) for SAP virtual machines. Typically, this is a virtual machine scale set with flexible orchestration.
4. Create a load balancer (internal). We recommend standard load balancer. Select the virtual network created in step 2. 5. Create Virtual Machine 1 (**hanadb1**).
sap Sap Hana High Availability Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-rhel.md
To deploy the template, follow these steps:
1. Create a resource group. 1. Create a virtual network.
-1. Create an availability set.
- Set the max update domain.
+1. Choose a [suitable deployment type](./sap-high-availability-architecture-scenarios.md#comparison-of-different-deployment-types-for-sap-workload) for SAP virtual machines. Typically, this is a virtual machine scale set with flexible orchestration.
1. Create a load balancer (internal). We recommend [standard load balancer](../../load-balancer/load-balancer-overview.md). * Select the virtual network created in step 2. 1. Create virtual machine 1. Use at least Red Hat Enterprise Linux 7.4 for SAP HANA. This example uses the [Red Hat Enterprise Linux 7.4 for SAP HANA image](https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux75forSAP-ARM).
- Select the availability set created in step 3.
+ Select the scale set, availability zone or availability set created in step 3.
1. Create virtual machine 2. Use at least Red Hat Enterprise Linux 7.4 for SAP HANA. This example uses the [Red Hat Enterprise Linux 7.4 for SAP HANA image](https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux75forSAP-ARM).
- Select the availability set created in step 3.
+ Select the scale set, availability zone or availability set created in step 3 (not the same zone as in step 4).
1. Add data disks. > [!IMPORTANT]
To deploy the template, follow these steps:
> [!Note] > When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to public end points. For details on how to achieve outbound connectivity see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md).
-1. To set up standard load balancer, follow these configuration steps:
+To set up standard load balancer, follow these configuration steps:
1. First, create a front-end IP pool: 1. Open the load balancer, select **frontend IP pool**, and select **Add**.
Be aware of the second virtual IP behavior, while testing a HANA cluster configu
1. When you migrate **SAPHana_HN1_03** cluster resource to secondary site **hn1-db-1**, the second virtual IP will continue to run on the same site **hn1-db-1**. If you have set AUTOMATED_REGISTER="true" for the resource and HANA system replication is registered automatically on **hn1-db-0**, then your second virtual IP will also move to **hn1-db-0**.
-2. On testing server crash, second virtual IP resources (**secvip_HN1_03**) and azure load balancer port resource (**secnc_HN1_03**) will run on primary server alongside the primary virtual IP resources. So, till the time secondary server is down, application that are connected to read-enabled HANA database will connect to primary HANA database. The behavior is expected as you do not want applications that are connected to read-enabled HANA database to be inaccessible till the time secondary server is unavailable.
+2. When you test a server crash, the second virtual IP resources (**secvip_HN1_03**) and the Azure load balancer port resource (**secnc_HN1_03**) will run on the primary server alongside the primary virtual IP resources. So, while the secondary server is down, applications that are connected to the read-enabled HANA database will connect to the primary HANA database. This behavior is expected because you don't want applications that are connected to the read-enabled HANA database to be inaccessible while the secondary server is unavailable.
-3. During failover and fallback of second virtual IP address, it may happen that the existing connections on applications that uses second virtual IP to connect to the HANA database may get interrupted.
+3. During failover and fallback of the second virtual IP address, the existing connections on applications that use the second virtual IP to connect to the HANA database might be interrupted.
The setup maximizes the time that the second virtual IP resource will be assigned to a node where a healthy SAP HANA instance is running.
sap Sap Hana High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability.md
To manually deploy SAP HANA system replication:
1. Create a virtual network.
-1. Create an availability set.
-
- - Set the max update domain.
+1. Choose a [suitable deployment type](./sap-high-availability-architecture-scenarios.md#comparison-of-different-deployment-types-for-sap-workload) for SAP virtual machines. Typically, this is a virtual machine scale set with flexible orchestration.
1. Create a load balancer (internal).
To manually deploy SAP HANA system replication:
1. Create virtual machine 1. - Use an SLES4SAP image in the Azure gallery that's supported for SAP HANA on the VM type you selected.
- - Select the availability set you created in step 3.
+ - Select the scale set, availability zone or availability set created in step 3.
1. Create virtual machine 2. - Use an SLES4SAP image in the Azure gallery that's supported for SAP HANA on the VM type you selected.
- - Select the availability set you created in step 3.
+ - Select the scale set, availability zone or availability set created in step 3.
1. Add data disks.
search Search Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-traffic-analytics.md
public HomeController(TelemetryClient telemetry)
**Use JavaScript**
-To create an object that sends events to Application Insights by using the SDK Loader Script, see [Microsoft Azure Monitor Application Insights JavaScript SDK](../azure-monitor/app/javascript-sdk.md?tabs=sdkloaderscript#get-started).
+To create an object that sends events to Application Insights by using the JavaScript (Web) SDK Loader Script, see [Microsoft Azure Monitor Application Insights JavaScript SDK](../azure-monitor/app/javascript-sdk.md?tabs=javascriptwebsdkloaderscript#get-started).
### Step 2: Request a Search ID for correlation
security Network Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/network-overview.md
This article covers some of the options that Azure offers in the area of network
* Traffic manager * Monitoring and threat detection
+> [!NOTE]
+> For web workloads, we highly recommend utilizing [**Azure DDoS protection**](../../ddos-protection/ddos-protection-overview.md) and a [**web application firewall**](../../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to deploy [**Azure Front Door**](../../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [**protection against network-level DDoS attacks**](../../frontdoor/front-door-ddos.md).
+ ## Azure networking Azure requires virtual machines to be connected to an Azure Virtual Network. A virtual network is a logical construct built on top of the physical Azure network fabric. Each virtual network is isolated from all other virtual networks. This helps ensure that network traffic in your deployments is not accessible to other Azure customers.
You can access these enhanced network security features by using an Azure partne
Azure Firewall is a cloud-native and intelligent network firewall security service that provides threat protection for your cloud workloads running in Azure. It's a fully stateful firewall as a service with built-in high availability and unrestricted cloud scalability. It provides both east-west and north-south traffic inspection.
-Azure Firewall is offered in two SKUs: Standard and Premium. [Azure Firewall Standard](../../firewall/features.md) provides L3-L7 filtering and threat intelligence feeds directly from Microsoft Cyber Security. [Azure Firewall Premium](../../firewall/premium-features.md) provides advanced capabilities include signature-based IDPS to allow rapid detection of attacks by looking for specific patterns.
+Azure Firewall is offered in three SKUs: Standard, Premium, and Basic. [Azure Firewall Standard](../../firewall/features.md) provides L3-L7 filtering and threat intelligence feeds directly from Microsoft Cyber Security. [Azure Firewall Premium](../../firewall/premium-features.md) provides advanced capabilities, including signature-based IDPS, to allow rapid detection of attacks by looking for specific patterns. [Azure Firewall Basic](../../firewall/basic-features.md) is a simplified SKU that provides the same level of security as the Standard SKU but without the advanced capabilities.
Learn more:
Azure Front Door Service enables you to define, manage, and monitor the global r
The Front Door platform itself is protected by Azure infrastructure-level DDoS protection. For further protection, Azure DDoS Network Protection may be enabled at your VNETs to safeguard resources from network layer (TCP/UDP) attacks via auto tuning and mitigation. Front Door is a layer 7 reverse proxy; it only allows web traffic to pass through to back-end servers and blocks other types of traffic by default.
+> [!NOTE]
+> For web workloads, we highly recommend utilizing [**Azure DDoS protection**](../../ddos-protection/ddos-protection-overview.md) and a [**web application firewall**](../../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to deploy [**Azure Front Door**](../../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [**protection against network-level DDoS attacks**](../../frontdoor/front-door-ddos.md).
+ Learn more: * For more information on the whole set of Azure Front door capabilities you can review the [Azure Front Door overview](../../frontdoor/front-door-overview.md)
sentinel Forward Syslog Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/forward-syslog-monitor-agent.md
Title: Tutorial - Forward syslog data to Microsoft Sentinel and Azure Monitor by using the Azure Monitor agent
-description: In this tutorial, you'll learn how to monitor linux-based devices by forwarding syslog data to a Log Analytics workspace.
+ Title: 'Tutorial: Forward Syslog data to Microsoft Sentinel and Azure Monitor by using Azure Monitor Agent'
+description: In this tutorial, you learn how to monitor Linux-based devices by forwarding Syslog data to a Log Analytics workspace.
Last updated 01/05/2023
-#Customer intent: As a security-engineer, I want to get syslog data into Microsoft Sentinel so that I can use the data with other data to do attack detection, threat visibility, proactive hunting, and threat response. As an IT administrator, I want to get syslog data into my Log Analytics workspace to monitor my linux-based devices.
+#Customer intent: As a security engineer, I want to get Syslog data into Microsoft Sentinel so that I can do attack detection, threat visibility, proactive hunting, and threat response. As an IT administrator, I want to get Syslog data into my Log Analytics workspace to monitor my Linux-based devices.
-# Tutorial: Forward syslog data to a Log Analytics workspace with Microsoft Sentinel by using the Azure Monitor agent
+# Tutorial: Forward Syslog data to a Log Analytics workspace with Microsoft Sentinel by using Azure Monitor Agent
-In this tutorial, you'll configure a Linux virtual machine (VM) to forward syslog data to your workspace by using the Azure Monitor agent. These steps allow you to collect and monitor data from Linux-based devices where you can't install an agent like a firewall network device.
-
-Configure your linux-based device to send data to a Linux VM. The Azure Monitor agent on the VM forwards the syslog data to the Log Analytics workspace. Then use Microsoft Sentinel or Azure Monitor to monitor the device from the data stored in the Log Analytics workspace.
+In this tutorial, you configure a Linux virtual machine (VM) to forward Syslog data to your workspace by using Azure Monitor Agent. These steps allow you to collect and monitor data from Linux-based devices where you can't install an agent like a firewall network device.
+
+Configure your Linux-based device to send data to a Linux VM. Azure Monitor Agent on the VM forwards the Syslog data to the Log Analytics workspace. Then use Microsoft Sentinel or Azure Monitor to monitor the device from the data stored in the Log Analytics workspace.
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Create a data collection rule
-> * Verify the Azure Monitor agent is running
-> * Enable log reception on port 514
-> * Verify syslog data is forwarded to your Log Analytics workspace
+> * Create a data collection rule.
+> * Verify that Azure Monitor Agent is running.
+> * Enable log reception on port 514.
+> * Verify that Syslog data is forwarded to your Log Analytics workspace.
## Prerequisites
-To complete the steps in this tutorial, you must have the following resources and roles.
+To complete the steps in this tutorial, you must have the following resources and roles:
-- Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- Azure account with the following roles to deploy the agent and create the data collection rules:
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure account with the following roles to deploy the agent and create the data collection rules.
- |Built-in Role |Scope |Reason |
+ |Built-in role |Scope |Reason |
||||
- |- [Virtual Machine Contributor](../role-based-access-control/built-in-roles.md)</br>- [Azure Connected Machine Resource Administrator](../role-based-access-control/built-in-roles.md) | - Virtual machines</br>- Scale sets</br>- Arc-enabled servers | To deploy the agent |
- |Any role that includes the action Microsoft.Resources/deployments/* | - Subscription and/or</br>- Resource group and/or</br>- An existing data collection rule | To deploy ARM templates |
- |[Monitoring Contributor ](../role-based-access-control/built-in-roles.md) |- Subscription and/or </br>- Resource group and/or</br>- An existing data collection rule | To create or edit data collection rules |
-- Log Analytics workspace.-- Linux server that's running an operating system that supports Azure Monitor agent.-
- - [Supported Linux operating systems for Azure Monitor agent](../azure-monitor/agents/agents-overview.md#linux)
- - [Create a Linux virtual machine in the Azure portal](../virtual-machines/linux/quick-create-portal.md) or
- - Onboard an on-premises Linux server to Azure Arc. See [Quickstart: Connect hybrid machines with Azure Arc-enabled servers](../azure-arc/servers/learn/quick-enable-hybrid-vm.md)
--- Linux-based device that generates event log data like a firewall network device.
+ |- [Virtual Machine Contributor](../role-based-access-control/built-in-roles.md)</br>- [Azure Connected Machine Resource Administrator](../role-based-access-control/built-in-roles.md) | - Virtual machines</br>- Scale sets</br>- Azure Arc-enabled servers | To deploy the agent |
+ |Any role that includes the action Microsoft.Resources/deployments/* | - Subscription </br>- Resource group</br>- Existing data collection rule | To deploy Azure Resource Manager templates |
+ |[Monitoring Contributor ](../role-based-access-control/built-in-roles.md) |- Subscription </br>- Resource group </br>- Existing data collection rule | To create or edit data collection rules |
+- A Log Analytics workspace.
+- A Linux server that's running an operating system that supports Azure Monitor Agent.
+ - [Supported Linux operating systems for Azure Monitor Agent](../azure-monitor/agents/agents-overview.md#linux).
+ - [Create a Linux VM in the Azure portal](../virtual-machines/linux/quick-create-portal.md) or [add an on-premises Linux server to Azure Arc](../azure-arc/servers/learn/quick-enable-hybrid-vm.md).
+- A Linux-based device that generates event log data like a firewall network device.
## Create a data collection rule
-See step by step guide [here](../azure-monitor/agents/data-collection-syslog.md#create-a-data-collection-rule).
+See the step-by-step instructions in [Create a data collection rule](../azure-monitor/agents/data-collection-syslog.md#create-a-data-collection-rule).
-## Verify the Azure Monitor agent is running
+## Verify that Azure Monitor Agent is running
-In Microsoft Sentinel or Azure Monitor, verify that the Azure Monitor agent is running on your VM.
+In Microsoft Sentinel or Azure Monitor, verify that Azure Monitor Agent is running on your VM.
-1. In the Azure portal, search for and open **Microsoft Sentinel** or **Monitor**.
+1. In the Azure portal, search for and open **Microsoft Sentinel** or **Azure Monitor**.
1. If you're using Microsoft Sentinel, select the appropriate workspace. 1. Under **General**, select **Logs**.
-1. Close the **Queries** page so that the **New Query** tab is displayed.
-1. Run the following query where you replace the computer value with the name of your Linux virtual machine.
+1. Close the **Queries** page so that the **New Query** tab appears.
+1. Run the following query where you replace the computer value with the name of your Linux VM.
```kusto Heartbeat
In Microsoft Sentinel or Azure Monitor, verify that the Azure Monitor agent is r
## Enable log reception on port 514
-Verify that the VM that's collecting the log data allows reception on port 514 TCP or UDP depending on the syslog source. Then configure the built-in Linux syslog daemon on the VM to listen for syslog messages from your devices. After you complete those steps, configure your linux-based device to send logs to your VM.
+Verify that the VM that's collecting the log data allows reception on port 514 TCP or UDP depending on the Syslog source. Then configure the built-in Linux Syslog daemon on the VM to listen for Syslog messages from your devices. After you finish those steps, configure your Linux-based device to send logs to your VM.
-The following two sections cover how to add an inbound port rule for an Azure VM and configure the built-in Linux syslog daemon.
+The following two sections cover how to add an inbound port rule for an Azure VM and configure the built-in Linux Syslog daemon.
-### Allow inbound syslog traffic on the VM
+### Allow inbound Syslog traffic on the VM
-If you're forwarding syslogs to an Azure VM, use the following steps to allow reception on port 514.
+If you're forwarding Syslog data to an Azure VM, follow these steps to allow reception on port 514.
1. In the Azure portal, search for and select **Virtual Machines**. 1. Select the VM.
If you're forwarding syslogs to an Azure VM, use the following steps to allow re
|Field |Value | ||| |Destination port ranges | 514 |
- |Protocol | TCP or UDP depending on syslog source |
+ |Protocol | TCP or UDP depending on Syslog source |
|Action | Allow | |Name | AllowSyslogInbound |
If you're forwarding syslogs to an Azure VM, use the following steps to allow re
1. Select **Add**.
-### Configure Linux syslog daemon
+### Configure the Linux Syslog daemon
-> [!NOTE]
-> To avoid [Full Disk scenarios](../azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md) where the agent can't function, we recommend that you set the `syslog-ng` or `rsyslog` configuration not to store unneeded logs. A Full Disk scenario disrupts the function of the installed AMA.
-> Read more about [RSyslog](https://www.rsyslog.com/doc/master/configuration/actions.html) or [Syslog-ng](
-https://www.syslog-ng.com/technical-documents/doc/syslog-ng-open-source-edition/3.26/administration-guide/34#TOPIC-1431029).
+> [!NOTE]
+> To avoid [Full Disk scenarios](../azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md) where the agent can't function, we recommend that you set the `syslog-ng` or `rsyslog` configuration not to store unneeded logs. A Full Disk scenario disrupts the function of the installed Azure Monitor Agent.
+> Read more about [rsyslog](https://www.rsyslog.com/doc/master/configuration/actions.html) or [syslog-ng](https://www.syslog-ng.com/technical-documents/doc/syslog-ng-open-source-edition/3.26/administration-guide/34#TOPIC-1431029).
-Connect to your Linux VM and run the following command to configure the Linux syslog daemon:
+Connect to your Linux VM and run the following command to configure the Linux Syslog daemon:
```bash sudo wget -O Forwarder_AMA_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/Syslog/Forwarder_AMA_installer.py&&sudo python3 Forwarder_AMA_installer.py
sudo wget -O Forwarder_AMA_installer.py https://raw.githubusercontent.com/Azure/
This script can make changes for both rsyslog.d and syslog-ng.
-## Verify syslog data is forwarded to your Log Analytics workspace
+## Verify Syslog data is forwarded to your Log Analytics workspace
-After you configured your linux-based device to send logs to your VM, verify that the Azure Monitor agent is forwarding syslog data to your workspace.
+After you configure your Linux-based device to send logs to your VM, verify that Azure Monitor Agent is forwarding Syslog data to your workspace.
1. In the Azure portal, search for and open **Microsoft Sentinel** or **Azure Monitor**. 1. If you're using Microsoft Sentinel, select the appropriate workspace. 1. Under **General**, select **Logs**.
-1. Close the **Queries** page so that the **New Query** tab is displayed.
-1. Run the following query where you replace the computer value with the name of your Linux virtual machine.
+1. Close the **Queries** page so that the **New Query** tab appears.
+1. Run the following query where you replace the computer value with the name of your Linux VM.
```kusto Syslog
After you configured your linux-based device to send logs to your VM, verify tha
## Clean up resources
-Evaluate whether you still need the resources you created like the virtual machine. Resources you leave running can cost you money. Delete the resources you don't need individually. Or delete the resource group to delete all the resources you've created.
+Evaluate whether you need the resources like the VM that you created. Resources you leave running can cost you money. Delete the resources you don't need individually. You can also delete the resource group to delete all the resources you created.
## Next steps
-Learn more about:
+Learn more about:
> [!div class="nextstepaction"] > [Data collection rules in Azure Monitor](../azure-monitor/essentials/data-collection-rule-overview.md)
-> [Collect syslog with Azure Monitor Agent overview](../azure-monitor/agents/data-collection-syslog.md)
+> [Collect Syslog events with Azure Monitor Agent](../azure-monitor/agents/data-collection-syslog.md)
sentinel Normalization Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-content.md
The following built-in network session related content is supported for ASIM nor
### Solutions -- [Network Threat Protection Essentials](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-networkthreatdetection?tab=Overview)
+- [Network Session Essentials](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-networksession?tab=Overview)
- [Log4j Vulnerability Detection](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-apachelog4jvulnerability?tab=Overview) - [Legacy IOC Based Threat Detection](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-ioclegacy?tab=Overview)
sentinel Upload Indicators Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/upload-indicators-api.md
To get a v1.0 token, use [ADAL](../active-directory/azuread-dev/active-directory
- client_secret: {Client secret of Azure AD App} - resource: `"https://management.azure.com/"`
-To get a v2.0 token, use Microsoft Authentication Library [MSAL](../active-directory/develop/msal-overview.md) or can send requests to the REST API in the following format:
+To get a v2.0 token, use Microsoft Authentication Library [MSAL](../active-directory/develop/msal-overview.md) or send requests to the REST API in the following format:
- POST `https://login.microsoftonline.com/{{tenantId}}/oauth2/v2.0/token` - Headers for using Azure AD App: - grant_type: "client_credentials"
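As a rough illustration of the v2.0 request shape described above (not code from the article itself), the following Java sketch posts a client credentials request with `java.net.http.HttpClient`. The tenant ID, client ID, and client secret are placeholders, and the `scope` value of `https://management.azure.com/.default` is an assumption inferred from the v1.0 `resource` value shown earlier.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TokenRequestSketch {
    public static void main(String[] args) throws Exception {
        String tenantId = "<tenant-id>";            // placeholder
        String body = "grant_type=client_credentials"
                + "&client_id=<client-id>"          // placeholder Azure AD app (client) ID
                + "&client_secret=<client-secret>"  // placeholder client secret
                + "&scope=https%3A%2F%2Fmanagement.azure.com%2F.default"; // assumed v2.0 scope

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://login.microsoftonline.com/" + tenantId + "/oauth2/v2.0/token"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        // The JSON response contains the access_token that later API calls pass as a bearer token.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```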
service-bus-messaging Jms Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/jms-developer-guide.md
Each connection factory is an instance of `ConnectionFactory`, `QueueConnectionF
To simplify connecting with Azure Service Bus, these interfaces are implemented through `ServiceBusJmsConnectionFactory`, `ServiceBusJmsQueueConnectionFactory` and `ServiceBusJmsTopicConnectionFactory` respectively. > [!IMPORTANT]
-> Java applications leveraging JMS 2.0 API can connect to Azure Service Bus using the connection string, or using a `TokenCredential` for leveraging Azure Active Directory (AAD) backed authentication. When using AAD backed authentication, ensure to [assign roles and permissions](service-bus-managed-service-identity.md#assigning-azure-roles-for-access-rights) to the identity as needed.
+> Java applications that use the JMS 2.0 API can connect to Azure Service Bus by using the connection string, or by using a `TokenCredential` for Azure Active Directory (Azure AD) backed authentication. When you use Azure AD backed authentication, be sure to [assign roles and permissions](service-bus-managed-service-identity.md#azure-built-in-roles-for-azure-service-bus) to the identity as needed.
# [System Assigned Managed Identity](#tab/system-assigned-managed-identity-backed-authentication)
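The tabbed code samples for this section aren't reproduced in this change summary. As a minimal, non-authoritative sketch, the following assumes the `azure-servicebus-jms` library exposes a `ServiceBusJmsConnectionFactory` constructor that accepts a `TokenCredential`, a namespace host name, and `ServiceBusJmsConnectionFactorySettings`; verify the exact overload against the library's Javadoc. The namespace host and queue name are placeholders.

```java
import com.azure.identity.DefaultAzureCredential;
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.microsoft.azure.servicebus.jms.ServiceBusJmsConnectionFactory;
import com.microsoft.azure.servicebus.jms.ServiceBusJmsConnectionFactorySettings;

import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;

public class JmsAadSketch {
    public static void main(String[] args) {
        // DefaultAzureCredential picks up the managed identity when the app runs in Azure.
        DefaultAzureCredential credential = new DefaultAzureCredentialBuilder().build();

        // "<namespace>.servicebus.windows.net" is a placeholder host name.
        ConnectionFactory factory = new ServiceBusJmsConnectionFactory(
                credential,
                "<namespace>.servicebus.windows.net",
                new ServiceBusJmsConnectionFactorySettings());

        // JMS 2.0 JMSContext combines connection and session handling.
        try (JMSContext context = factory.createContext()) {
            context.createProducer().send(context.createQueue("myqueue"), "Hello");
        }
    }
}
```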
service-bus-messaging Service Bus Dotnet Multi Tier App Using Service Bus Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-dotnet-multi-tier-app-using-service-bus-queues.md
The following sections discuss the code that implements this architecture.
In this tutorial, you'll use Azure Active Directory (Azure AD) authentication to create `ServiceBusClient` and `ServiceBusAdministrationClient` objects. You'll also use `DefaultAzureCredential`; to use it, complete the following steps to test the application locally in a development environment. 1. [Register an application in Azure AD](../active-directory/develop/quickstart-register-app.md).
-1. [Add the application to the `Service Bus Data Owner` role](service-bus-managed-service-identity.md#to-assign-azure-roles-using-the-azure-portal).
+1. [Add the application to the `Service Bus Data Owner` role](../role-based-access-control/role-assignments-portal.md).
1. Set the `AZURE-CLIENT-ID`, `AZURE-TENANT-ID`, and `AZURE-CLIENT-SECRET` environment variables. For instructions, see [this article](/dotnet/api/overview/azure/identity-readme#environment-variables).
+For a list of Service Bus built-in roles, see [Azure built-in roles for Service Bus](service-bus-managed-service-identity.md#azure-built-in-roles-for-azure-service-bus).
## Create a namespace
service-bus-messaging Service Bus Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-managed-service-identity.md
Title: Managed identities for Azure resources with Service Bus description: This article describes how to use managed identities to access Azure Service Bus entities (queues, topics, and subscriptions). Previously updated : 06/23/2022- Last updated : 06/15/2023 # Authenticate a managed identity with Azure Active Directory to access Azure Service Bus resources
-[Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) is a cross-Azure feature that enables you to create a secure identity associated with the deployment under which your application code runs. You can then associate that identity with access-control roles that grant custom permissions for accessing specific Azure resources that your application needs.
+Managed identities for Azure resources provide Azure services with an automatically managed identity in Azure Active Directory. You can use this identity to authenticate to any service such as Azure Service Bus that supports Azure AD authentication, without having credentials in your code. If you aren't familiar with managed identities, see [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) before proceeding to read through this article.
-With managed identities, the Azure platform manages this runtime identity. You do not need to store and protect access keys in your application code or configuration, either for the identity itself, or for the resources you need to access. A Service Bus client app running inside an Azure App Service application or in a virtual machine with enabled managed entities for Azure resources support does not need to handle SAS rules and keys, or any other access tokens. The client app only needs the endpoint address of the Service Bus Messaging namespace. When the app connects, Service Bus binds the managed entity's context to the client in an operation that is shown in an example later in this article. Once it is associated with a managed identity, your Service Bus client can do all authorized operations. Authorization is granted by associating a managed entity with Service Bus roles.
+Here are the high-level steps to use a managed identity to access a Service Bus entity:
-> [!IMPORTANT]
-> You can disable local or SAS key authentication for a Service Bus namespace and allow only Azure Active Directory authentication. For step-by-step instructions, see [Disable local authentication](disable-local-authentication.md).
-
-## Overview
-When a security principal (a user, group, or application) attempts to access a Service Bus entity, the request must be authorized. With Azure AD, access to a resource is a two-step process.
-
- 1. First, the security principalΓÇÖs identity is authenticated, and an OAuth 2.0 token is returned. The resource name to request a token is `https://servicebus.azure.net`.
- 1. Next, the token is passed as part of a request to the Service Bus service to authorize access to the specified resource.
-
-The authentication step requires that an application request contains an OAuth 2.0 access token at runtime. If an application is running within an Azure entity such as an Azure VM, a virtual machine scale set, or an Azure Function app, it can use a managed identity to access the resources.
-
-The authorization step requires that one or more Azure roles be assigned to the security principal. Azure Service Bus provides Azure roles that encompass sets of permissions for Service Bus resources. The roles that are assigned to a security principal determine the permissions that the principal will have. To learn more about assigning Azure roles to Azure Service Bus, see [Azure built-in roles for Azure Service Bus](#azure-built-in-roles-for-azure-service-bus).
-
-Native applications and web applications that make requests to Service Bus can also authorize with Azure AD. This article shows you how to request an access token and use it to authorize requests for Service Bus resources.
--
-## Assigning Azure roles for access rights
-Azure Active Directory (Azure AD) authorizes access rights to secured resources through [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md). Azure Service Bus defines a set of Azure built-in roles that encompass common sets of permissions used to access Service Bus entities and you can also define custom roles for accessing the data.
-
-When an Azure role is assigned to an Azure AD security principal, Azure grants access to those resources for that security principal. Access can be scoped to the level of subscription, the resource group, or the Service Bus namespace. An Azure AD security principal may be a user, a group, an application service principal, or a managed identity for Azure resources.
+1. Enable managed identity for your client app or environment. For example, enable managed identity for your Azure App Service app, Azure Functions app, or a virtual machine in which your app is running. Here are the articles that help you with this step:
+ - [Configure managed identities for App Service and Azure Functions](../app-service/overview-managed-identity.md)
+ - [Configure managed identities for Azure resources on a VM](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md)
+1. Assign Azure Service Bus Data Owner, Azure Service Bus Data Sender, or Azure Service Bus Data Receiver role to the managed identity at the appropriate scope (Azure subscription, resource group, Service Bus namespace, or Service Bus queue or topic). For instructions to assign a role to a managed identity, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+1. In your application, use the managed identity and the endpoint to Service Bus namespace to connect to the namespace. For example, in .NET, you use the [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient.-ctor#azure-messaging-servicebus-servicebusclient-ctor(system-string-azure-core-tokencredential)) constructor that takes `TokenCredential` and `fullyQualifiedNamespace` (a string, for example: `cotosons.servicebus.windows.net`) parameters to connect to Service Bus using the managed identity. You pass in [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential), which derives from `TokenCredential` and uses the managed identity.
+ > [!IMPORTANT]
+ > You can disable local or SAS key authentication for a Service Bus namespace and allow only Azure Active Directory authentication. For step-by-step instructions, see [Disable local authentication](disable-local-authentication.md).
+
## Azure built-in roles for Azure Service Bus
-For Azure Service Bus, the management of namespaces and all related resources through the Azure portal and the Azure resource management API is already protected using the Azure RBAC model. Azure provides the below Azure built-in roles for authorizing access to a Service Bus namespace:
+Azure Active Directory (Azure AD) authorizes access to secured resources through [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md). Azure Service Bus defines a set of Azure built-in roles that encompass common sets of permissions used to access Service Bus entities. You can also define custom roles for accessing the data.
+
+Azure provides the following Azure built-in roles for authorizing access to a Service Bus namespace:
- [Azure Service Bus Data Owner](../role-based-access-control/built-in-roles.md#azure-service-bus-data-owner): Use this role to allow full access to Service Bus namespace and its entities (queues, topics, subscriptions, and filters) - [Azure Service Bus Data Sender](../role-based-access-control/built-in-roles.md#azure-service-bus-data-sender): Use this role to allow sending messages to Service Bus queues and topics. - [Azure Service Bus Data Receiver](../role-based-access-control/built-in-roles.md#azure-service-bus-data-receiver): Use this role to allow receiving messages from Service Bus queues and subscriptions.
+To assign a role to a managed identity in the Azure portal, use the **Access control (IAM)** page. Navigate to this page by selecting **Access control (IAM)** on the **Service Bus Namespace** page or **Service Bus queue** page, or **Service Bus topic** page. For step-by-step instructions for assigning a role, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+ ## Resource scope
-Before you assign an Azure role to a security principal, determine the scope of access that the security principal should have. Best practices dictate that it's always best to grant only the narrowest possible scope.
+Before you assign an Azure role to a managed identity, determine the scope of access that the managed identity should have. Best practices dictate that it's always best to grant only the narrowest possible scope.
The following list describes the levels at which you can scope access to Service Bus resources, starting with the narrowest scope: -- **Queue**, **topic**, or **subscription**: Role assignment applies to the specific Service Bus entity. Currently, the Azure portal doesn't support assigning users/groups/managed identities to Service Bus Azure roles at the subscription level. Here's an example of using the Azure CLI command: [az-role-assignment-create](/cli/azure/role/assignment?#az-role-assignment-create) to assign an identity to a Service Bus Azure role: -
- ```azurecli
- az role assignment create \
- --role $service_bus_role \
- --assignee $assignee_id \
- --scope /subscriptions/$subscription_id/resourceGroups/$resource_group/providers/Microsoft.ServiceBus/namespaces/$service_bus_namespace/topics/$service_bus_topic/subscriptions/$service_bus_subscription
- ```
+- **Queue**, **topic**, or **subscription**: Role assignment applies to the specific Service Bus entity.
- **Service Bus namespace**: Role assignment spans the entire topology of Service Bus under the namespace and to the consumer group associated with it. - **Resource group**: Role assignment applies to all the Service Bus resources under the resource group. - **Subscription**: Role assignment applies to all the Service Bus resources in all of the resource groups in the subscription.
-> [!NOTE]
-> Keep in mind that Azure role assignments may take up to five minutes to propagate.
-
-For more information about how built-in roles are defined, see [Understand role definitions](../role-based-access-control/role-definitions.md#control-and-data-actions). For information about creating Azure custom roles, see [Azure custom roles](../role-based-access-control/custom-roles.md).
-
-## Enable managed identities on a VM
-Before you can use managed identities for Azure Resources to authorize Service Bus resources from your VM, you must first enable managed identities for Azure Resources on the VM. To learn how to enable managed identities for Azure Resources, see one of these articles:
+ > [!NOTE]
+ > Keep in mind that Azure role assignments may take up to five minutes to propagate.
-- [Azure portal](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md)-- [Azure PowerShell](../active-directory/managed-identities-azure-resources/qs-configure-powershell-windows-vm.md)-- [Azure CLI](../active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md)-- [Azure Resource Manager template](../active-directory/managed-identities-azure-resources/qs-configure-template-windows-vm.md)-- [Azure Resource Manager client libraries](../active-directory/managed-identities-azure-resources/qs-configure-sdk-windows-vm.md)
+Currently, the Azure portal doesn't support assigning users, groups, or managed identities to Service Bus Azure roles at the topic's subscription level. Here's an example of using the Azure CLI command: [az-role-assignment-create](/cli/azure/role/assignment?#az-role-assignment-create) to assign an identity to a Service Bus Azure role:
-## Grant permissions to a managed identity in Azure AD
-To authorize a request to the Service Bus service from a managed identity in your application, the managed identity needs to be added to a Service Bus RBAC role (Azure Service Bus Data Owner, Azure Service Bus Data Sender, Azure Service Bus Data Receiver) at the appropriate scope (subscription, resource group, or namespace). When the Azure role is assigned to a managed identity, the managed identity is granted access to Service Bus entities at the specified scope. For descriptions of Service Bus roles, see the [Azure built-in roles for Azure Service Bus](#azure-built-in-roles-for-azure-service-bus) section. For more information about assigning Azure roles, see [Authenticate and authorize with Azure Active Directory for access to Service Bus resources](authenticate-application.md#azure-built-in-roles-for-azure-service-bus).
+```azurecli
+az role assignment create \
+ --role $service_bus_role \
+ --assignee $assignee_id \
+ --scope /subscriptions/$subscription_id/resourceGroups/$resource_group/providers/Microsoft.ServiceBus/namespaces/$service_bus_namespace/topics/$service_bus_topic/subscriptions/$service_bus_subscription
+```
-## Use Service Bus with managed identities for Azure resources
-To use Service Bus with managed identities, you need to assign the identity the role and the appropriate scope. The procedure in this section uses a simple application that runs under a managed identity and accesses Service Bus resources.
+For more information about how built-in roles are defined, see [Understand role definitions](../role-based-access-control/role-definitions.md#control-and-data-actions). For information about creating Azure custom roles, see [Azure custom roles](../role-based-access-control/custom-roles.md).
-Here we're using a sample web application hosted in [Azure App Service](https://azure.microsoft.com/services/app-service/). For step-by-step instructions for creating a web application, see [Create an ASP.NET Core web app in Azure](../app-service/quickstart-dotnetcore.md)
+## Using SDKs
-Once the application is created, follow these steps:
+In .NET, the [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object is initialized by using a constructor that takes a fully qualified namespace and a `TokenCredential`. The `DefaultAzureCredential` derives from `TokenCredential`, which automatically uses the managed identity configured for the app. The flow of the managed identity context to Service Bus and the authorization handshake are automatically handled by the token credential. It's a simpler model than using SAS.
-1. Go to **Settings** and select **Identity**.
-1. Select the **Status** to be **On**.
-1. Select **Save** to save the setting.
+```csharp
+var client = new ServiceBusClient("cotosons.servicebus.windows.net", new DefaultAzureCredential());
+```
- ![Managed identity for a web app](./media/service-bus-managed-service-identity/identity-web-app.png)
+You send and receive messages as usual using [ServiceBusSender](/dotnet/api/azure.messaging.servicebus.servicebussender) and [ServiceBusReceiver](/dotnet/api/azure.messaging.servicebus.servicebusreceiver) or [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor).
-Once you've enabled this setting, a new service identity is created in your Azure Active Directory (Azure AD) and configured into the App Service host.
+For complete step-by-step instructions to send and receive messages using a managed identity, see the following quickstarts. These quickstarts have the code to use a service principal to send and receive messages, but the code is the same for using a managed identity.
-### To Assign Azure roles using the Azure portal
-Assign one of the [Service Bus roles](#azure-built-in-roles-for-azure-service-bus) to the managed service identity at the desired scope (Service Bus namespace, resource group, subscription). For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+- [.NET](service-bus-dotnet-get-started-with-queues.md).
+- [Java](service-bus-java-how-to-use-queues.md).
+- [JavaScript](service-bus-nodejs-how-to-use-queues.md)
+- [Python](service-bus-python-how-to-use-queues.md)
> [!NOTE]
-> For a list of services that support managed identities, see [Services that support managed identities for Azure resources](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md).
-
-### Run the app
-Now, modify the default page of the ASP.NET application you created. You can use the web application code from [this GitHub repository](https://github.com/Azure-Samples/app-service-msi-servicebus-dotnet).
-
-The Default.aspx page is your landing page. The code can be found in the Default.aspx.cs file. The result is a minimal web application with a few entry fields, and with **send** and **receive** buttons that connect to Service Bus to either send or receive messages.
+> The managed identity works only inside the Azure environment, on App services, Azure VMs, and scale sets. For .NET applications, the Microsoft.Azure.Services.AppAuthentication library, which is used by the Service Bus NuGet package, provides an abstraction over this protocol and supports a local development experience. This library also allows you to test your code locally on your development machine, using your user account from Visual Studio, Azure CLI 2.0 or Active Directory Integrated Authentication. For more on local development options with this library, see [Service-to-service authentication to Azure Key Vault using .NET](/dotnet/api/overview/azure/service-to-service-authentication).
-Note how the [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object is initialized by using a constructor that takes a TokenCredential. The DefaultAzureCredential derives from TokenCredential and can be passed here. As such, there are no secrets to retain and use. The flow of the managed identity context to Service Bus and the authorization handshake are automatically handled by the token credential. It is a simpler model than using SAS.
-
-After you make these changes, publish and run the application. You can obtain the correct publishing data easily by downloading and then importing a publishing profile in Visual Studio:
-
-![Get publish profile](./media/service-bus-managed-service-identity/msi3.png)
-
-To send or receive messages, enter the name of the namespace and the name of the entity you created. Then, click either **send** or **receive**.
--
-> [!NOTE]
-> - The managed identity works only inside the Azure environment, on App services, Azure VMs, and scale sets. For .NET applications, the Microsoft.Azure.Services.AppAuthentication library, which is used by the Service Bus NuGet package, provides an abstraction over this protocol and supports a local development experience. This library also allows you to test your code locally on your development machine, using your user account from Visual Studio, Azure CLI 2.0 or Active Directory Integrated Authentication. For more on local development options with this library, see [Service-to-service authentication to Azure Key Vault using .NET](/dotnet/api/overview/azure/service-to-service-authentication).
## Next steps
+See [this .NET web application sample on GitHub](https://github.com/Azure-Samples/app-service-msi-servicebus-dotnet/tree/master), which uses a managed identity to connect to Service Bus to send and receive messages. Add the identity of the app service to the **Azure Service Bus Data Owner** role.
-To learn more about Service Bus messaging, see the following topics:
-
-* [Service Bus queues, topics, and subscriptions](service-bus-queues-topics-subscriptions.md)
-* [Get started with Service Bus queues](service-bus-dotnet-get-started-with-queues.md)
-* [How to use Service Bus topics and subscriptions](service-bus-dotnet-how-to-use-topics-subscriptions.md)
service-bus-messaging Service Bus Messaging Sql Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-messaging-sql-filter.md
Last updated 05/31/2022
# Subscription Rule SQL Filter Syntax
-A *SQL filter* is one of the available filter types for Service Bus topic subscriptions. It's a text expression that leans on a subset of the SQL-92 standard. Filter expressions are used with the `sqlExpression` element of the 'sqlFilter' property of a Service Bus `Rule` in an [Azure Resource Manager template](service-bus-resource-manager-namespace-topic-with-rule.md), or the Azure CLI `az servicebus topic subscription rule create` command's [`--filter-sql-expression`](/cli/azure/servicebus/topic/subscription/rule#az-servicebus-topic-subscription-rule-create) argument, and several SDK functions that allow managing subscription rules.
+A *SQL filter* is one of the available filter types for Service Bus topic subscriptions. It's a text expression that leans on a subset of the SQL-92 standard. Filter expressions are used with the `sqlExpression` element of the 'sqlFilter' property of a Service Bus `Rule` in an [Azure Resource Manager template](service-bus-resource-manager-namespace-topic-with-rule.md), or the Azure CLI `az servicebus topic subscription rule create` command's [`--filter-sql-expression`](/cli/azure/servicebus/topic/subscription/rule#az-servicebus-topic-subscription-rule-create) argument, and several SDK functions that allow managing subscription rules. The allowed expressions are shown below.
Service Bus Premium also supports the [JMS SQL message selector syntax](https://docs.oracle.com/javaee/7/api/javax/jms/Message.html) through the JMS 2.0 API.
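As an illustrative sketch only (not taken from the article), the following Java code shows how a SQL-style expression can be supplied as a JMS 2.0 message selector when creating a consumer against a Premium namespace. The `ConnectionFactory`, topic name, subscription name, and the `Priority` and `Color` application properties are all assumed placeholders.

```java
import javax.jms.ConnectionFactory;
import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import javax.jms.Topic;

public class SelectorSketch {
    public static void receiveUrgentOrders(ConnectionFactory factory) {
        try (JMSContext context = factory.createContext()) {
            Topic topic = context.createTopic("orders"); // placeholder topic name

            // The selector uses the same SQL-92-style expression syntax; only messages
            // whose application properties match the expression are delivered.
            JMSConsumer consumer = context.createSharedDurableConsumer(
                    topic, "urgent-orders", "Priority > 2 AND Color = 'Red'");

            String body = consumer.receiveBody(String.class); // blocks until a message arrives
            System.out.println("Received: " + body);
        }
    }
}
```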
spring-apps Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart.md
Deploying the application can take a few minutes.
::: zone-end ## [Azure CLI](#tab/Azure-CLI)
Use the following steps to create an Azure Spring Apps service instance.
--location eastus ``` -- 1. Use the following command to create an Azure Spring Apps service instance: ```azurecli-interactive
Use the following steps to create an Azure Spring Apps service instance.
--name <Azure-Spring-Apps-instance-name> ```
+1. Select **Y** to install the Azure Spring Apps extension and run it.
+
+## Create an app in your Azure Spring Apps instance
+
+An *App* is an abstraction of one business app. For more information, see [App and deployment in Azure Spring Apps](concept-understand-app-and-deployment.md). Apps run in an Azure Spring Apps service instance, as shown in the following diagram.
++
+Use the following command to specify the app name on Azure Spring Apps as `hellospring`:
+
+```azurecli-interactive
+az spring app create \
+ --resource-group <name-of-resource-group> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name hellospring \
+ --assign-endpoint true
+```
+
+## Clone and build the Spring Boot sample project
+
+Use the following steps to clone the Spring Boot sample project.
+
+1. Use the following command to clone the [Spring Boot sample project](https://github.com/spring-guides/gs-spring-boot.git) from GitHub.
+
+ ```azurecli-interactive
+ git clone -b boot-2.7 https://github.com/spring-guides/gs-spring-boot.git
+ ```
+
+1. Use the following command to move to the project folder:
+
+ ```azurecli-interactive
+ cd gs-spring-boot/complete
+ ```
+
+1. Use the following [Maven](https://maven.apache.org/what-is-maven.html) command to build the project.
+
+ ```azurecli-interactive
+ mvn clean package -DskipTests
+ ```
+
+## Deploy the local app to Azure Spring Apps
+
+Use the following command to deploy the *.jar* file for the app (*target/spring-boot-complete-0.0.1-SNAPSHOT.jar* on Windows):
+
+```azurecli-interactive
+az spring app deploy \
+ --resource-group <name-of-resource-group> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name hellospring \
+ --artifact-path target/spring-boot-complete-0.0.1-SNAPSHOT.jar
+```
+
+Deploying the application can take a few minutes. After deployment, you can access the app at `https://<service-instance-name>-hellospring.azuremicroservices.io/`.
+
+## [IntelliJ](#tab/IntelliJ)
+
+- [IntelliJ IDEA](https://www.jetbrains.com/idea/).
+- [Azure Toolkit for IntelliJ](/azure/developer/java/toolkit-for-intellij/install-toolkit).
+
+## Generate a Spring project
+
+Use the following steps to create the project:
+
+1. Use [Spring Initializr](https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.6.10&packaging=jar&jvmVersion=11&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-config-client) to generate a sample project with recommended dependencies for Azure Spring Apps. The following URL provides default settings for you.
+
+ ```url
+ https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.6.10&packaging=jar&jvmVersion=11&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-config-client
+ ```
+
+ The following image shows the recommended Initializr settings for the `hellospring` sample project.
+
+ This example uses Java version 11. To use a different Java version, change the Java version setting under **Project Metadata**.
+
+ :::image type="content" source="media/quickstart/initializr-page.png" alt-text="Screenshot of Spring Initializr settings with Java options highlighted." lightbox="media/quickstart/initializr-page.png":::
+
+1. When all dependencies are set, select **Generate**.
+1. Download and unpack the package, and then create a web controller for your web application by adding the file *src/main/java/com/example/hellospring/HelloController.java* with the following contents:
+
+ ```java
+ package com.example.hellospring;
+
+ import org.springframework.web.bind.annotation.RestController;
+ import org.springframework.web.bind.annotation.RequestMapping;
+
+ @RestController
+ public class HelloController {
+
+ @RequestMapping("/")
+ public String index() {
+ return "Greetings from Azure Spring Apps!";
+ }
+ }
+ ```
+
+## Create an instance of Azure Spring Apps
+
+Use the following steps to create an instance of Azure Spring Apps using the Azure portal.
+
+1. In a new tab, open the [Azure portal](https://portal.azure.com/).
+
+1. From the top search box, search for **Azure Spring Apps**.
+
+1. Select **Azure Spring Apps** from the results.
+
+ :::image type="content" source="media/quickstart/spring-apps-start.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps service in search results." lightbox="media/quickstart/spring-apps-start.png":::
+
+1. On the Azure Spring Apps page, select **Create**.
+
+ :::image type="content" source="media/quickstart/spring-apps-create.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps resource with Create button highlighted." lightbox="media/quickstart/spring-apps-create.png":::
+
+1. Fill out the form on the Azure Spring Apps **Create** page. Consider the following guidelines:
+
+ - **Subscription**: Select the subscription you want to be billed for this resource.
+ - **Resource group**: Creating new resource groups for new resources is a best practice.
+ - **Name**: Specify the service instance name.
+ - **Plan**: Select the *Standard* plan for your service instance.
+ - **Region**: Select the region for your service instance.
+ - **Zone Redundant**: Select the zone redundant checkbox to create your Azure Spring Apps service in an Azure availability zone.
+
+ :::image type="content" source="media/quickstart/portal-start.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps Create page." lightbox="media/quickstart/portal-start.png":::
+
+1. Select **Review and Create** to review your selections. Select **Create** to provision the Azure Spring Apps instance.
+
+## Import the project
+
+Use the following steps to import the project.
+
+1. Open IntelliJ IDEA, and then select **Open**.
+1. In the **Open File or Project** dialog box, select the *hellospring* folder.
+
+ :::image type="content" source="media/quickstart/intellij-new-project.png" alt-text="Screenshot of IntelliJ IDEA showing Open File or Project dialog box." lightbox="media/quickstart/intellij-new-project.png":::
+
+## Build and deploy your app
+
+> [!NOTE]
+> To run the project locally, add `spring.config.import=optional:configserver:` to the project's *application.properties* file.
+
+Use the following steps to build and deploy your app.
+
+1. If you haven't already installed the Azure Toolkit for IntelliJ, follow the steps in [Install the Azure Toolkit for IntelliJ](/azure/developer/java/toolkit-for-intellij/install-toolkit).
+
+1. Right-click your project in the IntelliJ Project window, and then select **Azure** -> **Deploy to Azure Spring Apps**.
+
+ :::image type="content" source="media/quickstart/intellij-deploy-azure.png" alt-text="Screenshot of IntelliJ IDEA menu showing Deploy to Azure Spring Apps option." lightbox="media/quickstart/intellij-deploy-azure.png":::
+
+1. Accept the name for the app in the **Name** field. **Name** refers to the configuration, not the app name. You don't usually need to change it.
+1. In the **Artifact** textbox, select **Maven:com.example:hellospring-0.0.1-SNAPSHOT**.
+1. In the **Subscription** textbox, verify that your subscription is correct.
+1. In the **Service** textbox, select the instance of Azure Spring Apps that you created in the [Provision an instance of Azure Spring Apps](#provision-an-instance-of-azure-spring-apps-1) section.
+1. In the **App** textbox, select the plus sign (**+**) to create a new app.
+
+ :::image type="content" source="media/quickstart/intellij-create-new-app.png" alt-text="Screenshot of IntelliJ IDEA showing Deploy Azure Spring Apps dialog box." lightbox="media/quickstart/intellij-create-new-app.png":::
+
+1. In the **App name:** textbox under **App Basics**, enter *hellospring*, and then select the **More settings** check box.
+1. Select the **Enable** button next to **Public endpoint**. The button changes to **Disable \<to be enabled\>**.
+1. If you're using Java 11, select **Java 11** for the **Runtime** option.
+1. Select **OK**.
+
+ :::image type="content" source="media/quickstart/intellij-more-settings.png" alt-text="Screenshot of IntelliJ IDEA Create Azure Spring Apps dialog box with public endpoint Disable button highlighted." lightbox="media/quickstart/intellij-more-settings.png":::
+
+1. Under **Before launch**, select **Run Maven Goal 'hellospring:package'**, and then select the pencil icon to edit the command line.
+
+ :::image type="content" source="media/quickstart/intellij-edit-maven-goal.png" alt-text="Screenshot of IntelliJ IDEA Create Azure Spring Apps dialog box with Maven Goal edit button highlighted." lightbox="media/quickstart/intellij-edit-maven-goal.png":::
+
+1. In the **Command line** textbox, enter *-DskipTests* after *package*, and then select **OK**.
+
+ :::image type="content" source="media/quickstart/intellij-maven-goal-command-line.png" alt-text="Screenshot of IntelliJ IDEA Select Maven Goal dialog box with Command Line value highlighted." lightbox="media/quickstart/intellij-maven-goal-command-line.png":::
+
+1. To start the deployment, select the **Run** button at the bottom of the **Deploy Azure Spring Apps app** dialog box. The plug-in runs the command `mvn package -DskipTests` on the `hellospring` app and deploys the *.jar* file generated by the `package` command.
+
+Deploying the application can take a few minutes. After deployment, you can access the app at `https://<service-instance-name>-hellospring.azuremicroservices.io/`.
+
+## [Visual Studio Code](#tab/visual-studio-code)
+
+> [!NOTE]
+> To deploy a Spring Boot web app to Azure Spring Apps by using Visual Studio Code, follow the steps in [Java on Azure Spring Apps](https://code.visualstudio.com/docs/java/java-spring-apps).
+++ ::: zone-end ::: zone pivot="sc-enterprise"
-1. Accept the legal terms and privacy statements for the Enterprise plan.
+## [Azure CLI](#tab/Azure-CLI)
+
+- [Apache Maven](https://maven.apache.org/download.cgi)
+- [Azure CLI](/cli/azure/install-azure-cli). Install the Azure Spring Apps extension with the following command: `az extension add --name spring`
+
+## Provision an instance of Azure Spring Apps
+
+Use the following steps to create an Azure Spring Apps service instance.
+
+1. Select **Open Cloudshell** and sign in to your Azure account in [Azure Cloud Shell](../cloud-shell/overview.md).
+
+ ```azurecli-interactive
+ az account show
+ ```
+
+1. Azure Cloud Shell workspaces are temporary. When first started, the shell prompts you to associate an Azure Storage instance with your subscription to persist files across sessions. For more information, see [Introduction to Azure Storage](../storage/common/storage-introduction.md).
+
+ :::image type="content" source="media/quickstart/azure-storage-subscription.png" alt-text="Screenshot of an Azure portal alert that no storage is mounted in the Azure Cloud Shell." lightbox="media/quickstart/azure-storage-subscription.png":::
+
+1. After you sign in successfully, use the following command to display a list of your subscriptions:
+
+ ```azurecli-interactive
+ az account list --output table
+ ```
+
+1. Use the following command to set your default subscription:
+
+ ```azurecli-interactive
+ az account set --subscription <subscription-ID>
+ ```
+
+1. Use the following command to create a resource group:
+
+ ```azurecli-interactive
+ az group create \
+ --resource-group <name-of-resource-group> \
+ --location eastus
+ ```
- > [!NOTE]
- > This step is necessary only if your subscription has never been used to create an Enterprise plan instance of Azure Spring Apps.
+1. Use the following commands to accept the legal terms and privacy statements for the Enterprise plan. This step is necessary only if your subscription has never been used to create an Enterprise plan instance of Azure Spring Apps.
```azurecli-interactive az provider register --namespace Microsoft.SaaS
Use the following steps to create an Azure Spring Apps service instance.
--plan asa-ent-hr-mtr ```
-1. Use the following command to create an Azure Spring Apps Enterprise plan service instance:
+1. Use the following command to create an Azure Spring Apps service instance:
```azurecli-interactive az spring create \
Use the following steps to create an Azure Spring Apps service instance.
    --sku Enterprise
    ```

## Create an app in your Azure Spring Apps instance

An *App* is an abstraction of one business app. For more information, see [App and deployment in Azure Spring Apps](concept-understand-app-and-deployment.md). Apps run in an Azure Spring Apps service instance, as shown in the following diagram. Use the following command to specify the app name on Azure Spring Apps as `hellospring`:
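The app-creation command itself isn't reproduced in this change log. As a minimal sketch, assuming the resource group and service instance placeholders used in the earlier steps and that you want a public endpoint, it might look like the following:

```azurecli-interactive
# Create the app and expose a public endpoint (placeholder names are illustrative)
az spring app create \
    --resource-group <name-of-resource-group> \
    --service <service-instance-name> \
    --name hellospring \
    --assign-endpoint true
```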
Use the following steps to create an instance of Azure Spring Apps using the Azu
1. Fill out the form on the Azure Spring Apps **Create** page. Consider the following guidelines:
- - **Subscription**: Select the subscription you want to be billed for this resource.
- - **Resource group**: Creating new resource groups for new resources is a best practice.
- - **Service Name**: Specify the service instance name. You use this name later in this article where the *\<Azure-Spring-Apps-instance-name\>* placeholder appears. The name must be between 4 and 32 characters long and can contain only lowercase letters, numbers, and hyphens. The first character of the service name must be a letter and the last character must be either a letter or a number.
- - **Region**: Select the region for your service instance.
+ - **Subscription**: Select the subscription you want to be billed for this resource.
+ - **Resource group**: Creating new resource groups for new resources is a best practice.
+ - **Name**: Specify the service instance name.
+ - **Plan**: Select the *Enterprise* plan for your service instance.
+ - **Region**: Select the region for your service instance.
+ - **Zone Redundant**: Select the zone redundant checkbox to create your Azure Spring Apps service in an Azure availability zone.
+ - **Plan**: Pay as you go with Azure Spring Apps.
+ - **Terms**: You must select the agreement checkbox associated with the [Marketplace offering](https://aka.ms/ascmpoffer).
- :::image type="content" source="media/quickstart/portal-start.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps Create page." lightbox="media/quickstart/portal-start.png":::
+ :::image type="content" source="media/quickstart/enterprise-plan-creation.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps Create with enterprise plan page." lightbox="media/quickstart/enterprise-plan-creation.png":::
-1. Select **Review and create**.
+1. Select **Review and Create** to review your selections. Select **Create** to provision the Azure Spring Apps instance.
## Import the project
storage Storage Blob Upload Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-java.md
Previously updated : 04/21/2023 Last updated : 06/16/2023
The following example uploads `BinaryData` to a block blob using a `BlobClient`
:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobUpload.java" id="Snippet_UploadBlobData":::
-## Upload a block blob with index tags
+## Upload a block blob with configuration options
-The following example uploads a block blob with index tags set using `BlobUploadFromFileOptions`:
+You can define client library configuration options when uploading a blob. These options can be tuned to improve performance, enhance reliability, and optimize costs. The following code examples show how to use [BlobUploadFromFileOptions](/java/api/com.azure.storage.blob.options.blobuploadfromfileoptions) to define configuration options when calling an upload method. If you're not uploading from a file, you can set similar options using [BlobParallelUploadOptions](/java/api/com.azure.storage.blob.options.blobparalleluploadoptions) on an upload method.
+
+### Specify data transfer options on upload
+
+You can configure values in [ParallelTransferOptions](/java/api/com.azure.storage.blob.models.paralleltransferoptions) to improve performance for data transfer operations. The following table lists the methods you can use to set these options, along with a description:
+
+| Method | Description |
+| | |
+| [`setBlockSizeLong(Long blockSize)`](/java/api/com.azure.storage.blob.models.paralleltransferoptions#com-azure-storage-blob-models-paralleltransferoptions-setblocksizelong(java-lang-long)) | Sets the block size to transfer for each request. For uploads, the parameter `blockSize` is the size of each block that's staged. This value also determines the number of requests that need to be made. If `blockSize` is large, the upload makes fewer network calls, but each individual call sends more data. |
+| [`setMaxConcurrency(Integer maxConcurrency)`](/java/api/com.azure.storage.blob.models.paralleltransferoptions#com-azure-storage-blob-models-paralleltransferoptions-setmaxconcurrency(java-lang-integer)) | The parameter `maxConcurrency` is the maximum number of parallel requests that are issued at any given time as a part of a single parallel transfer. This value applies per API call. |
+| [`setMaxSingleUploadSizeLong(Long maxSingleUploadSize)`](/java/api/com.azure.storage.blob.models.paralleltransferoptions#com-azure-storage-blob-models-paralleltransferoptions-setmaxsingleuploadsizelong(java-lang-long)) | If the size of the data is less than or equal to this value, it's uploaded in a single put rather than broken up into chunks. If the data is uploaded in a single shot, the block size is ignored. |
+
+The following code example shows how to set values for [ParallelTransferOptions](/java/api/com.azure.storage.blob.models.paralleltransferoptions) and include the options as part of a [BlobUploadFromFileOptions](/java/api/com.azure.storage.blob.options.blobuploadfromfileoptions) instance. The values provided in this sample aren't intended to be a recommendation. To properly tune these values, you need to consider the specific needs of your app.
++
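The article's code snippet is elided in this change log. The following is a rough sketch rather than the article's snippet; the helper method name and the tuning values are illustrative, and it assumes an existing `BlobClient` for the target blob:

```java
import com.azure.core.util.Context;
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.models.ParallelTransferOptions;
import com.azure.storage.blob.options.BlobUploadFromFileOptions;

import java.time.Duration;

public class TransferOptionsUploadSample {
    // Uploads a local file using custom data transfer options (values shown are examples only)
    public static void uploadWithTransferOptions(BlobClient blobClient, String localFilePath) {
        ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions()
                .setBlockSizeLong(4L * 1024 * 1024)             // stage 4-MiB blocks
                .setMaxConcurrency(2)                           // up to 2 parallel requests per call
                .setMaxSingleUploadSizeLong(8L * 1024 * 1024);  // single-shot uploads up to 8 MiB

        BlobUploadFromFileOptions options = new BlobUploadFromFileOptions(localFilePath)
                .setParallelTransferOptions(parallelTransferOptions);

        blobClient.uploadFromFileWithResponse(options, Duration.ofSeconds(60), Context.NONE);
    }
}
```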
+### Upload a block blob with index tags
+
+Blob index tags categorize data in your storage account using key-value tag attributes. These tags are automatically indexed and exposed as a searchable multi-dimensional index to easily find data.
+
+The following example uploads a block blob with index tags set using [BlobUploadFromFileOptions](/java/api/com.azure.storage.blob.options.blobuploadfromfileoptions):
:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobUpload.java" id="Snippet_UploadBlobTags":::
+### Set a blob's access tier on upload
+
+You can set a blob's access tier on upload by using the [BlobUploadFromFileOptions](/java/api/com.azure.storage.blob.options.blobuploadfromfileoptions) class. The following code example shows how to set the access tier when uploading a blob:
++
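The snippet referenced above isn't included in this change log. A minimal sketch, assuming an existing `BlobClient` and a local file path (the Cool tier is used here only as an example), might look like this:

```java
import com.azure.core.util.Context;
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.models.AccessTier;
import com.azure.storage.blob.options.BlobUploadFromFileOptions;

import java.time.Duration;

public class AccessTierUploadSample {
    // Uploads a local file and places the resulting block blob in the Cool access tier
    public static void uploadWithAccessTier(BlobClient blobClient, String localFilePath) {
        BlobUploadFromFileOptions options = new BlobUploadFromFileOptions(localFilePath)
                .setTier(AccessTier.COOL);

        blobClient.uploadFromFileWithResponse(options, Duration.ofSeconds(60), Context.NONE);
    }
}
```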
+Setting the access tier is only allowed for block blobs. You can set the access tier for a block blob to `Hot`, `Cool`, `Cold`, or `Archive`.
+
+To learn more about access tiers, see [Access tiers overview](access-tiers-overview.md).
+
+## Upload a block blob by staging blocks and committing
+
+You can have greater control over how to divide uploads into blocks by manually staging individual blocks of data. When all of the blocks that make up a blob are staged, you can commit them to Blob Storage. You can use this approach to enhance performance by uploading blocks in parallel.
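The article's snippet isn't reproduced in this change log. The following is a simplified sketch of the stage-and-commit pattern (the helper name and block size are illustrative, and blocks are staged sequentially here for clarity, though they can be staged in parallel):

```java
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.specialized.BlockBlobClient;

import java.io.ByteArrayInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Base64;
import java.util.List;
import java.util.UUID;

public class StagedBlockUploadSample {
    // Splits a local file into fixed-size blocks, stages each block, then commits the block list
    public static void uploadInBlocks(BlobClient blobClient, String localFilePath, int blockSize) throws IOException {
        BlockBlobClient blockBlobClient = blobClient.getBlockBlobClient();
        List<String> blockIds = new ArrayList<>();

        try (FileInputStream fileStream = new FileInputStream(localFilePath)) {
            byte[] buffer = new byte[blockSize];
            int bytesRead;
            while ((bytesRead = fileStream.read(buffer)) != -1) {
                // Block IDs must be Base64-encoded and the same length for every block in the blob
                String blockId = Base64.getEncoder()
                        .encodeToString(UUID.randomUUID().toString().getBytes(StandardCharsets.UTF_8));
                blockBlobClient.stageBlock(blockId, new ByteArrayInputStream(buffer, 0, bytesRead), bytesRead);
                blockIds.add(blockId);
            }
        }

        // Commit the staged blocks; pass true to overwrite the blob if it already exists
        blockBlobClient.commitBlockList(blockIds, true);
    }
}
```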
++

## Resources

To learn more about uploading blobs using the Azure Blob Storage client library for Java, see the following resources.
storage Storage Blob Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload.md
description: Learn how to upload a blob to your Azure Storage account using the
Previously updated : 06/13/2023 Last updated : 06/16/2023
You can open a stream in Blob Storage and write to it. The following example cre
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/BlobDevGuideBlobs/UploadBlob.cs" id="Snippet_UploadToStream":::
-## Upload a block blob by staging blocks and committing
-
-You can have greater control over how to divide uploads into blocks by manually staging individual blocks of data. When all of the blocks that make up a blob are staged, you can commit them to Blob Storage. You can use this approach to enhance performance by uploading blocks in parallel.
--

## Upload a block blob with configuration options

You can define client library configuration options when uploading a blob. These options can be tuned to improve performance, enhance reliability, and optimize costs. The following code examples show how to use [BlobUploadOptions](/dotnet/api/azure.storage.blobs.models.blobuploadoptions) to define configuration options when calling an upload method.
Setting the access tier is only allowed for block blobs. You can set the access
To learn more about access tiers, see [Access tiers overview](access-tiers-overview.md).
+## Upload a block blob by staging blocks and committing
+
+You can have greater control over how to divide uploads into blocks by manually staging individual blocks of data. When all of the blocks that make up a blob are staged, you can commit them to Blob Storage. You can use this approach to enhance performance by uploading blocks in parallel.
++

## Resources

To learn more about uploading blobs using the Azure Blob Storage client library for .NET, see the following resources.
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
Previously updated : 06/13/2023 Last updated : 06/16/2023
Resources of some services that are registered in your subscription can access y
The following table lists services that can access your storage account data if the resource instances of those services have the appropriate permission.
-| Service | Resource provider name | Purpose |
-| :-- | :- | :-- |
-| Azure API Management | `Microsoft.ApiManagement/service` | Enables access to storage accounts behind firewalls via policies. [Learn more](../../api-management/authentication-managed-identity-policy.md#use-managed-identity-in-send-request-policy). |
+| Service | Resource provider name | Purpose |
+| : | :-- | :-- |
+| Azure FarmBeats | `Microsoft.AgFoodPlatform/farmBeats` | Enables access to storage accounts. |
+| Azure API Management | `Microsoft.ApiManagement/service` | Enables access to storage accounts behind firewalls via policies. [Learn more](../../api-management/authentication-managed-identity-policy.md#use-managed-identity-in-send-request-policy). |
+| Microsoft Autonomous Systems | `Microsoft.AutonomousSystems/workspaces` | Enables access to storage accounts. |
| Azure Cache for Redis | `Microsoft.Cache/Redis` | Enables access to storage accounts. [Learn more](../../azure-cache-for-redis/cache-managed-identity.md).|
-| Azure Cognitive Search | `Microsoft.Search/searchServices` | Enables access to storage accounts for indexing, processing, and querying. |
-| Azure Cognitive Services | `Microsoft.CognitiveService/accounts` | Enables access to storage accounts. [Learn more](../..//cognitive-services/cognitive-services-virtual-networks.md).|
-| Azure Container Registry | `Microsoft.ContainerRegistry/registries` | Through the ACR Tasks suite of features, enables access to storage accounts when you're building container images. |
-| Azure Data Factory | `Microsoft.DataFactory/factories` | Enables access to storage accounts through the Data Factory runtime. |
-| Azure Data Share | `Microsoft.DataShare/accounts` | Enables access to storage accounts. |
-| Azure DevTest Labs | `Microsoft.DevTestLab/labs` | Enables access to storage accounts. |
-| Azure Event Grid | `Microsoft.EventGrid/topics` | Enables access to storage accounts. |
-| Azure Healthcare APIs | `Microsoft.HealthcareApis/services` | Enables access to storage accounts. |
-| Azure IoT Central | `Microsoft.IoTCentral/IoTApps` | Enables access to storage accounts. |
-| Azure IoT Hub | `Microsoft.Devices/IotHubs` | Allows data from an IoT hub to be written to Blob Storage. [Learn more](../../iot-hub/virtual-network-support.md#egress-connectivity-from-iot-hub-to-other-azure-resources). |
-| Azure Logic Apps | `Microsoft.Logic/workflows` | Enables logic apps to access storage accounts. [Learn more](../../logic-apps/create-managed-service-identity.md#authenticate-access-with-managed-identity). |
-| Azure Machine Learning | `Microsoft.MachineLearningServices` | Enables authorized Azure Machine Learning workspaces to write experiment output, models, and logs to Blob Storage and read the data. [Learn more](../../machine-learning/how-to-network-security-overview.md#secure-the-workspace-and-associated-resources). |
-| Azure Media Services | `Microsoft.Media/mediaservices` | Enables access to storage accounts. |
-| Azure Migrate | `Microsoft.Migrate/migrateprojects` | Enables access to storage accounts. |
-| Microsoft Purview | `Microsoft.Purview/accounts` | Enables access to storage accounts. |
-| Azure Site Recovery | `Microsoft.RecoveryServices/vaults` | Enables access to storage accounts. |
-| Azure SQL Database | `Microsoft.Sql` | Allows [writing audit data to storage accounts behind a firewall](/azure/azure-sql/database/audit-write-storage-account-behind-vnet-firewall). |
-| Azure Synapse Analytics | `Microsoft.Sql` | Allows import and export of data from specific SQL databases via the `COPY` statement or PolyBase (in a dedicated pool), or the `openrowset` function and external tables in a serverless pool. [Learn more](/azure/azure-sql/database/vnet-service-endpoint-rule-overview). |
-| Azure Stream Analytics | `Microsoft.StreamAnalytics` | Allows data from a streaming job to be written to Blob Storage. [Learn more](../../stream-analytics/blob-output-managed-identity.md). |
-| Azure Synapse Analytics | `Microsoft.Synapse/workspaces` | Enables access to data in Azure Storage. |
+| Azure Cognitive Search | `Microsoft.Search/searchServices` | Enables access to storage accounts for indexing, processing, and querying. |
+| Azure Cognitive Services | `Microsoft.CognitiveService/accounts` | Enables access to storage accounts. [Learn more](../..//cognitive-services/cognitive-services-virtual-networks.md).|
+| Azure Container Registry | `Microsoft.ContainerRegistry/registries`| Through the ACR Tasks suite of features, enables access to storage accounts when you're building container images. |
+| Azure Databricks | `Microsoft.Databricks/accessConnectors` | Enables access to storage accounts. |
+| Azure Data Factory | `Microsoft.DataFactory/factories` | Enables access to storage accounts through the Data Factory runtime. |
+| Azure Backup Vault | `Microsoft.DataProtection/BackupVaults` | Enables access to storage accounts. |
+| Azure Data Share | `Microsoft.DataShare/accounts` | Enables access to storage accounts. |
+| Azure Database for PostgreSQL | `Microsoft.DBForPostgreSQL` | Enables access to storage accounts. |
+| Azure IoT Hub | `Microsoft.Devices/IotHubs` | Allows data from an IoT hub to be written to Blob Storage. [Learn more](../../iot-hub/virtual-network-support.md#egress-connectivity-from-iot-hub-to-other-azure-resources). |
+| Azure DevTest Labs | `Microsoft.DevTestLab/labs` | Enables access to storage accounts. |
+| Azure Event Grid | `Microsoft.EventGrid/domains` | Enables access to storage accounts. |
+| Azure Event Grid | `Microsoft.EventGrid/partnerTopics` | Enables access to storage accounts. |
+| Azure Event Grid | `Microsoft.EventGrid/systemTopics` | Enables access to storage accounts. |
+| Azure Event Grid | `Microsoft.EventGrid/topics` | Enables access to storage accounts. |
+| Azure Healthcare APIs | `Microsoft.HealthcareApis/services` | Enables access to storage accounts. |
+| Azure Healthcare APIs | `Microsoft.HealthcareApis/workspaces` | Enables access to storage accounts. |
+| Azure IoT Central | `Microsoft.IoTCentral/IoTApps` | Enables access to storage accounts. |
+| Azure Key Vault Managed HSM | `Microsoft.keyvault/managedHSMs` | Enables access to storage accounts. |
+| Azure Logic Apps | `Microsoft.Logic/integrationAccounts` | Enables logic apps to access storage accounts. [Learn more](../../logic-apps/create-managed-service-identity.md#authenticate-access-with-managed-identity). |
+| Azure Logic Apps | `Microsoft.Logic/workflows` | Enables logic apps to access storage accounts. [Learn more](../../logic-apps/create-managed-service-identity.md#authenticate-access-with-managed-identity). |
+| Azure Machine Learning Studio | `Microsoft.MachineLearning/registries` | Enables authorized Azure Machine Learning workspaces to write experiment output, models, and logs to Blob Storage and read the data. [Learn more](../../machine-learning/how-to-network-security-overview.md#secure-the-workspace-and-associated-resources). |
+| Azure Machine Learning | `Microsoft.MachineLearningServices` | Enables authorized Azure Machine Learning workspaces to write experiment output, models, and logs to Blob Storage and read the data. [Learn more](../../machine-learning/how-to-network-security-overview.md#secure-the-workspace-and-associated-resources). |
+| Azure Machine Learning | `Microsoft.MachineLearningServices/workspaces` | Enables authorized Azure Machine Learning workspaces to write experiment output, models, and logs to Blob Storage and read the data. [Learn more](../../machine-learning/how-to-network-security-overview.md#secure-the-workspace-and-associated-resources). |
+| Azure Media Services | `Microsoft.Media/mediaservices` | Enables access to storage accounts. |
+| Azure Migrate | `Microsoft.Migrate/migrateprojects` | Enables access to storage accounts. |
+| Azure Spatial Anchors | `Microsoft.MixedReality/remoteRenderingAccounts` | Enables access to storage accounts. |
+| Azure ExpressRoute | `Microsoft.Network/expressRoutePorts` | Enables access to storage accounts. |
+| Microsoft Power Platform | `Microsoft.PowerPlatform/enterprisePolicies` | Enables access to storage accounts. |
+| Microsoft Project Arcadia | `Microsoft.ProjectArcadia/workspaces` | Enables access to storage accounts. |
+| Azure Data Catalog | `Microsoft.ProjectBabylon/accounts` | Enables access to storage accounts. |
+| Microsoft Purview | `Microsoft.Purview/accounts` | Enables access to storage accounts. |
+| Azure Site Recovery | `Microsoft.RecoveryServices/vaults` | Enables access to storage accounts. |
+| Security Center | `Microsoft.Security/dataScanners` | Enables access to storage accounts. |
+| Singularity | `Microsoft.Singularity/accounts` | Enables access to storage accounts. |
+| Azure SQL Database | `Microsoft.Sql` | Allows [writing audit data to storage accounts behind a firewall](/azure/azure-sql/database/audit-write-storage-account-behind-vnet-firewall). |
+| Azure SQL Servers | `Microsoft.Sql/servers` | Allows [writing audit data to storage accounts behind a firewall](/azure/azure-sql/database/audit-write-storage-account-behind-vnet-firewall). |
+| Azure Synapse Analytics | `Microsoft.Sql` | Allows import and export of data from specific SQL databases via the `COPY` statement or PolyBase (in a dedicated pool), or the `openrowset` function and external tables in a serverless pool. [Learn more](/azure/azure-sql/database/vnet-service-endpoint-rule-overview). |
+| Azure Stream Analytics | `Microsoft.StreamAnalytics` | Allows data from a streaming job to be written to Blob Storage. [Learn more](../../stream-analytics/blob-output-managed-identity.md). |
+| Azure Stream Analytics | `Microsoft.StreamAnalytics/streamingjobs` | Allows data from a streaming job to be written to Blob Storage. [Learn more](../../stream-analytics/blob-output-managed-identity.md). |
+| Azure Synapse Analytics | `Microsoft.Synapse/workspaces` | Enables access to data in Azure Storage. |
+| Azure Video Indexer | `Microsoft.VideoIndexer/Accounts` | Enables access to storage accounts. |
If your account doesn't have the hierarchical namespace feature enabled on it, you can grant permission by explicitly assigning an Azure role to the [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for each resource instance. In this case, the scope of access for the instance corresponds to the Azure role that's assigned to the managed identity.
storage File Sync Troubleshoot Sync Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot-sync-errors.md
To see these errors, run the **FileSyncErrorsReport.ps1** PowerShell script (loc
| 0x80c80018 | -2134376424 | ECS_E_SYNC_FILE_IN_USE | The file can't be synced because it's in use. The file will be synced when it's no longer in use. | No action required. Azure File Sync creates a temporary VSS snapshot once a day on the server to sync files that have open handles. |
| 0x80c8031d | -2134375651 | ECS_E_CONCURRENCY_CHECK_FAILED | The file has changed, but the change hasn't yet been detected by sync. Sync will recover after this change is detected. | No action required. |
| 0x80070002 | -2147024894 | ERROR_FILE_NOT_FOUND | The file was deleted and sync isn't aware of the change. | No action required. Sync will stop logging this error once change detection detects the file was deleted. |
-| 0x80070003 | -2147942403 | ERROR_PATH_NOT_FOUND | Deletion of a file or directory can't be synced because the item was already deleted in the destination and sync isn't aware of the change. | No action required. Sync will stop logging this error once change detection runs on the destination and sync detects the item was deleted. |
+| 0x80070003 | -2147024893 | ERROR_PATH_NOT_FOUND | Deletion of a file or directory can't be synced because the item was already deleted in the destination and sync isn't aware of the change. | No action required. Sync will stop logging this error once change detection runs on the destination and sync detects the item was deleted. |
| 0x80c80205 | -2134375931 | ECS_E_SYNC_ITEM_SKIP | The file or directory was skipped but will be synced during the next sync session. If this error is reported when downloading the item, the file or directory name is more than likely invalid. | No action required if this error is reported when uploading the file. If the error is reported when downloading the file, rename the file or directory in question. See [Handling unsupported characters](?tabs=portal1%252cazure-portal#handling-unsupported-characters) for more information. |
| 0x800700B7 | -2147024713 | ERROR_ALREADY_EXISTS | Creation of a file or directory can't be synced because the item already exists in the destination and sync isn't aware of the change. | No action required. Sync will stop logging this error once change detection runs on the destination and sync is aware of this new item. |
| 0x80c8603e | -2134351810 | ECS_E_AZURE_STORAGE_SHARE_SIZE_LIMIT_REACHED | The file can't be synced because the Azure file share limit is reached. | To resolve this issue, see [You reached the Azure file share storage limit](?tabs=portal1%252cazure-portal#-2134351810) section in the troubleshooting guide. |
storage Files Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-redundancy.md
description: Understand the data redundancy options available in Azure file shar
Previously updated : 06/13/2023 Last updated : 06/16/2023
For pricing information for each redundancy option, see [Azure Files pricing](ht
## See also

-- [Change the redundancy option for a storage account](../common/redundancy-migration.md)
+- [Change the redundancy option for a storage account](../common/redundancy-migration.md?toc=/azure/storage/files/toc.json)
+- [Use geo-redundancy to design highly available applications](../common/geo-redundant-design.md?toc=/azure/storage/files/toc.json)
storage Files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-whats-new.md
Note: Azure File Sync is zone-redundant in all regions that [support zones](../.
### 2022 quarter 4 (October, November, December)

#### Azure Active Directory (Azure AD) Kerberos authentication for hybrid identities on Azure Files is generally available
-This [feature](storage-files-identity-auth-azure-active-directory-enable.md) builds on top of [FSLogix profile container support](../../virtual-desktop/create-profile-container-azure-ad.md) released in December 2022 and expands it to support more use cases (SMB only). Hybrid identities, which are user identities created in Active Directory Domain Services (AD DS) and synced to Azure AD, can mount and access Azure file shares without the need for line-of-sight to an Active Directory domain controller. While the initial support is limited to hybrid identities, itΓÇÖs a significant milestone as we simplify identity-based authentication for Azure Files customers. [Read the blog post](https://techcommunity.microsoft.com/t5/azure-storage-blog/general-availability-azure-active-directory-kerberos-with-azure/ba-p/3612111).
+This [feature](storage-files-identity-auth-hybrid-identities-enable.md) builds on top of [FSLogix profile container support](../../virtual-desktop/create-profile-container-azure-ad.md) released in December 2022 and expands it to support more use cases (SMB only). Hybrid identities, which are user identities created in Active Directory Domain Services (AD DS) and synced to Azure AD, can mount and access Azure file shares without the need for line-of-sight to an Active Directory domain controller. While the initial support is limited to hybrid identities, it's a significant milestone as we simplify identity-based authentication for Azure Files customers. [Read the blog post](https://techcommunity.microsoft.com/t5/azure-storage-blog/general-availability-azure-active-directory-kerberos-with-azure/ba-p/3612111).
### 2022 quarter 2 (April, May, June)

#### SUSE Linux support for SAP HANA System Replication (HSR) and Pacemaker
storage Storage Files Active Directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-active-directory-overview.md
The following diagram represents the workflow for Azure AD DS authentication to
:::image type="content" source="media/storage-files-active-directory-overview/files-azure-ad-ds-auth-diagram.png" alt-text="Diagram of configuration for Azure AD DS authentication with Azure Files over SMB.":::
-To learn how to enable Azure AD DS authentication, see [Enable Azure Active Directory Domain Services authentication on Azure Files](storage-files-identity-auth-active-directory-domain-service-enable.md).
+To learn how to enable Azure AD DS authentication, see [Enable Azure Active Directory Domain Services authentication on Azure Files](storage-files-identity-auth-domain-services-enable.md).
### Azure AD Kerberos for hybrid identities
Enabling and configuring Azure AD for authenticating [hybrid user identities](..
:::image type="content" source="media/storage-files-active-directory-overview/files-azure-ad-kerberos-diagram.png" alt-text="Diagram of configuration for Azure AD Kerberos authentication for hybrid identities over SMB.":::
-To learn how to enable Azure AD Kerberos authentication for hybrid identities, see [Enable Azure Active Directory Kerberos authentication for hybrid identities on Azure Files](storage-files-identity-auth-azure-active-directory-enable.md).
+To learn how to enable Azure AD Kerberos authentication for hybrid identities, see [Enable Azure Active Directory Kerberos authentication for hybrid identities on Azure Files](storage-files-identity-auth-hybrid-identities-enable.md).
You can also use this feature to store FSLogix profiles on Azure file shares for Azure AD-joined VMs. For more information, see [Create a profile container with Azure Files and Azure Active Directory](../../virtual-desktop/create-profile-container-azure-ad.md).
For more information about Azure Files and identity-based authentication over SM
- [Planning for an Azure Files deployment](storage-files-planning.md)
- [Overview - on-premises Active Directory Domain Services authentication over SMB for Azure file shares](storage-files-identity-auth-active-directory-enable.md)
-- [Enable Azure Active Directory Domain Services authentication on Azure Files](storage-files-identity-auth-active-directory-domain-service-enable.md)
-- [Enable Azure Active Directory Kerberos authentication for hybrid identities on Azure Files](storage-files-identity-auth-azure-active-directory-enable.md)
+- [Enable Azure Active Directory Domain Services authentication on Azure Files](storage-files-identity-auth-domain-services-enable.md)
+- [Enable Azure Active Directory Kerberos authentication for hybrid identities on Azure Files](storage-files-identity-auth-hybrid-identities-enable.md)
- [Enable AD Kerberos authentication for Linux clients](storage-files-identity-auth-linux-kerberos-enable.md)
- [FAQ](storage-files-faq.md)
storage Storage Files Identity Auth Domain Services Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-domain-services-enable.md
+
+ Title: Use Azure Active Directory Domain Services (Azure AD DS) to authorize user access to Azure Files over SMB
+description: Learn how to enable identity-based authentication over Server Message Block (SMB) for Azure Files through Azure Active Directory Domain Services (Azure AD DS). Your domain-joined Windows VMs can then access Azure file shares by using Azure AD credentials.
+++ Last updated : 05/03/2023+++
+recommendations: false
++
+# Enable Azure Active Directory Domain Services authentication on Azure Files
+
+This article focuses on enabling and configuring Azure AD DS for identity-based authentication with Azure file shares. In this authentication scenario, Azure AD credentials and Azure AD DS credentials are the same and can be used interchangeably.
+
+We strongly recommend that you review the [How it works section](./storage-files-active-directory-overview.md#how-it-works) to select the right AD source for authentication. The setup is different depending on the AD source you choose.
+
+If you're new to Azure Files, we recommend reading our [planning guide](storage-files-planning.md) before reading this article.
+
+> [!NOTE]
+> Azure Files supports Kerberos authentication with Azure AD DS with RC4-HMAC and AES-256 encryption. We recommend using AES-256.
+>
+> Azure Files supports authentication for Azure AD DS with full or partial (scoped) synchronization with Azure AD. For environments with scoped synchronization present, administrators should be aware that Azure Files only honors Azure RBAC role assignments granted to principals that are synchronized. Role assignments granted to identities not synchronized from Azure AD to Azure AD DS will be ignored by the Azure Files service.
+
+## Applies to
+| File share type | SMB | NFS |
+|-|:-:|:-:|
+| Standard file shares (GPv2), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| Standard file shares (GPv2), GRS/GZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+
+## Prerequisites
+
+Before you enable Azure AD DS over SMB for Azure file shares, make sure you've completed the following prerequisites:
+
+1. **Select or create an Azure AD tenant.**
+
+ You can use a new or existing tenant. The tenant and the file share that you want to access must be associated with the same subscription.
+
+ To create a new Azure AD tenant, you can [Add an Azure AD tenant and an Azure AD subscription](/windows/client-management/mdm/add-an-azure-ad-tenant-and-azure-ad-subscription). If you have an existing Azure AD tenant but want to create a new tenant for use with Azure file shares, see [Create an Azure Active Directory tenant](/rest/api/datacatalog/create-an-azure-active-directory-tenant).
+
+1. **Enable Azure AD Domain Services on the Azure AD tenant.**
+
+ To support authentication with Azure AD credentials, you must enable Azure AD DS for your Azure AD tenant. If you aren't the administrator of the Azure AD tenant, contact the administrator and follow the step-by-step guidance to [Enable Azure Active Directory Domain Services using the Azure portal](../../active-directory-domain-services/tutorial-create-instance.md).
+
+ It typically takes about 15 minutes for an Azure AD DS deployment to complete. Verify that the health status of Azure AD DS shows **Running**, with password hash synchronization enabled, before proceeding to the next step.
+
+1. **Domain-join an Azure VM with Azure AD DS.**
+
+ To access an Azure file share by using Azure AD credentials from a VM, your VM must be domain-joined to Azure AD DS. For more information about how to domain-join a VM, see [Join a Windows Server virtual machine to a managed domain](../../active-directory-domain-services/join-windows-vm.md). Azure AD DS authentication over SMB with Azure file shares is supported only on Azure VMs running on OS versions above Windows 7 or Windows Server 2008 R2.
+
+ > [!NOTE]
+ > Non-domain-joined VMs can access Azure file shares using Azure AD DS authentication only if the VM has line-of-sight to the domain controllers for Azure AD DS. Usually this requires either site-to-site or point-to-site VPN.
+
+1. **Select or create an Azure file share.**
+
+ Select a new or existing file share that's associated with the same subscription as your Azure AD tenant. For information about creating a new file share, see [Create a file share in Azure Files](storage-how-to-create-file-share.md).
+ For optimal performance, we recommend that your file share be in the same region as the VM from which you plan to access the share.
+
+1. **Verify Azure Files connectivity by mounting Azure file shares using your storage account key.**
+
+ To verify that your VM and file share are properly configured, try mounting the file share using your storage account key. For more information, see [Mount an Azure file share and access the share in Windows](storage-how-to-use-files-windows.md).
+
+## Regional availability
+
+Azure Files authentication with Azure AD DS is available in [all Azure Public, Gov, and China regions](https://azure.microsoft.com/global-infrastructure/locations/).
+
+## Overview of the workflow
+
+Before you enable Azure AD DS authentication over SMB for Azure file shares, verify that your Azure AD and Azure Storage environments are properly configured. We recommend that you walk through the [prerequisites](#prerequisites) to make sure you've completed all the required steps.
+
+Follow these steps to grant access to Azure Files resources with Azure AD credentials:
+
+1. Enable Azure AD DS authentication over SMB for your storage account to register the storage account with the associated Azure AD DS deployment.
+1. Assign share-level permissions to an Azure AD identity (a user, group, or service principal).
+1. Connect to your Azure file share using a storage account key and configure Windows access control lists (ACLs) for directories and files.
+1. Mount an Azure file share from a domain-joined VM.
+
+The following diagram illustrates the end-to-end workflow for enabling Azure AD DS authentication over SMB for Azure Files.
+
+![Diagram showing Azure AD over SMB for Azure Files workflow](media/storage-files-active-directory-enable/azure-active-directory-over-smb-workflow.png)
+
+## Enable Azure AD DS authentication for your account
+
+To enable Azure AD DS authentication over SMB for Azure Files, you can set a property on storage accounts by using the Azure portal, Azure PowerShell, or Azure CLI. Setting this property implicitly "domain joins" the storage account with the associated Azure AD DS deployment. Azure AD DS authentication over SMB is then enabled for all new and existing file shares in the storage account.
+
+Keep in mind that you can enable Azure AD DS authentication over SMB only after you've successfully deployed Azure AD DS to your Azure AD tenant. For more information, see the [prerequisites](#prerequisites).
+
+# [Portal](#tab/azure-portal)
+
+To enable Azure AD DS authentication over SMB with the [Azure portal](https://portal.azure.com), follow these steps:
+
+1. In the Azure portal, go to your existing storage account, or [create a storage account](../common/storage-account-create.md).
+1. In the **File shares** section, select **Active directory: Not Configured**.
+
+ :::image type="content" source="media/storage-files-active-directory-enable/files-azure-ad-enable-storage-account-identity.png" alt-text="Screenshot of the File shares pane in your storage account, Active directory is highlighted." lightbox="media/storage-files-active-directory-enable/files-azure-ad-enable-storage-account-identity.png":::
+
+1. Select **Azure Active Directory Domain Services** then enable the feature by ticking the checkbox.
+1. Select **Save**.
+
+ :::image type="content" source="media/storage-files-active-directory-enable/files-azure-ad-ds-highlight.png" alt-text="Screenshot of the Active Directory pane, Azure Active Directory Domain Services is enabled." lightbox="media/storage-files-active-directory-enable/files-azure-ad-ds-highlight.png":::
+
+# [PowerShell](#tab/azure-powershell)
+
+To enable Azure AD DS authentication over SMB with Azure PowerShell, install the latest Az module (2.4 or newer) or the Az.Storage module (1.5 or newer). For more information about installing PowerShell, see [Install Azure PowerShell on Windows with PowerShellGet](/powershell/azure/install-azure-powershell).
+
+To create a new storage account, call [New-AzStorageAccount](/powershell/module/az.storage/New-azStorageAccount), and then set the **EnableAzureActiveDirectoryDomainServicesForFile** parameter to **true**. In the following example, remember to replace the placeholder values with your own values. (If you were using the previous preview module, the parameter for enabling the feature is **EnableAzureFilesAadIntegrationForSMB**.)
+
+```powershell
+# Create a new storage account
+New-AzStorageAccount -ResourceGroupName "<resource-group-name>" `
+ -Name "<storage-account-name>" `
+ -Location "<azure-region>" `
+ -SkuName Standard_LRS `
+ -Kind StorageV2 `
+ -EnableAzureActiveDirectoryDomainServicesForFile $true
+```
+
+To enable this feature on existing storage accounts, use the following command:
+
+```powershell
+# Update a storage account
+Set-AzStorageAccount -ResourceGroupName "<resource-group-name>" `
+ -Name "<storage-account-name>" `
+ -EnableAzureActiveDirectoryDomainServicesForFile $true
+```
++
+# [Azure CLI](#tab/azure-cli)
+
+To enable Azure AD authentication over SMB with Azure CLI, install the latest CLI version (Version 2.0.70 or newer). For more information about installing Azure CLI, see [Install the Azure CLI](/cli/azure/install-azure-cli).
+
+To create a new storage account, call [az storage account create](/cli/azure/storage/account#az-storage-account-create), and set the `--enable-files-aadds` argument. In the following example, remember to replace the placeholder values with your own values. (If you were using the previous preview module, the parameter for feature enablement is **file-aad**.)
+
+```azurecli-interactive
+# Create a new storage account
+az storage account create -n <storage-account-name> -g <resource-group-name> --enable-files-aadds
+```
+
+To enable this feature on existing storage accounts, use the following command:
+
+```azurecli-interactive
+# Update an existing storage account
+az storage account update -n <storage-account-name> -g <resource-group-name> --enable-files-aadds
+```
++
+## Recommended: Use AES-256 encryption
+
+By default, Azure AD DS authentication uses Kerberos RC4 encryption. We recommend configuring it to use Kerberos AES-256 encryption instead by following these instructions.
+
+The action requires running an operation on the Active Directory domain that's managed by Azure AD DS to reach a domain controller to request a property change to the domain object. The cmdlets below are Windows Server Active Directory PowerShell cmdlets, not Azure PowerShell cmdlets. Because of this, these PowerShell commands must be run from a client machine that's domain-joined to the Azure AD DS domain.
+
+> [!IMPORTANT]
+> The Windows Server Active Directory PowerShell cmdlets in this section must be run in Windows PowerShell 5.1 from a client machine that's domain-joined to the Azure AD DS domain. PowerShell 7.x and Azure Cloud Shell won't work in this scenario.
+
+Log into the domain-joined client machine as an Azure AD DS user with the required permissions (typically, members of the **AAD DC Administrators** group will have the necessary permissions). Open a normal (non-elevated) PowerShell session and execute the following commands.
+
+```powershell
+# 1. Find the service account in your managed domain that represents the storage account.
+
+$storageAccountName = "<InsertStorageAccountNameHere>"
+$searchFilter = "Name -like '*{0}*'" -f $storageAccountName
+$userObject = Get-ADUser -filter $searchFilter
+
+if ($userObject -eq $null)
+{
+ Write-Error "Cannot find AD object for storage account:$storageAccountName" -ErrorAction Stop
+}
+
+# 2. Set the KerberosEncryptionType of the object
+
+Set-ADUser $userObject -KerberosEncryptionType AES256
+
+# 3. Validate that the object now has the expected (AES256) encryption type.
+
+Get-ADUser $userObject -properties KerberosEncryptionType
+```
+
+> [!IMPORTANT]
+> If you were previously using RC4 encryption and update the storage account to use AES-256, you should run `klist purge` on the client and then remount the file share to get new Kerberos tickets with AES-256.
++
+## Next steps
+
+To grant additional users access to your file share, follow the instructions in [Assign share-level permissions](#assign-share-level-permissions) and [Configure Windows ACLs](#configure-windows-acls).
+
+For more information about identity-based authentication for Azure Files, see these resources:
+
+- [Overview of Azure Files identity-based authentication support for SMB access](storage-files-active-directory-overview.md)
+- [FAQ](storage-files-faq.md)
storage Storage Files Identity Auth Hybrid Identities Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-hybrid-identities-enable.md
+
+ Title: Use Azure Active Directory to access Azure file shares over SMB for hybrid identities using Kerberos authentication
+description: Learn how to enable identity-based Kerberos authentication for hybrid user identities over Server Message Block (SMB) for Azure Files through Azure Active Directory (Azure AD). Your users can then access Azure file shares by using their Azure AD credentials.
+++ Last updated : 05/24/2023+++
+recommendations: false
++
+# Enable Azure Active Directory Kerberos authentication for hybrid identities on Azure Files
+
+This article focuses on enabling and configuring Azure Active Directory (Azure AD) for authenticating [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md), which are on-premises AD DS identities that are synced to Azure AD. Cloud-only identities aren't currently supported.
+
+This configuration allows hybrid users to access Azure file shares using Kerberos authentication, using Azure AD to issue the necessary Kerberos tickets to access the file share with the SMB protocol. This means your end users can access Azure file shares over the internet without requiring line-of-sight to domain controllers from hybrid Azure AD-joined and Azure AD-joined clients. However, configuring Windows access control lists (ACLs)/directory and file-level permissions for a user or group requires line-of-sight to the on-premises domain controller.
+
+For more information on supported options and considerations, see [Overview of Azure Files identity-based authentication options for SMB access](storage-files-active-directory-overview.md). For more information about Azure AD Kerberos, see [Deep dive: How Azure AD Kerberos works](https://techcommunity.microsoft.com/t5/itops-talk-blog/deep-dive-how-azure-ad-kerberos-works/ba-p/3070889).
+
+> [!IMPORTANT]
+> You can only use one AD source for identity-based authentication with Azure Files. If Azure AD Kerberos authentication for hybrid identities doesn't fit your requirements, you might be able to use [on-premises Active Directory Domain Service (AD DS)](storage-files-identity-auth-active-directory-enable.md) or [Azure Active Directory Domain Services (Azure AD DS)](storage-files-identity-auth-domain-services-enable.md) instead. The configuration steps and supported scenarios are different for each method.
+
+## Applies to
+| File share type | SMB | NFS |
+|-|:-:|:-:|
+| Standard file shares (GPv2), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| Standard file shares (GPv2), GRS/GZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+
+## Prerequisites
+
+Before you enable Azure AD Kerberos authentication over SMB for Azure file shares, make sure you've completed the following prerequisites.
+
+> [!NOTE]
+> Your Azure storage account can't authenticate with both Azure AD and a second method like AD DS or Azure AD DS. If you've already chosen another AD source for your storage account, you must disable it before enabling Azure AD Kerberos.
+
+The Azure AD Kerberos functionality for hybrid identities is only available on the following operating systems:
+
+ - Windows 11 Enterprise/Pro single or multi-session.
+ - Windows 10 Enterprise/Pro single or multi-session, versions 2004 or later with the latest cumulative updates installed, especially the [KB5007253 - 2021-11 Cumulative Update Preview for Windows 10](https://support.microsoft.com/topic/november-22-2021-kb5007253-os-builds-19041-1387-19042-1387-19043-1387-and-19044-1387-preview-d1847be9-46c1-49fc-bf56-1d469fc1b3af).
+ - Windows Server, version 2022 with the latest cumulative updates installed, especially the [KB5007254 - 2021-11 Cumulative Update Preview for Microsoft server operating system version 21H2](https://support.microsoft.com/topic/november-22-2021-kb5007254-os-build-20348-380-preview-9a960291-d62e-486a-adcc-6babe5ae6fc1).
+
+To learn how to create and configure a Windows VM and log in by using Azure AD-based authentication, see [Log in to a Windows virtual machine in Azure by using Azure AD](../../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md).
+
+Clients must be Azure AD-joined or [hybrid Azure AD-joined](../../active-directory/devices/hybrid-azuread-join-plan.md). Azure AD Kerberos isn't supported on clients joined to Azure AD DS or joined to AD only.
+
+This feature doesn't currently support user accounts that you create and manage solely in Azure AD. User accounts must be [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md), which means you'll also need AD DS and either [Azure AD Connect](../../active-directory/hybrid/whatis-azure-ad-connect.md) or [Azure AD Connect cloud sync](../../active-directory/cloud-sync/what-is-cloud-sync.md). You must create these accounts in Active Directory and sync them to Azure AD. To assign Azure Role-Based Access Control (RBAC) permissions for the Azure file share to a user group, you must create the group in Active Directory and sync it to Azure AD.
+
+You must disable multi-factor authentication (MFA) on the Azure AD app representing the storage account.
+
+With Azure AD Kerberos, the Kerberos ticket encryption is always AES-256. But you can set the SMB channel encryption that best fits your needs.
+
+## Regional availability
+
+Azure Files authentication with Azure AD Kerberos is available in Azure public cloud in [all Azure regions](https://azure.microsoft.com/global-infrastructure/locations/).
+
+## Enable Azure AD Kerberos authentication for hybrid user accounts
+
+You can enable Azure AD Kerberos authentication on Azure Files for hybrid user accounts using the Azure portal, PowerShell, or Azure CLI.
+
+# [Portal](#tab/azure-portal)
+
+To enable Azure AD Kerberos authentication using the [Azure portal](https://portal.azure.com), follow these steps.
+
+1. Sign in to the Azure portal and select the storage account you want to enable Azure AD Kerberos authentication for.
+1. Under **Data storage**, select **File shares**.
+1. Next to **Active Directory**, select the configuration status (for example, **Not configured**).
+
+ :::image type="content" source="media/storage-files-identity-auth-hybrid-identities-enable/configure-active-directory.png" alt-text="Screenshot of the Azure portal showing file share settings for a storage account. Active Directory configuration settings are selected." lightbox="media/storage-files-identity-auth-hybrid-identities-enable/configure-active-directory.png" border="true":::
+
+1. Under **Azure AD Kerberos**, select **Set up**.
+1. Select the **Azure AD Kerberos** checkbox.
+
+ :::image type="content" source="media/storage-files-identity-auth-hybrid-identities-enable/enable-azure-ad-kerberos.png" alt-text="Screenshot of the Azure portal showing Active Directory configuration settings for a storage account. Azure AD Kerberos is selected." lightbox="media/storage-files-identity-auth-hybrid-identities-enable/enable-azure-ad-kerberos.png" border="true":::
+
+1. **Optional:** If you want to configure directory and file-level permissions through Windows File Explorer, then you need to specify the domain name and domain GUID for your on-premises AD. You can get this information from your domain admin or by running the following Active Directory PowerShell cmdlet from an on-premises AD-joined client: `Get-ADDomain`. Your domain name should be listed in the output under `DNSRoot` and your domain GUID should be listed under `ObjectGUID`. If you'd prefer to configure directory and file-level permissions using icacls, you can skip this step. However, if you want to use icacls, the client will need line-of-sight to the on-premises AD.
+
+1. Select **Save**.
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+To enable Azure AD Kerberos using Azure PowerShell, run the following command. Remember to replace placeholder values, including brackets, with your values.
+
+```azurepowershell
+Set-AzStorageAccount -ResourceGroupName <resourceGroupName> -StorageAccountName <storageAccountName> -EnableAzureActiveDirectoryKerberosForFile $true
+```
+
+**Optional:** If you want to configure directory and file-level permissions through Windows File Explorer, then you also need to specify the domain name and domain GUID for your on-premises AD. If you'd prefer to configure directory and file-level permissions using icacls, you can skip this step. However, if you want to use icacls, the client will need line-of-sight to the on-premises AD.
+
+You can get this information from your domain admin or by running the following Active Directory PowerShell cmdlets from an on-premises AD-joined client:
+
+```PowerShell
+$domainInformation = Get-ADDomain
+$domainGuid = $domainInformation.ObjectGUID.ToString()
+$domainName = $domainInformation.DnsRoot
+```
+
+To specify the domain name and domain GUID for your on-premises AD, run the following Azure PowerShell command. Remember to replace placeholder values, including brackets, with your values.
+
+```azurepowershell
+Set-AzStorageAccount -ResourceGroupName <resourceGroupName> -StorageAccountName <storageAccountName> -EnableAzureActiveDirectoryKerberosForFile $true -ActiveDirectoryDomainName $domainName -ActiveDirectoryDomainGuid $domainGuid
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+To enable Azure AD Kerberos using Azure CLI, run the following command. Remember to replace placeholder values, including brackets, with your values.
+
+```azurecli
+az storage account update --name <storageaccountname> --resource-group <resourcegroupname> --enable-files-aadkerb true
+```
+
+**Optional:** If you want to configure directory and file-level permissions through Windows File Explorer, then you also need to specify the domain name and domain GUID for your on-premises AD. If you'd prefer to configure directory and file-level permissions using icacls, you can skip this step. However, if you want to use icacls, the client will need line-of-sight to the on-premises AD.
+
+You can get this information from your domain admin or by running the following Active Directory PowerShell cmdlets from an on-premises AD-joined client:
+
+```PowerShell
+$domainInformation = Get-ADDomain
+$domainGuid = $domainInformation.ObjectGUID.ToString()
+$domainName = $domainInformation.DnsRoot
+```
+
+To specify the domain name and domain GUID for your on-premises AD, run the following command. Remember to replace placeholder values, including brackets, with your values.
+
+```azurecli
+az storage account update --name <storageAccountName> --resource-group <resourceGroupName> --enable-files-aadkerb true --domain-name <domainName> --domain-guid <domainGuid>
+```
+++
+> [!WARNING]
+> If you've previously enabled Azure AD Kerberos authentication through manual limited preview steps to store FSLogix profiles on Azure Files for Azure AD-joined VMs, the password for the storage account's service principal is set to expire every six months. Once the password expires, users won't be able to get Kerberos tickets to the file share. To mitigate this, see "Error - Service principal password has expired in Azure AD" under [Potential errors when enabling Azure AD Kerberos authentication for hybrid users](files-troubleshoot-smb-authentication.md#potential-errors-when-enabling-azure-ad-kerberos-authentication-for-hybrid-users).
+
+## Grant admin consent to the new service principal
+
+After enabling Azure AD Kerberos authentication, you'll need to explicitly grant admin consent to the new Azure AD application registered in your Azure AD tenant to complete your configuration. You can configure the API permissions from the [Azure portal](https://portal.azure.com) by following these steps:
+
+1. Open **Azure Active Directory**.
+2. Select **App registrations** on the left pane.
+3. Select **All Applications**.
+
+ :::image type="content" source="media/storage-files-identity-auth-hybrid-identities-enable/azure-portal-azuread-app-registrations.png" alt-text="Screenshot of the Azure portal. Azure Active Directory is open. App registrations is selected in the left pane. All applications is highlighted in the right pane." lightbox="media/storage-files-identity-auth-hybrid-identities-enable/azure-portal-azuread-app-registrations.png":::
+
+4. Select the application with the name matching **[Storage Account] `<your-storage-account-name>`.file.core.windows.net**.
+5. Select **API permissions** in the left pane.
+6. Select **Grant admin consent for [Directory Name]** to grant consent for the three requested API permissions (openid, profile, and User.Read) for all accounts in the directory.
+7. Select **Yes** to confirm.
+
+ > [!IMPORTANT]
+ > If you're connecting to a storage account via a private endpoint/private link using Azure AD Kerberos authentication, you'll also need to add the private link FQDN to the storage account's Azure AD application. For instructions, see the entry in our [troubleshooting guide](files-troubleshoot-smb-authentication.md#error-1326the-username-or-password-is-incorrect-when-using-private-link).
+
+## Disable multi-factor authentication on the storage account
+
+Azure AD Kerberos doesn't support using MFA to access Azure file shares configured with Azure AD Kerberos. You must exclude the Azure AD app representing your storage account from your MFA conditional access policies if they apply to all apps.
+
+The storage account app should have the same name as the storage account in the conditional access exclusion list. When searching for the storage account app in the conditional access exclusion list, search for: **[Storage Account] `<your-storage-account-name>`.file.core.windows.net**
+
+Remember to replace `<your-storage-account-name>` with the proper value.
+
+ > [!IMPORTANT]
+ > If you don't exclude MFA policies from the storage account app, you won't be able to access the file share. Trying to map the file share using `net use` will result in an error message that says "System error 1327: Account restrictions are preventing this user from signing in. For example: blank passwords aren't allowed, sign-in times are limited, or a policy restriction has been enforced."
+
+For guidance on disabling MFA, see the following:
+
+- [Add exclusions for service principals of Azure resources](../../active-directory/conditional-access/howto-conditional-access-policy-all-users-mfa.md#user-exclusions)
+- [Create a conditional access policy](../../active-directory/conditional-access/howto-conditional-access-policy-all-users-mfa.md#create-a-conditional-access-policy)
+
+## Assign share-level permissions
+
+When you enable identity-based access, you can specify which users and groups have access to each share. Once a user is allowed into a share, Windows ACLs (also called NTFS permissions) on individual files and directories take over. This allows for fine-grained control over permissions, similar to an SMB share on a Windows server.
+
+To set share-level permissions, follow the instructions in [Assign share-level permissions to an identity](storage-files-identity-ad-ds-assign-permissions.md).
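+
+For orientation, here's a minimal Azure PowerShell sketch (the subscription, resource group, storage account, share, and user below are placeholders) that grants a user the built-in **Storage File Data SMB Share Contributor** role at the share scope:
+
+```azurepowershell
+# Scope the role assignment to a single file share. Replace all placeholder values.
+$scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/fileServices/default/fileshares/<share-name>"
+
+# Grant read/write/delete access at the share level; Windows ACLs still apply underneath.
+New-AzRoleAssignment `
+    -SignInName "user1@contoso.com" `
+    -RoleDefinitionName "Storage File Data SMB Share Contributor" `
+    -Scope $scope
+```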
+
+## Configure directory and file-level permissions
+
+Once share-level permissions are in place, you can assign directory/file-level permissions to the user or group. **This requires using a device with line-of-sight to an on-premises AD**. To use Windows File Explorer, the device also needs to be domain-joined.
+
+There are two options for configuring directory and file-level permissions with Azure AD Kerberos authentication:
+
+- **Windows File Explorer:** If you choose this option, then the client must be domain-joined to the on-premises AD.
+- **icacls utility:** If you choose this option, then the client doesn't need to be domain-joined, but needs line-of-sight to the on-premises AD.
+
+To configure directory and file-level permissions through Windows File Explorer, you also need to specify the domain name and domain GUID for your on-premises AD. You can get this information from your domain admin or from an on-premises AD-joined client. If you prefer to configure permissions using icacls, this step isn't required.
+
+To configure directory and file-level permissions, follow the instructions in [Configure directory and file-level permissions over SMB](storage-files-identity-ad-ds-configure-permissions.md).
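+
+For orientation, here's a minimal icacls sketch run from a PowerShell session on a client with line-of-sight to the on-premises AD (the drive letter, folder, domain, and user are placeholders; the linked article remains the authoritative walkthrough):
+
+```azurepowershell
+# Grant modify rights that inherit to new files and subfolders. Replace the placeholders.
+icacls "Z:\shared-folder" /grant "CONTOSO\user1:(OI)(CI)(M)"
+
+# Review the resulting ACL.
+icacls "Z:\shared-folder"
+```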
+
+## Configure the clients to retrieve Kerberos tickets
+
+Enable the Azure AD Kerberos functionality on every client machine that will be used to mount or access Azure file shares.
+
+Use one of the following three methods:
+
+- Configure this Intune [Policy CSP](/windows/client-management/mdm/policy-configuration-service-provider) and apply it to the client(s): [Kerberos/CloudKerberosTicketRetrievalEnabled](/windows/client-management/mdm/policy-csp-kerberos#kerberos-cloudkerberosticketretrievalenabled)
+- Configure this group policy on the client(s): `Administrative Templates\System\Kerberos\Allow retrieving the Azure AD Kerberos Ticket Granting Ticket during logon`
+- Create the following registry value on the client(s): `reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters /v CloudKerberosTicketRetrievalEnabled /t REG_DWORD /d 1`
+
+Changes are not instant, and require a policy refresh or a reboot to take effect.
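+
+If you prefer PowerShell over `reg add`, the following sketch (run elevated on the client) sets and verifies the same registry value:
+
+```azurepowershell
+$path = "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters"
+
+# Create the key if it doesn't exist, then enable Azure AD Kerberos ticket retrieval.
+if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }
+New-ItemProperty -Path $path -Name "CloudKerberosTicketRetrievalEnabled" -Value 1 -PropertyType DWord -Force
+
+# Verify the value.
+Get-ItemProperty -Path $path -Name "CloudKerberosTicketRetrievalEnabled"
+```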
+
+> [!IMPORTANT]
+> Once this change is applied, the client(s) won't be able to connect to storage accounts that are configured for on-premises AD DS integration without configuring Kerberos realm mappings. If you want the client(s) to be able to connect to storage accounts configured for AD DS as well as storage accounts configured for Azure AD Kerberos, follow the steps in [Configure coexistence with storage accounts using on-premises AD DS](#configure-coexistence-with-storage-accounts-using-on-premises-ad-ds).
+
+### Configure coexistence with storage accounts using on-premises AD DS
+
+If you want to enable client machines to connect to storage accounts that are configured for AD DS as well as storage accounts configured for Azure AD Kerberos, follow these steps. If you're only using Azure AD Kerberos, skip this section.
+
+Add an entry for each storage account that uses on-premises AD DS integration. Use one of the following three methods to configure Kerberos realm mappings:
+
+- Configure this Intune [Policy CSP](/windows/client-management/mdm/policy-configuration-service-provider) and apply it to the client(s): [Kerberos/HostToRealm](/windows/client-management/mdm/policy-csp-admx-kerberos#hosttorealm)
+- Configure this group policy on the client(s): `Administrative Template\System\Kerberos\Define host name-to-Kerberos realm mappings`
+- Configure the following registry value on the client(s): `reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\domain_realm /v <DomainName> /d <StorageAccountEndPoint>`
+ - For example, `reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\domain_realm /v contoso.local /d <your-storage-account-name>.file.core.windows.net`
+
+Changes are not instant, and require a policy refresh or a reboot to take effect.
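+
+The same mapping can be added with PowerShell (run elevated; `contoso.local` and the storage endpoint are placeholders mirroring the example above):
+
+```azurepowershell
+$path = "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\domain_realm"
+
+# Create the key if it doesn't exist, then add one host name-to-realm mapping per storage account.
+if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }
+New-ItemProperty -Path $path -Name "contoso.local" -Value "<your-storage-account-name>.file.core.windows.net" -PropertyType String -Force
+```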
+
+## Disable Azure AD authentication on your storage account
+
+If you want to use another authentication method, you can disable Azure AD authentication on your storage account by using the Azure portal, Azure PowerShell, or Azure CLI.
+
+> [!NOTE]
+> Disabling this feature removes the Active Directory configuration for file shares in your storage account until you enable one of the other Active Directory sources.
+
+# [Portal](#tab/azure-portal)
+
+To disable Azure AD Kerberos authentication on your storage account by using the Azure portal, follow these steps.
+
+1. Sign in to the Azure portal and select the storage account you want to disable Azure AD Kerberos authentication for.
+1. Under **Data storage**, select **File shares**.
+1. Next to **Active Directory**, select the configuration status.
+1. Under **Azure AD Kerberos**, select **Configure**.
+1. Uncheck the **Azure AD Kerberos** checkbox.
+1. Select **Save**.
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+To disable Azure AD Kerberos authentication on your storage account by using Azure PowerShell, run the following command. Remember to replace placeholder values, including brackets, with your values.
+
+```azurepowershell
+Set-AzStorageAccount -ResourceGroupName <resourceGroupName> -StorageAccountName <storageAccountName> -EnableAzureActiveDirectoryKerberosForFile $false
+```
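+
+To confirm the change, you can inspect the identity-based authentication settings on the account (a quick check, assuming the Az.Storage module; replace the placeholder values, including brackets):
+
+```azurepowershell
+# DirectoryServiceOptions should no longer report AADKERB after the change is applied.
+(Get-AzStorageAccount -ResourceGroupName <resourceGroupName> -StorageAccountName <storageAccountName>).AzureFilesIdentityBasedAuth
+```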
+
+# [Azure CLI](#tab/azure-cli)
+
+To disable Azure AD Kerberos authentication on your storage account by using Azure CLI, run the following command. Remember to replace placeholder values, including brackets, with your values.
+
+```azurecli
+az storage account update --name <storageaccountname> --resource-group <resourcegroupname> --enable-files-aadkerb false
+```
+++
+## Next steps
+
+For more information, see these resources:
+
+- [Potential errors when enabling Azure AD Kerberos authentication for hybrid users](files-troubleshoot-smb-authentication.md#potential-errors-when-enabling-azure-ad-kerberos-authentication-for-hybrid-users)
+- [Overview of Azure Files identity-based authentication support for SMB access](storage-files-active-directory-overview.md)
+- [Create a profile container with Azure Files and Azure Active Directory](../../virtual-desktop/create-profile-container-azure-ad.md)
+- [FAQ](storage-files-faq.md)
storage Storage Files Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-overview.md
A user of Active Directory, which is their on-premises domain controller, can na
The alternative data stream is the primary aspect of file fidelity that currently can't be stored on a file in an Azure file share. It's preserved on-premises when Azure File Sync is used.
-Learn more about [on-premises Active Directory authentication](storage-files-identity-auth-active-directory-enable.md) and [Azure AD DS authentication](storage-files-identity-auth-active-directory-domain-service-enable.md) for Azure file shares.
+Learn more about [on-premises Active Directory authentication](storage-files-identity-auth-active-directory-enable.md) and [Azure AD DS authentication](storage-files-identity-auth-domain-services-enable.md) for Azure file shares.
## Migration guides
synapse-analytics Browse Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/browse-partners.md
Title: Discover third-party solutions from Azure Synapse partners through Synapse Studio description: Learn how to discover new third-party solutions that are tightly integrated with Azure Synapse partners---- Previously updated : 07/14/2021 + Last updated : 06/14/2023+++
The following table lists partner solutions that are currently supported. Make s
| Partner | Solution name | | - | - |
-| ![Incorta](./media/data-integration/incorta-logo.png) | Incorta Intelligent Ingest for Azure Synapse |
-| ![Informatica](./media/data-integration/informatica_logo.png) | Informatica Intelligent Data Management Cloud |
-| ![Qlik Data Integration (formerly Attunity)](./media/business-intelligence/qlik_logo.png) | Qlik Data Integration (formerly Attunity) |
+| :::image type="content" source="./media/data-integration/incorta-logo.png" alt-text="The corporate logo of Incorta."::: | Incorta Intelligent Ingest for Azure Synapse |
+| :::image type="content" source="./media/data-integration/informatica_logo.png" alt-text="The corporate logo of Informatica."::: | Informatica Intelligent Data Management Cloud |
+| :::image type="content" source="./media/business-intelligence/qlik_logo.png" alt-text="The corporate logo of Qlik Data Integration (formerly Attunity)."::: | Qlik Data Integration (formerly Attunity) |
## Requirements When you choose a partner application, Azure Synapse Studio provisions a sandbox environment you can use for this trial, ensuring you can experiment with partner solutions quickly before you decide to use one with your production data. The following objects are created:
When you choose a partner application, Azure Synapse Studio provisions a sandbox
In all cases, **[PartnerName]** is the name of the third-party ISV who offers the trial.
-### Security
+### Security
After the required objects are created, Synapse Studio sends information about your new sandbox environment to the partner application, allowing a customized trial experience. The following information is sent to our partners: - First name - Last name
We never share any passwords with the partner application, including the passwor
### Costs The dedicated SQL pool that is created for your partner trial incurs ongoing costs, which are based on the number of DWU blocks and hours running. Make sure you pause the SQL pool created for this partner trial when it isn't in use, to avoid unnecessary charges.
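
For example, a minimal Azure PowerShell sketch (assuming the Az.Synapse module; the workspace and pool names are placeholders) that pauses and later resumes the trial pool:

```azurepowershell
# Pause the dedicated SQL pool created for the partner trial to stop compute charges.
Suspend-AzSynapseSqlPool -WorkspaceName <workspaceName> -Name <sqlPoolName>

# Resume it when you want to continue the trial.
Resume-AzSynapseSqlPool -WorkspaceName <workspaceName> -Name <sqlPoolName>
```
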
-## Starting a new partner trial
+## Start a new partner trial
1) On the Synapse Studio home page, under **Discover more**, select **browse partners**. 2) The Browse partners page shows all partners currently offering trials that allow direct connectivity with Azure Synapse. Choose a partner solution.
The required objects will be created for your partner trial. You'll then be forw
## Next steps
-To learn more about some of our other partners, see [Data integration partners](data-integration.md), [Data management partners](data-management.md), and [Machine Learning and AI partners](machine-learning-ai.md).
+- To learn more about some of our other partners, see [Data integration partners](data-integration.md), [Data management partners](data-management.md), and [Machine Learning and AI partners](machine-learning-ai.md).
synapse-analytics Business Intelligence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/business-intelligence.md
Title: Business Intelligence partners
+ Title: Business Intelligence partners
description: Lists of third-party business intelligence partners with solutions that support Azure Synapse Analytics.---- Previously updated : 07/09/2021 + - Last updated : 06/14/2023+++ # Azure Synapse Analytics business intelligence partners
To create your data warehouse solution, you can choose from different kinds of i
| Partner | Description | Website/Product link | | - | -- | -- |
-| ![AtScale](./media/business-intelligence/atscale-logo.png) |**AtScale**<br>AtScale provides a single, secured, and governed workspace for distributed data. AtScale's Cloud OLAP, Autonomous Data Engineering&trade;, and Universal Semantic Layer&trade; powers business intelligence results for faster, more accurate business decisions. |[Product page](https://www.atscale.com/partners/microsoft/)<br> |
-| ![Birst](./media/business-intelligence/birst_logo.png) |**Birst**<br>Birst connects the entire organization through a network of interwoven virtualized BI instances on-top of a shared common analytical fabric|[Product page](https://www.infor.com/solutions/advanced-analytics/business-intelligence/birst)<br> |
-| ![Count](./media/business-intelligence/count-logo.png) |**Count**<br> Count is the next generation SQL editor, giving you the fastest way to explore and share your data with your team. At Count's core is a data notebook built for SQL, allowing you to structure your code, iterate quickly and stay in flow. Visualize your results instantly or customize them to build beautifully detailed charts in just a few clicks. Instantly share anything from one-off queries to full interactive data stories built off any of your Azure Synapse data sources. |[Product page](https://count.co/)<br>|
-| ![Dremio](./media/business-intelligence/dremio-logo.png) |**Dremio**<br> Analysts and data scientists can discover, explore and curate data using Dremio's intuitive UI, while IT maintains governance and security. Dremio makes it easy to join ADLS with Blob Storage, Azure SQL Database, Azure Synapse SQL, HDInsight, and more. With Dremio, Power BI analysts can search for new datasets stored on ADLS, immediately access that data in Power BI with no preparation by IT, create visualizations, and iteratively refine reports in real-time. And analysts can create new reports that combine data between ADLS and other databases. |[Product page](https://www.dremio.com/azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dremiocorporation.dremio_ce)<br> |
-| ![Dundas](./media/business-intelligence/dundas_software_logo.png) |**Dundas BI**<br>Dundas Data Visualization is a leading, global provider of Business Intelligence and Data Visualization software. Dundas dashboards, reporting, and visual data analytics provide seamless integration into business applications, enabling better decisions and faster insights.|[Product page](https://www.dundas.com/dundas-bi)<br> |
-| ![IBM Cognos](./media/business-intelligence/cognos_analytics_logo.png) |**IBM Cognos Analytics**<br>Cognos Analytics includes self-service capabilities that make it simple, clear, and easy to use, whether you're an experienced business analyst examining a vast supply chain, or a marketer optimizing a campaign. Cognos Analytics uses AI and other capabilities to guide data exploration. It makes it easier for users to get the answers they need|[Product page](https://www.ibm.com/products/cognos-analytics)<br>|
-| ![Information Builders](./media/business-intelligence/informationbuilders_logo.png) |**Information Builders (WebFOCUS)**<br>WebFOCUS business intelligence helps companies use data more strategically across and beyond the enterprise. It allows users and administrators to rapidly create dashboards that combine content from multiple data sources and formats. It also provides robust security and comprehensive governance that enables seamless and secure sharing of any BI and analytics content|[Product page](https://www.informationbuilders.com/products/bi-and-analytics-platform)<br> |
-| ![Jinfonet](./media/business-intelligence/jinfonet_logo.png) |**Jinfonet JReport**<br>JReport is an embeddable BI solution for the enterprise. The solution offers capabilities such as report creation, dashboards, and data analysis on cloud, big data, and transactional data sources. By visualizing data, you can conduct your own reporting and data discovery for agile, on-the-fly decision making. |[Product page](https://www.logianalytics.com/jreport/)<br> |
-| ![LogiAnalytics](./media/business-intelligence/logianalytics_logo.png) |**Logi Analytics**<br>Together, Logi Analytics enables your organization to collect, analyze, and immediately act on the largest and most diverse data sets in the world. |[Product page](https://www.logianalytics.com/)<br>|
-| ![Looker](./media/business-intelligence/looker_logo.png) |**Looker BI**<br>Looker gives everyone in your company the ability to explore and understand the data that drives your business. Looker also gives the data analyst a flexible and reusable modeling layer to control and curate that data. Companies have fundamentally transformed their culture using Looker as the catalyst.|[Product page](https://looker.com/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/aad.lookeranalyticsplatform)<br> |
-| ![Microstrategy](./media/business-intelligence/microstrategy_logo.png) |**MicroStrategy**<br>The MicroStrategy platform offers a complete set of business intelligence and analytics capabilities that enable organizations to get value from their business data. MicroStrategy's powerful analytical engine, comprehensive toolsets, variety of data connectors, and open architecture ensures you have everything you need to extend access to analytics across every team.|[Product page](https://www.microstrategy.com/us/product/analytics)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/microstrategy.microstrategy_cloud_environment_mce)<br> |
-| ![Mode Analytics](./media/business-intelligence/mode-logo.png) |**Mode**<br>Mode is a modern analytics and BI solution that helps teams make decisions through unreasonably fast and unexpectedly delightful data analysis. Data teams move faster through a preferred workflow that combines SQL, Python, R, and visual analysis, while stakeholders work alongside them exploring and sharing data on their own. With data more accessible to everyone, we shorten the distance from questions to answers and help businesses make better decisions, faster.|[Product page](https://mode.com/)<br> |
-| ![Pyramid Analytics](./media/business-intelligence/pyramid-logo.png) |**Pyramid Analytics**<br>Pyramid 2020 is the trusted analytics platform that connects your teams, drives confident decisions, and produces winning results. Business users can do high-end, cloud-scale analytics and data science without IT help ΓÇö on any browser or device. Data scientists can take advantage of machine learning algorithms and scripting to understand difficult business problems. Power users can prepare and model their own data to create illuminating analytic content. Non-technical users can benefit from stunning visualizations and guided analytic presentations. It's the next generation of self-service analytics with governance. |[Product page](https://www.pyramidanalytics.com/resources/analyst-reports/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/pyramidanalytics.pyramid2020-25-102) |
-| ![Qlik](./media/business-intelligence/qlik_logo.png) |**Qlik Sense Enterprise**<br>Drive insight discovery with the data visualization app that anyone can use. With Qlik Sense, everyone in your organization can easily create flexible, interactive visualizations and make meaningful decisions. |[Product page](https://www.qlik.com/us/products/qlik-sense/enterprise)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qlik.qlik-sense) |
-| ![SAS](./media/business-intelligence/sas-logo.jpg) |**SAS® Viya®**<br>SAS® Viya® is an AI, analytic, and data management solution running on a scalable, cloud-native architecture. It enables you to operationalize insights, empowering everyone – from data scientists to business users – to collaborate and realize innovative results faster. Using open source or SAS models, SAS® Viya® can be accessed through APIs or interactive interfaces to transform raw data into actions. |[Product page](https://www.sas.com/microsoft)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/sas-institute-560503.sas-viya-saas?tab=Overview)<br>|
-| ![SiSense](./media/business-intelligence/sisense_logo.png) |**SiSense**<br>SiSense is a full-stack Business Intelligence software that comes with tools that a business needs to analyze and visualize data: a high-performance analytical database, the ability to join multiple sources, simple data extraction (ETL), and web-based data visualization. Start to analyze and visualize large data sets with SiSense BI and Analytics today. |[Product page](https://www.sisense.com/)<br> |
-| ![Tableau](./media/business-intelligence/tableau_sparkle_logo.png) |**Tableau**<br>Tableau's self-service analytics help anyone see and understand their data, across many kinds of data from flat files to databases. Tableau has a native, optimized connector to Synapse SQL pool that supports both live data and in-memory analytics. |[Product page](https://www.tableau.com/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/tableau.tableau-server)<br>|
-| ![Targit](./media/business-intelligence/targit_logo.png) |**Targit (Decision Suite)**<br>Targit Decision Suite provides a BI platform that delivers real-time dashboards, self-service analytics, user-friendly reporting, stunning mobile capabilities, and simple data-discovery technology. Everything in a single, cohesive solution. Targit gives companies the courage to act. |[Product page](https://www.targit.com/targit-decision-suite/analytics)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/targit.targit-decision-suite)<br> |
-| ![ThoughtSpot](./media/business-intelligence/thoughtspot-logo.png) |**ThoughtSpot**<br>Use search to get granular insights from billions of rows, or let AI uncover insights from questions you might not have thought about. ThoughtSpot helps businesspeople find insights hidden in their company data in seconds. Use search to analyze your data and get automated insights when you need them.|[Product page](https://www.thoughtspot.com)<br>|
-| ![Yellowfin](./media/business-intelligence/yellowfin_logo.png) |**Yellowfin**<br>Yellowfin is a top rated Cloud BI vendor for _ad hoc_ Reporting and Dashboards by BARC; The BI Survey. Connect to a dedicated SQL pool in Azure Synapse Analytics, then create and share beautiful reports and dashboards with award winning collaborative BI and location intelligence features. |[Product page](https://www.yellowfinbi.com/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/yellowfininternationalptyltd1616363974066.yellowfin-for-azure-byol-v2) |
+| :::image type="content" source="./media/business-intelligence/atscale-logo.png" alt-text="The logo of AtScale."::: |**AtScale**<br>AtScale provides a single, secured, and governed workspace for distributed data. AtScale's Cloud OLAP, Autonomous Data Engineering&trade;, and Universal Semantic Layer&trade; powers business intelligence results for faster, more accurate business decisions. |[AtScale](https://www.atscale.com/solutions/atscale-and-microsoft-azure/)<br> |
+| :::image type="content" source="./media/business-intelligence/birst_logo.png" alt-text="The logo of Birst."::: |**Birst**<br>Birst connects the entire organization through a network of interwoven virtualized BI instances on-top of a shared common analytical fabric|[Birst](https://www.infor.com/solutions/advanced-analytics/business-intelligence/birst)<br> |
+| :::image type="content" source="./media/business-intelligence/count-logo.png" alt-text="The logo of Count."::: |**Count**<br> Count is the next generation SQL editor, giving you the fastest way to explore and share your data with your team. At Count's core is a data notebook built for SQL, allowing you to structure your code, iterate quickly and stay in flow. Visualize your results instantly or customize them to build beautifully detailed charts in just a few selects. Instantly share anything from one-off queries to full interactive data stories built off any of your Azure Synapse data sources. |[Count](https://count.co/)<br>|
+| :::image type="content" source="./media/business-intelligence/dremio-logo.png" alt-text="The logo of Dremio."::: |**Dremio**<br> Analysts and data scientists can discover, explore and curate data using Dremio's intuitive UI, while IT maintains governance and security. Dremio makes it easy to join ADLS with Blob Storage, Azure SQL Database, Azure Synapse SQL, HDInsight, and more. With Dremio, Power BI analysts can search for new datasets stored on ADLS, immediately access that data in Power BI with no preparation by IT, create visualizations, and iteratively refine reports in real-time. And analysts can create new reports that combine data between ADLS and other databases. |[Dremio](https://www.dremio.com/azure/)<br>[Dremio Community Edition in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dremiocorporation.dremio_ce)<br> |
+| :::image type="content" source="./media/business-intelligence/dundas_software_logo.png" alt-text="The logo of Dundas."::: |**Dundas BI**<br>Dundas Data Visualization is a leading, global provider of Business Intelligence and Data Visualization software. Dundas dashboards, reporting, and visual data analytics provide seamless integration into business applications, enabling better decisions and faster insights.|[Dundas](https://www.dundas.com/dundas-bi)<br> |
+| :::image type="content" source="./media/business-intelligence/cognos_analytics_logo.png" alt-text="The logo of IBM Cognos."::: |**IBM Cognos Analytics**<br>Cognos Analytics includes self-service capabilities that make it simple, clear, and easy to use, whether you're an experienced business analyst examining a vast supply chain, or a marketer optimizing a campaign. Cognos Analytics uses AI and other capabilities to guide data exploration. It makes it easier for users to get the answers they need|[IBM](https://www.ibm.com/products/cognos-analytics)<br>|
+| :::image type="content" source="./media/business-intelligence/informationbuilders_logo.png" alt-text="The logo of Information Builders."::: |**Information Builders (WebFOCUS)**<br>WebFOCUS business intelligence helps companies use data more strategically across and beyond the enterprise. It allows users and administrators to rapidly create dashboards that combine content from multiple data sources and formats. It also provides robust security and comprehensive governance that enables seamless and secure sharing of any BI and analytics content|[Information Builders](https://www.ibi.com/)<br> |
+| :::image type="content" source="./media/business-intelligence/logianalytics_logo.png" alt-text="The logo of LogiAnalytics."::: |**Logi Analytics**<br>Together, Logi Analytics enables your organization to collect, analyze, and immediately act on the largest and most diverse data sets in the world. |[Logi Analytics](https://www.logianalytics.com/)<br>|
+| :::image type="content" source="./media/business-intelligence/logianalytics_logo.png" alt-text="The logo of LogiAnalytics."::: |**Logi Report**<br>Logi Report is an embeddable BI solution for the enterprise. The solution offers capabilities such as report creation, dashboards, and data analysis on cloud, big data, and transactional data sources. By visualizing data, you can conduct your own reporting and data discovery for agile, on-the-fly decision making. |[Logi Report](https://www.logianalytics.com/jreport/)<br> |
+| :::image type="content" source="./media/business-intelligence/looker_logo.png" alt-text="The logo of Looker."::: |**Looker for Business Intelligence**<br>Looker gives everyone in your company the ability to explore and understand the data that drives your business. Looker also gives the data analyst a flexible and reusable modeling layer to control and curate that data. Companies have fundamentally transformed their culture using Looker as the catalyst.|[Looker for BI](https://looker.com/)<br> [Looker Analytics Platform Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/aad.lookeranalyticsplatform)<br> |
+| :::image type="content" source="./media/business-intelligence/microstrategy_logo.png" alt-text="The logo of Microstrategy."::: |**MicroStrategy**<br>The MicroStrategy platform offers a complete set of business intelligence and analytics capabilities that enable organizations to get value from their business data. MicroStrategy's powerful analytical engine, comprehensive toolsets, variety of data connectors, and open architecture ensures you have everything you need to extend access to analytics across every team.|[MicroStrategy](https://www.microstrategy.com/en/business-intelligence)<br> [MicroStrategy Cloud in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/microstrategy.microstrategy_cloud)<br> |
+| :::image type="content" source="./media/business-intelligence/mode-logo.png" alt-text="The logo of Mode Analytics."::: |**Mode**<br>Mode is a modern analytics and BI solution that helps teams make decisions through unreasonably fast and unexpectedly delightful data analysis. Data teams move faster through a preferred workflow that combines SQL, Python, R, and visual analysis, while stakeholders work alongside them exploring and sharing data on their own. With data more accessible to everyone, we shorten the distance from questions to answers and help businesses make better decisions, faster.|[Mode](https://mode.com/)<br> |
+| :::image type="content" source="./media/business-intelligence/pyramid-logo.png" alt-text="The logo of Pyramid Analytics."::: |**Pyramid Analytics**<br>Pyramid 2020 is the trusted analytics platform that connects your teams, drives confident decisions, and produces winning results. Business users can do high-end, cloud-scale analytics and data science without IT help ΓÇö on any browser or device. Data scientists can take advantage of machine learning algorithms and scripting to understand difficult business problems. Power users can prepare and model their own data to create illuminating analytic content. Non-technical users can benefit from stunning visualizations and guided analytic presentations. It's the next generation of self-service analytics with governance. |[Pyramid Analytics](https://www.pyramidanalytics.com/resources/analyst-reports/)<br> [Pyramid Analytics in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/pyramidanalytics.pyramid2020-25-102) |
+| :::image type="content" source="./media/business-intelligence/qlik_logo.png" alt-text="The logo of Qlik."::: |**Qlik Sense**<br>Drive insight discovery with the data visualization app that anyone can use. With Qlik Sense, everyone in your organization can easily create flexible, interactive visualizations and make meaningful decisions. |[Qlik Sense](https://www.qlik.com/us/products/qlik-sense)<br> [Qlik Sense in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qlik.qlik-sense) |
+| :::image type="content" source="./media/business-intelligence/sas-logo.jpg" alt-text="The logo of SAS."::: |**SAS&reg; Viya&reg;**<br>SAS&reg; Viya&reg; is an AI, analytic, and data management solution running on a scalable, cloud-native architecture. It enables you to operationalize insights, empowering everyone ΓÇô from data scientists to business users ΓÇô to collaborate and realize innovative results faster. Using open source or SAS models, SAS&reg; Viya&reg; can be accessed through APIs or interactive interfaces to transform raw data into actions. |[SAS&reg; Viya&reg;](https://www.sas.com/microsoft)<br> [SAS&reg; Viya&reg; in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/sas-institute-560503.sas-viya-saas?tab=Overview)<br>|
+| :::image type="content" source="./media/business-intelligence/sisense_logo.png" alt-text="The logo of SiSense."::: |**SiSense**<br>SiSense is a full-stack Business Intelligence software that comes with tools that a business needs to analyze and visualize data: a high-performance analytical database, the ability to join multiple sources, simple data extraction (ETL), and web-based data visualization. Start to analyze and visualize large data sets with SiSense BI and Analytics today. |[SiSense](https://www.sisense.com/)<br> |
+| :::image type="content" source="./media/business-intelligence/tableau_sparkle_logo.png" alt-text="The logo of Tableau."::: |**Tableau**<br>Tableau's self-service analytics help anyone see and understand their data, across many kinds of data from flat files to databases. Tableau has a native, optimized connector to Synapse SQL pool that supports both live data and in-memory analytics. |[Tableau](https://www.tableau.com/)<br> [Tableau Server in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/tableau.tableau-server)<br>|
+| :::image type="content" source="./media/business-intelligence/targit_logo.png" alt-text="The logo of Targit."::: |**Targit (Decision Suite)**<br>Targit Decision Suite provides a BI platform that delivers real-time dashboards, self-service analytics, user-friendly reporting, stunning mobile capabilities, and simple data-discovery technology. Everything in a single, cohesive solution. Targit gives companies the courage to act. |[Targit](https://www.targit.com/targit-decision-suite/analytics)<br> [Targit in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/targit.targit-decision-suite)<br> |
+| :::image type="content" source="./media/business-intelligence/thoughtspot-logo.png" alt-text="The logo of ThoughtSpot."::: |**ThoughtSpot**<br>Use search to get granular insights from billions of rows, or let AI uncover insights from questions you might not have thought about. ThoughtSpot helps businesspeople find insights hidden in their company data in seconds. Use search to analyze your data and get automated insights when you need them.|[ThoughtSpot](https://www.thoughtspot.com)<br>|
+| :::image type="content" source="./media/business-intelligence/yellowfin_logo.png" alt-text="The logo of Yellowfin."::: |**Yellowfin**<br>Yellowfin is a top rated Cloud BI vendor for _ad hoc_ Reporting and Dashboards by BARC; The BI Survey. Connect to a dedicated SQL pool in Azure Synapse Analytics, then create and share beautiful reports and dashboards with award winning collaborative BI and location intelligence features. |[Yellowfin](https://www.yellowfinbi.com/)<br> [Yellowfin in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/yellowfininternationalptyltd1616363974066.yellowfin-for-azure-byol-v2) |
+
+## Next steps
-## Next Steps
-To learn more about some of our other partners, see [Data Integration partners](data-integration.md), [Data Management partners](data-management.md), and [Machine Learning and AI partners](machine-learning-ai.md).
+- To learn more about some of our other partners, see [Data Integration partners](data-integration.md), [Data Management partners](data-management.md), and [Machine Learning and AI partners](machine-learning-ai.md).
-See how to [discover partner solutions through Synapse Studio](browse-partners.md).
+- See how to [discover partner solutions through Synapse Studio](browse-partners.md).
synapse-analytics Compatibility Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/compatibility-issues.md
Title: Compatibility issues with third-party applications and Azure Synapse Analytics
+ Title: Compatibility issues with third-party applications and Azure Synapse Analytics
description: Describes known issues that third-party applications may find with Azure Synapse- --- Previously updated : 11/18/2020 -+ Last updated : 06/16/2023+++ # Compatibility issues with third-party applications and Azure Synapse Analytics Applications built for SQL Server will seamlessly work with Azure Synapse dedicated SQL pools. In some cases, however, features and language elements that are commonly used in SQL Server may not be available in Azure Synapse, or they may behave differently.
-This article lists known issues you may come across when using third-party applications with Azure Synapse Analytics.
+## Common issues
+
+This article lists common issues you may come across when using third-party applications with Azure Synapse Analytics.
-## Tableau error: ΓÇ£An attempt to complete a transaction has failed. No corresponding transaction foundΓÇ¥
+### Tableau error: "An attempt to complete a transaction has failed. No corresponding transaction found"
Starting from Azure Synapse dedicated SQL pool version 10.0.11038.0, some Tableau queries making stored procedure calls may fail with the following error message: "**[Microsoft][ODBC Driver 17 for SQL Server][SQL Server]111214; An attempt to complete a transaction has failed. No corresponding transaction found.**"
-### Cause
+#### Cause
This is an issue in Azure Synapse dedicated SQL pool caused by the introduction of new system stored procedures that are called automatically by the ODBC and JDBC drivers. One of those system stored procedures can cause open transactions to be aborted if they fail execution. This issue can happen depending on the client application logic.
-### Solution
+#### Solution
Customers seeing this particular issue when using Tableau connected to Azure Synapse dedicated SQL pools should set FMTONLY to YES in the SQL connection. For Tableau Desktop and Tableau Server, you should use a Tableau Datasource Customization (TDC) file to ensure Tableau passes this parameter to the driver. > [!NOTE] > Microsoft does not provide support for third-party tools. While we have tested that this solution works with Tableau Desktop 2020.3.2, you should use this workaround at your own discretion. >
-* [To learn how to make global customizations with a TDC file on Tableau Desktop, refer to Tableau Desktop documentation.](https://help.tableau.com/current/pro/desktop/en-us/odbc_customize.htm)
-* [To learn how to make global customizations with a TDC file on Tableau Server, refer to Using a .TDC File with Tableau Server.](https://kb.tableau.com/articles/howto/using-a-tdc-file-with-tableau-server)
+- [To learn how to make global customizations with a TDC file on Tableau Desktop, refer to Tableau Desktop documentation.](https://help.tableau.com/current/pro/desktop/en-us/odbc_customize.htm)
+- [To learn how to make global customizations with a TDC file on Tableau Server, refer to Using a .TDC File with Tableau Server.](https://kb.tableau.com/articles/howto/using-a-tdc-file-with-tableau-server)
The example below shows a Tableau TDC file that passes the FMTONLY=YES parameter to the SQL connection string: ```xml <connection-customization class='azure_sql_dw' enabled='true' version='18.1'>
- <vendor name='azure_sql_dw' />
- <driver name='azure_sql_dw' />
- <customizations>
+ <vendor name='azure_sql_dw' />
+ <driver name='azure_sql_dw' />
+ <customizations>
<customization name='odbc-connect-string-extras' value='UseFMTONLY=yes' />
- </customizations>
+ </customizations>
</connection-customization> ```+ For more details about using TDC files, contact Tableau support.
-## See also
+## Next steps
-* [T-SQL language elements for dedicated SQL pool in Azure Synapse Analytics.](../sql-data-warehouse/sql-data-warehouse-reference-tsql-language-elements.md)
-* [T-SQL statements supported for dedicated SQL pool in Azure Synapse Analytics.](../sql-data-warehouse/sql-data-warehouse-reference-tsql-statements.md)
+- [T-SQL language elements for dedicated SQL pool in Azure Synapse Analytics.](../sql-data-warehouse/sql-data-warehouse-reference-tsql-language-elements.md)
+- [T-SQL statements supported for dedicated SQL pool in Azure Synapse Analytics.](../sql-data-warehouse/sql-data-warehouse-reference-tsql-statements.md)
synapse-analytics Data Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/data-integration.md
Title: Data integration partners
+ Title: Data integration partners
description: Lists of third-party partners with data integration solutions that support Azure Synapse Analytics. + + Last updated : 06/14/2023 + - Previously updated : 03/27/2019--- # Azure Synapse Analytics data integration partners
To create your data warehouse solution using the dedicated SQL pool in Azure Syn
| Partner | Description | Website/Product link | | - | -- | -- |
-| ![Ab Initio](./media/data-integration/abinitio-logo.png) |**Ab Initio**<br> Ab Initio's agile digital engineering platform helps you solve the toughest data processing and data management problems in corporate computing. Ab Initio's cloud-native platform lets you access and use data anywhere in your corporate ecosystem, whether in Azure or on-premises, including data stored on legacy systems. The combination of an intuitive interface with powerful automation, data quality, data governance, and active metadata capabilities enables rapid development and true data self-service, freeing analysts to do their jobs quickly and effectively. Join the world's largest businesses in using Ab Initio to turn big data into meaningful data. |[Product page](https://www.abinitio.com/) |
-| ![Aecorsoft](./media/data-integration/aecorsoft-logo.png) |**Aecorsoft**<br> AecorSoft offers fast, scalable, and real-time ELT/ETL software solution to help SAP customers bring complex SAP data to Azure Synapse Analytics and Azure data platform. With full compliance with SAP application layer security, AecorSoft solution is officially SAP Premium Certified to integrate with SAP applications. AecorSoft's unique Super Delta and Change-Data-Capture features enable SAP users to stream delta data from SAP transparent, pool, and cluster tables to Azure in CSV, Parquet, Avro, ORC, or GZIP format. Besides SAP tabular data, many other business-rule-heavy SAP objects like BW queries and S/4HANA CDS Views are fully supported. |[Product page](https://www.aecorsoft.com/products/dataintegrator)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/aecorsoftinc1588038796343.aecorsoftintegrationservice_adf)<br>|
-| ![Alooma](./media/data-integration/alooma_logo.png) |**Alooma**<br> Alooma is an Extract, Transform, and Load (ETL) solution that enables data teams to integrate, enrich, and stream data from various data silos to an Azure Synapse data warehouse all in real time. | |
-| ![Alteryx](./media/data-integration/alteryx_logo.png) |**Alteryx**<br> Alteryx Designer provides a repeatable workflow for self-service data analytics that leads to deeper insights in hours, not the weeks typical of traditional approaches! Alteryx Designer helps data analysts by combining data preparation, data blending, and analytics ΓÇô predictive, statistical, and spatial ΓÇô using the same intuitive user interface. |[Product page](https://www.alteryx.com/partners/microsoft/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/alteryx.alteryx-analytics-platform?tab=Overview)<br>|
-| ![BI Builders (Xpert BI)](./media/data-integration/bibuilders-logo.png) |**BI Builders (Xpert BI)**<br> Xpert BI helps organizations build and maintain a robust and scalable data platform in Azure faster through metadata-based automation. It extends Azure Synapse with best practices and DataOps, for agile data development with built-in data governance functionalities. Use Xpert BI to quickly test out and switch between different Azure solutions such as Azure Synapse, Azure Data Lake Storage, and Azure SQL Database, as your business and analytics needs changes and grows.|[Product page](https://www.bi-builders.com/adding-automation-and-governance-to-azure-analytics/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/bi-builders-as.xpert-bi-vm)<br>|
-| ![BryteFlow](./media/data-integration/bryteflow-logo.png) |**BryteFlow**<br> With BryteFlow, you can continually replicate data from transactional sources like Oracle, SQL Server, SAP, MySQL, and more to Azure Synapse Analytics in real time, with best practices, and access reconciled data that is ready-to-use. BryteFlow extracts and replicates data in minutes using log-based Change Data Capture and merges deltas automatically to update data. It can be configured with times series as well. There's no coding for any process (just point and select!) and tables are created automatically on the destination. BryteFlow supports enterprise-scale automated data integration with extremely high throughput, ingesting terabytes of data, with smart partitioning, and multi-threaded, parallel loading.|[Product page](https://bryteflow.com/data-integration-on-azure-synapse/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/bryte.bryteflowingest-azure-standard?tab=Overview)<br>|
-| ![CData](./media/data-integration/cdata-logo.png) |**CData Sync - Cloud Data Pipeline**<br>Build high-performance data pipelines for Microsoft Azure Synapse in minutes. CData Sync is an easy-to-use, go-anywhere ETL/ELT pipeline that streamlines data flow from more than 200+ enterprise data sources to Azure Synapse. With CData Sync, users can easily create automated continuous data replication between Accounting, CRM, ERP, Marketing Automation, On-Premises, and cloud data.|[Product page](https://www.cdata.com/sync/to/azuresynapse/?utm_source=azuresynapse&utm_medium=partner)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/cdatasoftware.cdatasync?tab=Overview)<br>|
-| ![Datometry](./media/data-integration/datometry-logo.png) |**Datometry**<br>Datometry Hyper-Q makes existing applications written for Teradata run natively on Azure Synapse. Datometry emulates commonly used Teradata SQL, including analytical SQL, and advanced operational concepts like stored procedures, macros, SET tables, and more. Because Hyper-Q returns results that are bit-identical to Teradata, existing applications can be replatformed to Azure Synapse without any significant modifications. With Datometry, enterprises can move to Azure rapidly and take full advantage of Synapse immediately.|[Product page](https://datometry.com/platform/hyper-q-for-azure-synapse/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/datometry1601339937807.dtm-hyperq-azure-101?tab=Overview)<br>
-| ![Denodo](./media/data-integration/denodo_logo.png) |**Denodo**<br>Denodo provide real-time access to data across an organization's diverse data sources. It uses data virtualization to bridge data across many sources without replication. Denodo offers broad access to structured and unstructured data residing in enterprise, big data, and cloud sources, in both batch and real time.|[Product page](https://www.denodo.com/en)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/denodo.denodo-8_0-std-vm-payg?tab=Overview)<br> |
-| ![Dimodelo](./media/data-integration/dimodelo-logo.png) |**Dimodelo**<br>Dimodelo Data Warehouse Studio is a data warehouse automation tool for the Azure data platform. Dimodelo enhances developer productivity through a dedicated data warehouse modeling and ETL design tool, pattern-based best practice code generation, one-click deployment, and ETL orchestration. Dimodelo enhances maintainability with change propagation, allows developers to stay focused on business outcomes, and automates portability across data platforms.|[Product page](https://www.dimodelo.com/data-warehouse-studio-for-azure-synapse/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dimodelosolutions.dimodeloazurevs)<br> |
-| ![Fivetran](./media/data-integration/fivetran_logo.png) |**Fivetran**<br>Fivetran helps you centralize data from disparate sources. It features a zero maintenance, zero configuration data pipeline product with a growing list of built-in connectors to all the popular data sources. Setup takes five minutes after authenticating to data sources and target data warehouse.|[Product page](https://www.fivetran.com/partners-microsoft-azure)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/fivetran.fivetran_data_pipelines?tab=Overview)<br> |
-| ![HVR](./media/data-integration/hvr-logo.png) |**HVR**<br>HVR provides a real-time cloud data replication solution that supports enterprise modernization efforts. The HVR platform is a reliable, secure, and scalable way to quickly and efficiently integrate large data volumes in complex environments, enabling real-time data updates, access, and analysis. Global market leaders in various industries trust HVR to address their real-time data integration challenges and revolutionize their businesses. HVR is a privately held company based in San Francisco, with offices across North America, Europe, and Asia.|[Product page](https://www.hvr-software.com/solutions/azure-data-integration/)|
-| ![Incorta](./media/data-integration/incorta-logo.png) |**Incorta**<br>Incorta enables organizations to go from raw data to quickly discovering actionable insights in Azure by automating the various data preparation steps typically required to analyze complex data. which. Using a proprietary technology called Direct Data Mapping and Incorta's Blueprints (pre-built content library and best practices captured from real customer implementations), customers experience unprecedented speed and simplicity in accessing, organizing, and presenting data and insights for critical business decision-making.|[Product page](https://www.incorta.com/solutions/microsoft-azure-synapse)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/incorta.incorta_direct_data_platform)<br>|
-| ![Informatica](./media/data-integration/informatica_logo.png) |**1.Informatica Cloud Services for Azure**<br> Informatica Cloud offers a best-in-class solution for self-service data migration, integration, and management capabilities. Customers can quickly and reliably import, and export petabytes of data to Azure from different kinds of sources. Informatica Cloud Services for Azure provides native, high volume, high-performance connectivity to Azure Synapse, SQL Database, Blob Storage, Data Lake Store, and Azure Cosmos DB. <br><br> **2.Informatica PowerCenter** PowerCenter is a metadata-driven data integration platform that jumpstarts and accelerates data integration projects to deliver data to the business more quickly than manual hand coding. It serves as the foundation for your data integration investments |**Informatica Cloud services for Azure**<br>[Product page](https://www.informatica.com/products/cloud-integration.html)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/informatica.iics-secure-agent)<br><br> **Informatica PowerCenter**<br>[Product page](https://www.informatica.com/products/data-integration/powercenter.html)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/informatica.powercenter-1041?tab=Overview)<br>|
-| ![Information Builders](./media/data-integration/informationbuilders_logo.png) |**Information Builders (Omni-Gen Data Management)**<br>Information Builder's Omni-Gen data management platform provides data integration, data quality, and master data management solutions. It makes it easy to access, move, and blend all data no matter the format, location, volume, or latency.|[Product page](https://www.informationbuilders.com/3i-platform) |
-| ![Loome](./media/data-integration/loome-logo.png) |**Loome**<br>Loome provides a unique governance workbench that seamlessly integrates with Azure Synapse. It allows you to quickly onboard your data to the cloud and load your entire data source into ADLS in Parquet format. You can orchestrate data pipelines across data engineering, data science and HPC workloads, including native integration with Azure Data Factory, Python, SQL, Synapse Spark, and Databricks. Loome allows you to easily monitor Data Quality exceptions reinforcing Synapse as your strategic Data Quality Hub. Loome keeps an audit trail of resolved issues, and proactively manages data quality with a fully automated data quality engine generating audience targeted alerts in real time.| [Product page](https://www.loomesoftware.com)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/bizdataptyltd1592265042221.loome?tab=Overview) |
-| ![Lyftron](./media/data-integration/lyftron-logo.png) |**Lyftron**<br>Lyftron modern data hub combines an effortless data hub with agile access to data sources. Lyftron eliminates traditional ETL/ELT bottlenecks with automatic data pipeline and make data instantly accessible to BI user with the modern cloud compute of Azure Synapse, Spark & Snowflake. Lyftron connectors automatically convert any source into normalized, ready-to-query relational format and replication. It offers advanced security, data governance and transformation, with simple ANSI SQL along with search capability on your enterprise data catalog.| [Product page](https://lyftron.com/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/lyftron.lyftronapp?tab=Overview) |
-| ![Matillion](./media/data-integration/matillion-logo.png) |**Matillion**<br>Matillion is data transformation software for cloud data warehouses. Only Matillion is purpose-built for Azure Synapse enabling businesses to achieve new levels of simplicity, speed, scale, and savings. Matillion products are highly rated and trusted by companies of all sizes to meet their data integration and transformation needs. Learn more about how you can unlock the potential of your data with Matillion's cloud-based approach to data transformation.| [Product page](https://www.matillion.com/technology/cloud-data-warehouse/microsoft-azure-synapse/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/matillion.matillion-solution-template?tab=Overview) |
-| ![oh22 HEDDA.IO](./media/data-integration/heddaiowhitebg-logo.png) |**oh22 HEDDA<span></span>.IO**<br>oh22's HEDDA<span></span>.IO is a knowledge-driven data quality product built for Microsoft Azure. It enables you to build a knowledge base and use it to perform various critical data quality tasks, including correction, enrichment, and standardization of your data. HEDDA<span></span>.IO also allows you to do data cleansing by using cloud-based reference data services provided by reference data providers or developed and provided by you.| [Product page](https://github.com/oh22is/HEDDA.IO)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/oh22.hedda-io) |
-| ![Precisely](./media/data-integration/precisely-logo.png) |**Precisely**<br>Precisely Connect ETL enables extract transfer and load (ETL) of data from multiple sources to Azure targets. Connect ETL is an easy to configure tool that doesn't require coding or tuning. ETL transformation can be done on the fly. It eliminates the need for costly database staging areas or manual pushes, allowing you to create your own data blends with consistent sustainable performance. Import legacy data from multiple sources including mainframe DB2, VSAM, IMS, Oracle, SQL Server, Teradata, and write them to cloud targets including Azure Databricks, Azure Synapse Analytics, and Azure Data Lake Storage. By using the high performance Connect ETL engine, you can expect optimal performance and consistency.|[Product page](https://www.precisely.com/solution/microsoft-azure)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/syncsort.dmx) |
-| ![Qlik Data Integration](./media/business-intelligence/qlik_logo.png) |**Qlik Data Integration**<br>Qlik Data Integration provides an automated solution for loading data into an Azure Synapse. It simplifies batch loading and incremental replication of data from many sources: SQL Server, Oracle, DB2, Sybase, MySQL, and more. |[Product page](https://www.qlik.com/us/products/data-integration-products)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qlik.qlik_data_integration_platform) <br> |
-| ![Qubole](./media/data-integration/qubole_logo.png) |**Qubole**<br>Qubole provides a cloud-native platform that enables users to conduct ETL, analytics, and AI/ML workloads. It supports different kinds of open-source engines - Apache Spark, TensorFlow, Presto, Airflow, Hadoop, Hive, and more. It provides easy-to-use end-user tools for data processing from SQL query tools, to notebooks, and dashboards that use powerful open-source engines.|[Product page](https://www.qubole.com/company/partners/partners-microsoft-azure/) |
-| ![SAS](./media/business-intelligence/sas-logo.jpg) |**SAS® Viya®**<br>SAS® Viya® is an AI, analytic, and data management solution running on a scalable, cloud-native architecture. It enables you to operationalize insights, empowering everyone – from data scientists to business users – to collaborate and realize innovative results faster. Using open source or SAS models, SAS® Viya® can be accessed through APIs or interactive interfaces to transform raw data into actions. |[Product page](https://www.sas.com/microsoft)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/sas-institute-560503.sas-viya-saas?tab=Overview)<br> |
-| ![Segment](./media/data-integration/segment_logo.png) |**Segment**<br>Segment is a data management and analytics solution that helps you make sense of customer data coming from various sources. It allows you to connect your data to over 200 tools to create better decisions, products, and experiences. Segment will transform and load multiple data sources into your warehouse for you using its built-in data connectors|[Product page](https://segment.com/)<br> |
-| ![Skyvia](./media/data-integration/skyvia_logo.png) |**Skyvia (data integration)**<br>Skyvia data integration provides a wizard that automates data imports. This wizard allows you to migrate data between different kinds of sources - CRMs, application database, CSV files, and more. |[Product page](https://skyvia.com/)<br> |
-| ![SnapLogic](./media/data-integration/snaplogic_logo.png) |**SnapLogic**<br>The SnapLogic Platform enables customers to quickly transfer data into and out of an Azure Synapse data warehouse. It offers the ability to integrate hundreds of applications, services, and IoT scenarios in one solution.|[Product page](https://www.snaplogic.com/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/snaplogic.snaplogic-elastic-integration-windows)<br> |
-| ![SnowMirror](./media/data-integration/snowmirror-logo.png) |**SnowMirror by GuideVision**<br>SnowMirror is a smart data replication tool for ServiceNow. It loads data from a ServiceNow instance and stores it in an on-premises or cloud database. You can then use your replicated data for custom reporting and dashboards with tools like Power BI. Because your data is replicated, it reduces load on your ServiceNow cloud instance. It can be used for system integration, disaster recovery and more. SnowMirror can be used either on premises or in the cloud, and is compatible with all leading databases, including Microsoft SQL Server and Azure Synapse.|[Product page](https://www.snow-mirror.com/)|
-| ![StreamSets](./media/data-integration/streamsets_logo.png) |**StreamSets**<br>StreamSets provides a data integration platform for DataOps. It operationalizes the full design-deploy-operate lifecycle of integrating data into an Azure Synapse data warehouse. You can quickly ingest and integrate data to and from the warehouse via streaming, batch, or changed data capture. Also, you can ensure continuous operations with smart data pipelines that provide end-to-end data flow visibility and resiliency.|[Product page](https://streamsets.com/partners/microsoft)|
-| ![Talend](./media/data-integration/talend-logo.png) |**Talend Cloud**<br>Talend Cloud is an enterprise data integration platform to connect, access, and transform any data across the cloud or on-premises. It's an integration platform-as-a-service that provides broad connectivity, built-in data quality, and native support for the latest big data and cloud technologies. |[Product page](https://www.talend.com/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/talend.talendremoteengine?source=datamarket&tab=Overview) |
-| ![Theobald](./media/data-integration/theobald-logo.png) |**Theobald Software**<br>Theobald Software has been offering various solutions for data integration with SAP since 2004. Secure, stable, fast and, if required, incremental access to all types of SAP data objects on SAP ERP, S/4, BW or BW/4 systems is their area of expertise; an expertise that has been officially certified by SAP and which more than 3,500 global customers are making use of. Their products, Xtract IS for Azure and Xtract Universal, are constantly improving and have evolved into SAP ETL/ELT solutions that seamlessly integrate with Microsoft Azure, where Synapse and Data Factory pipelines can be used to orchestrate SAP data extractions, while Azure Storage serves as a destination for SAP data ingestions. |[Product page](https://theobald-software.com/en/products-technologies/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/theobaldsoftwaregmbh.xtractisforazure) |
-| ![TimeXtender](./media/data-integration/timextender-logo.png) |**TimeXtender**<br>TimeXtender's Discovery Hub helps companies build a modern data estate by providing an integrated data management platform that accelerates time to data insights by up to 10 times. Going beyond everyday ETL and ELT, it provides capabilities for data access, data modeling, and compliance in a single platform. Discovery Hub provides a cohesive data fabric for cloud scale analytics. It allows you to connect and integrate various data silos, catalog, model, move, and document data for analytics and AI. | [Product page](https://www.timextender.com/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?search=timextender&page=1) |
-| ![Trifacta](./media/data-integration/trifacta_logo.png) |**Trifacta Wrangler**<br> Trifacta helps individuals and organizations explore, and join together diverse data for analysis. Trifacta Wrangler is designed to handle data wrangling workloads that need to support data at scale and a large number of end users.|[Product page](https://www.trifacta.com/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/trifactainc1587522950142.trifactaazure?tab=Overview) |
-| ![WhereScape](./media/data-integration/wherescape_logo.png) |**Wherescape RED**<br> WhereScape RED is an IDE that provides teams with automation tools to streamline ETL workflows. The IDE provides best practice, optimized native code for popular data targets. Use WhereScape RED to cut the time to develop, deploy, and operate your data infrastructure.|[Product page](https://www.wherescape.com/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/wherescapesoftware.wherescape-red?source=datamarket&tab=Overview) |
-| ![Xplenty](./media/data-integration/xplenty-logo.png) |**Xplenty**<br> Xplenty ELT platform lets you quickly and easily prepare your data for analytics and production use cases using a simple cloud service. Xplenty's point & select, drag & drop interface enables data integration, processing and preparation without installing, deploying, or maintaining any software. Connect and integrate with a wide set of data repositories and SaaS applications including Azure Synapse, Azure blob storage, and SQL Server. Xplenty also supports all Web Services that are accessible via REST API.|[Product page](https://www.xplenty.com/integrations/azure-synapse-analytics/ )<br> |
-
+| :::image type="content" source="./media/data-integration/abinitio-logo.png" alt-text="The logo of Ab Initio."::: |**Ab Initio**<br> Ab Initio's agile digital engineering platform helps you solve the toughest data processing and data management problems in corporate computing. Ab Initio's cloud-native platform lets you access and use data anywhere in your corporate ecosystem, whether in Azure or on-premises, including data stored on legacy systems. The combination of an intuitive interface with powerful automation, data quality, data governance, and active metadata capabilities enables rapid development and true data self-service, freeing analysts to do their jobs quickly and effectively. Join the world's largest businesses in using Ab Initio to turn big data into meaningful data. |[Ab Initio](https://www.abinitio.com/) |
+| :::image type="content" source="./media/data-integration/aecorsoft-logo.png" alt-text="The logo of Aecorsoft."::: |**Aecorsoft**<br> AecorSoft offers fast, scalable, and real-time ELT/ETL software solution to help SAP customers bring complex SAP data to Azure Synapse Analytics and Azure data platform. With full compliance with SAP application layer security, AecorSoft solution is officially SAP Premium Certified to integrate with SAP applications. AecorSoft's unique Super Delta and Change-Data-Capture features enable SAP users to stream delta data from SAP transparent, pool, and cluster tables to Azure in CSV, Parquet, Avro, ORC, or GZIP format. Besides SAP tabular data, many other business-rule-heavy SAP objects like BW queries and S/4HANA CDS Views are fully supported. |[Aecorsoft](https://www.aecorsoft.com/en/products/dataintegrator)<br>[Aecorsoft in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/aecorsoftinc1588038796343.aecorsoftintegrationservice_adf)<br>|
+| :::image type="content" source="./media/data-integration/alooma_logo.png" alt-text="The logo of Alooma."::: |**Alooma**<br> Alooma is an Extract, Transform, and Load (ETL) solution that enables data teams to integrate, enrich, and stream data from various data silos to an Azure Synapse data warehouse all in real time. | |
+| :::image type="content" source="./media/data-integration/alteryx_logo.png" alt-text="The logo of Alteryx."::: |**Alteryx**<br> Alteryx Designer provides a repeatable workflow for self-service data analytics that leads to deeper insights in hours, not the weeks typical of traditional approaches! Alteryx Designer helps data analysts by combining data preparation, data blending, and analytics ΓÇô predictive, statistical, and spatial ΓÇô using the same intuitive user interface. |[Alteryx](https://www.alteryx.com/partners/microsoft/)<br>[Alteryx in the Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/alteryx.alteryx-analytics-platform?tab=Overview)<br>|
+| :::image type="content" source="./media/data-integration/bibuilders-logo.png" alt-text="The logo of BI Builders (Xpert BI)."::: |**BI Builders (Xpert BI)**<br> Xpert BI helps organizations build and maintain a robust and scalable data platform in Azure faster through metadata-based automation. It extends Azure Synapse with best practices and DataOps, for agile data development with built-in data governance functionalities. Use Xpert BI to quickly test out and switch between different Azure solutions such as Azure Synapse, Azure Data Lake Storage, and Azure SQL Database, as your business and analytics needs changes and grows.|[Xpert BI](https://www.bi-builders.com/adding-automation-and-governance-to-azure-analytics/)<br>[Xpert BI with Azure Synapse in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/bi-builders-as.xpert-bi-vm)<br>|
+| :::image type="content" source="./media/data-integration/bryteflow-logo.png" alt-text="The logo of BryteFlow."::: |**BryteFlow**<br> With BryteFlow, you can continually replicate data from transactional sources like Oracle, SQL Server, SAP, MySQL, and more to Azure Synapse Analytics in real time, with best practices, and access reconciled data that is ready-to-use. BryteFlow extracts and replicates data in minutes using log-based Change Data Capture and merges deltas automatically to update data. It can be configured with times series as well. There's no coding for any process (just point and select!) and tables are created automatically on the destination. BryteFlow supports enterprise-scale automated data integration with extremely high throughput, ingesting terabytes of data, with smart partitioning, and multi-threaded, parallel loading.|[BryteFlow](https://bryteflow.com/data-integration-on-azure-synapse/)<br>[BryteFlow in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/bryte.bryteflowingest-azure-standard?tab=Overview)<br>|
+| :::image type="content" source="./media/data-integration/cdata-logo.png" alt-text="The logo of CData."::: |**CData Sync - Cloud Data Pipeline**<br>Build high-performance data pipelines for Microsoft Azure Synapse in minutes. CData Sync is an easy-to-use, go-anywhere ETL/ELT pipeline that streamlines data flow from more than 200+ enterprise data sources to Azure Synapse. With CData Sync, users can easily create automated continuous data replication between Accounting, CRM, ERP, Marketing Automation, On-Premises, and cloud data.|[Cloud Data Pipeline for Microsoft Azure Synapse](https://www.cdata.com/sync/to/azuresynapse/?utm_source=azuresynapse&utm_medium=partner)<br>[CData Sync in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/cdatasoftware.cdatasync?tab=Overview)<br>|
+| :::image type="content" source="./media/data-integration/datometry-logo.png" alt-text="The logo of Datometry."::: |**Datometry**<br>Datometry Hyper-Q makes existing applications written for Teradata run natively on Azure Synapse. Datometry emulates commonly used Teradata SQL, including analytical SQL, and advanced operational concepts like stored procedures, macros, SET tables, and more. Because Hyper-Q returns results that are bit-identical to Teradata, existing applications can be re-platformed to Azure Synapse without any significant modifications. With Datometry, enterprises can move to Azure rapidly and take full advantage of Synapse immediately.|[Datometry](https://datometry.com/platform/hyper-q-for-azure-synapse/)<br>[Datometry Hyper-Q for Azure Synapse Analytics in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/datometry1601339937807.dtm-hyperq-azure-101?tab=Overview)<br>
+| :::image type="content" source="./media/data-integration/denodo_logo.png" alt-text="The logo of Denodo."::: |**Denodo**<br>Denodo provide real-time access to data across an organization's diverse data sources. It uses data virtualization to bridge data across many sources without replication. Denodo offers broad access to structured and unstructured data residing in enterprise, big data, and cloud sources, in both batch and real time.|[Denodo](https://www.denodo.com/en)<br>[Denodo Standard in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/denodo.denodo-8_0-std-vm-payg?tab=Overview)<br> |
+| :::image type="content" source="./media/data-integration/dimodelo-logo.png" alt-text="The logo of Dimodelo."::: |**Dimodelo**<br>Dimodelo Data Warehouse Studio is a data warehouse automation tool for the Azure data platform. Dimodelo enhances developer productivity through a dedicated data warehouse modeling and ETL design tool, pattern-based best practice code generation, one-select deployment, and ETL orchestration. Dimodelo enhances maintainability with change propagation, allows developers to stay focused on business outcomes, and automates portability across data platforms.|[Dimodelo Data Warehouse Studio for Azure Synapse Analytics](https://www.dimodelo.com/data-warehouse-studio-for-azure-synapse/)<br>[Dimodelo Data Warehouse Studio in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dimodelosolutions.dimodeloazurevs)<br> |
+| :::image type="content" source="./media/data-integration/fivetran_logo.png" alt-text="The logo of Fivetran."::: |**Fivetran**<br>Fivetran helps you centralize data from disparate sources. It features a zero maintenance, zero configuration data pipeline product with a growing list of built-in connectors to all the popular data sources. Setup takes five minutes after authenticating to data sources and target data warehouse.|[Fivetran](https://www.fivetran.com/partners-microsoft-azure)<br>[Fivetran Data Pipelines in the Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/fivetran.fivetran_data_pipelines?tab=Overview)<br> |
+| :::image type="content" source="./media/data-integration/hvr-logo.png" alt-text="The logo of HVR."::: |**HVR**<br>HVR provides a real-time cloud data replication solution that supports enterprise modernization efforts. The HVR platform is a reliable, secure, and scalable way to quickly and efficiently integrate large data volumes in complex environments, enabling real-time data updates, access, and analysis. Global market leaders in various industries trust HVR to address their real-time data integration challenges and revolutionize their businesses. HVR is a privately held company based in San Francisco, with offices across North America, Europe, and Asia.|[HVR](https://www.hvr-software.com/solutions/azure-data-integration/)|
+| :::image type="content" source="./media/data-integration/incorta-logo.png" alt-text="The logo of Incorta."::: |**Incorta**<br>Incorta enables organizations to go from raw data to quickly discovering actionable insights in Azure by automating the various data preparation steps typically required to analyze complex data. which. Using a proprietary technology called Direct Data Mapping and Incorta's Blueprints (pre-built content library and best practices captured from real customer implementations), customers experience unprecedented speed and simplicity in accessing, organizing, and presenting data and insights for critical business decision-making.|[Incorta](https://www.incorta.com/solutions/microsoft-azure-synapse)<br>[Incorta Direct Data Platform in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/incorta.incorta_direct_data_platform)<br>|
+| :::image type="content" source="./media/data-integration/informatica_logo.png" alt-text="The logo of Informatica."::: |**Informatica Cloud Services for Azure**<br> Informatica Cloud offers a best-in-class solution for self-service data migration, integration, and management capabilities. Customers can quickly and reliably import, and export petabytes of data to Azure from different kinds of sources. Informatica Cloud Services for Azure provides native, high volume, high-performance connectivity to Azure Synapse, SQL Database, Blob Storage, Data Lake Store, and Azure Cosmos DB. <br><br> **Informatica PowerCenter** PowerCenter is a metadata-driven data integration platform that jumpstarts and accelerates data integration projects to deliver data to the business more quickly than manual hand coding. It serves as the foundation for your data integration investments |**Informatica Cloud services for Azure**<br>[Informatica Intelligent Data Management Cloud](https://www.informatica.com/products/cloud-integration.html)<br><br><br> **Informatica PowerCenter**<br>[Informatica Cloud Data Integration](https://www.informatica.com/products/data-integration/powercenter.html)<br> [Informatica in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?search=Informatica&page=1&filters=partners)<br>|
+| :::image type="content" source="./media/data-integration/loome-logo.png" alt-text="The logo of Loome."::: |**Loome**<br>Loome provides a unique governance workbench that seamlessly integrates with Azure Synapse. It allows you to quickly onboard your data to the cloud and load your entire data source into ADLS in Parquet format. You can orchestrate data pipelines across data engineering, data science and HPC workloads, including native integration with Azure Data Factory, Python, SQL, Synapse Spark, and Databricks. Loome allows you to easily monitor Data Quality exceptions reinforcing Synapse as your strategic Data Quality Hub. Loome keeps an audit trail of resolved issues, and proactively manages data quality with a fully automated data quality engine generating audience targeted alerts in real time.| [Loome Software](https://www.loomesoftware.com)<br> [Loome in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/bizdataptyltd1592265042221.loome?tab=Overview) |
+| :::image type="content" source="./media/data-integration/lyftron-logo.png" alt-text="The logo of Lyftron."::: |**Lyftron**<br>Lyftron modern data hub combines an effortless data hub with agile access to data sources. Lyftron eliminates traditional ETL/ELT bottlenecks with automatic data pipeline and make data instantly accessible to BI user with the modern cloud compute of Azure Synapse, Spark & Snowflake. Lyftron connectors automatically convert any source into normalized, ready-to-query relational format and replication. It offers advanced security, data governance and transformation, with simple ANSI SQL along with search capability on your enterprise data catalog.| [Lyftron](https://lyftron.com/)<br> [Lyftron ELT in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/lyftron.lyftronapp?tab=Overview) |
+| :::image type="content" source="./media/data-integration/matillion-logo.png" alt-text="The logo of Matillion."::: |**Matillion**<br>Matillion is data transformation software for cloud data warehouses. Only Matillion is purpose-built for Azure Synapse enabling businesses to achieve new levels of simplicity, speed, scale, and savings. Matillion products are highly rated and trusted by companies of all sizes to meet their data integration and transformation needs. Learn more about how you can unlock the potential of your data with Matillion's cloud-based approach to data transformation.| [Matillion](https://www.matillion.com/technology/cloud-data-warehouse/microsoft-azure-synapse/)<br> [Matillion ETL](https://azuremarketplace.microsoft.com/marketplace/apps/matillion.matillion-solution-template?tab=Overview) |
+| :::image type="content" source="./media/data-integration/heddaiowhitebg-logo.png" alt-text="The logo of Oh22 HEDDA.IO."::: |**oh22 HEDDA.IO**<br>oh22's HEDDA.IO is a knowledge-driven data quality product built for Microsoft Azure. It enables you to build a knowledge base and use it to perform various critical data quality tasks, including correction, enrichment, and standardization of your data. HEDDA.IO also allows you to do data cleansing by using cloud-based reference data services provided by reference data providers or developed and provided by you.| [oh22is/HEDDA.IO](https://github.com/oh22is/HEDDA.IO)<br> [HEDDA.IO in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/oh22.hedda-io) |
+| :::image type="content" source="./media/data-integration/precisely-logo.png" alt-text="The logo of Precisely."::: |**Precisely**<br>Precisely Connect ETL enables extract transfer and load (ETL) of data from multiple sources to Azure targets. Connect ETL is an easy to configure tool that doesn't require coding or tuning. ETL transformation can be done on the fly. It eliminates the need for costly database staging areas or manual pushes, allowing you to create your own data blends with consistent sustainable performance. Import legacy data from multiple sources including mainframe DB2, VSAM, IMS, Oracle, SQL Server, Teradata, and write them to cloud targets including Azure Databricks, Azure Synapse Analytics, and Azure Data Lake Storage. By using the high performance Connect ETL engine, you can expect optimal performance and consistency.|[Precisely](https://www.precisely.com/solution/microsoft-azure)<br> [Precisely Connect ETL for Azure in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/syncsort.dmx) |
+| :::image type="content" source="./media/business-intelligence/qlik_logo.png" alt-text="The logo of Qlik Data Integration."::: |**Qlik Data Integration**<br>Qlik Data Integration provides an automated solution for loading data into an Azure Synapse. It simplifies batch loading and incremental replication of data from many sources: SQL Server, Oracle, DB2, Sybase, MySQL, and more. |[Qlik Data Integration](https://www.qlik.com/us/products/data-integration-products)<br>[Qlik Data Integration Platform in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qlik.qlik_data_integration_platform) <br> |
+| :::image type="content" source="./media/data-integration/qubole_logo.png" alt-text="The logo of Qubole."::: |**Qubole**<br>Qubole provides a cloud-native platform that enables users to conduct ETL, analytics, and AI/ML workloads. It supports different kinds of open-source engines - Apache Spark, TensorFlow, Presto, Airflow, Hadoop, Hive, and more. It provides easy-to-use end-user tools for data processing from SQL query tools, to notebooks, and dashboards that use powerful open-source engines.|[Qubole](https://www.qubole.com/company/partners/partners-microsoft-azure/) |
+| :::image type="content" source="./media/business-intelligence/sas-logo.jpg" alt-text="The logo of SAS."::: |**SAS&reg; Viya&reg;**<br>SAS&reg; Viya&reg; is an AI, analytic, and data management solution running on a scalable, cloud-native architecture. It enables you to operationalize insights, empowering everyone ΓÇô from data scientists to business users ΓÇô to collaborate and realize innovative results faster. Using open source or SAS models, SAS&reg; Viya&reg; can be accessed through APIs or interactive interfaces to transform raw data into actions. |[SAS&reg; Viya&reg;](https://www.sas.com/microsoft)<br> [SAS&reg; Viya&reg; in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/sas-institute-560503.sas-viya-saas?tab=Overview)<br> |
+| :::image type="content" source="./media/data-integration/segment_logo.png" alt-text="The logo of Segment."::: |**Segment**<br>Segment is a data management and analytics solution that helps you make sense of customer data coming from various sources. It allows you to connect your data to over 200 tools to create better decisions, products, and experiences. Segment will transform and load multiple data sources into your warehouse for you using its built-in data connectors|[Twilio Segment](https://segment.com/)<br> |
+| :::image type="content" source="./media/data-integration/skyvia_logo.png" alt-text="The logo of Skyvia."::: |**Skyvia (data integration)**<br>Skyvia data integration provides a wizard that automates data imports. This wizard allows you to migrate data between different kinds of sources - CRMs, application database, CSV files, and more. |[Skyvia](https://skyvia.com/)<br> |
+| :::image type="content" source="./media/data-integration/snaplogic_logo.png" alt-text="The logo of SnapLogic."::: |**SnapLogic**<br>The SnapLogic Platform enables customers to quickly transfer data into and out of an Azure Synapse data warehouse. It offers the ability to integrate hundreds of applications, services, and IoT scenarios in one solution.|[SnapLogic](https://www.snaplogic.com/)<br>[SnapLogic Intelligent Integration Platform in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/snaplogic.snaplogic-elastic-integration-windows)<br> |
+| :::image type="content" source="./media/data-integration/snowmirror-logo.png" alt-text="The logo of SnowMirror."::: |**SnowMirror by GuideVision**<br>SnowMirror is a smart data replication tool for ServiceNow. It loads data from a ServiceNow instance and stores it in an on-premises or cloud database. You can then use your replicated data for custom reporting and dashboards with tools like Power BI. Because your data is replicated, it reduces load on your ServiceNow cloud instance. It can be used for system integration, disaster recovery and more. SnowMirror can be used either on premises or in the cloud, and is compatible with all leading databases, including Microsoft SQL Server and Azure Synapse.|[SnowMirror](https://www.snow-mirror.com/)|
+| :::image type="content" source="./media/data-integration/streamsets_logo.png" alt-text="The logo of StreamSets."::: |**StreamSets**<br>StreamSets provides a data integration platform for DataOps. It operationalizes the full design-deploy-operate lifecycle of integrating data into an Azure Synapse data warehouse. You can quickly ingest and integrate data to and from the warehouse via streaming, batch, or changed data capture. Also, you can ensure continuous operations with smart data pipelines that provide end-to-end data flow visibility and resiliency.|[StreamSets](https://streamsets.com/partners/microsoft)|
+| :::image type="content" source="./media/data-integration/talend-logo.png" alt-text="The logo of Talend."::: |**Talend Cloud**<br>Talend Cloud is an enterprise data integration platform to connect, access, and transform any data across the cloud or on-premises. It's an integration platform-as-a-service that provides broad connectivity, built-in data quality, and native support for the latest big data and cloud technologies. |[Talend Cloud](https://www.talend.com/)<br> [Talend Cloud in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/talend.talendremoteengine?source=datamarket&tab=Overview) |
+| :::image type="content" source="./media/data-integration/theobald-logo.png" alt-text="The logo of Theobald."::: |**Theobald Software**<br>Theobald Software has been offering various solutions for data integration with SAP since 2004. Secure, stable, fast and, if required, incremental access to all types of SAP data objects on SAP ERP, S/4, BW or BW/4 systems is their area of expertise; an expertise that has been officially certified by SAP and which more than 3,500 global customers are making use of. Their products, Xtract IS for Azure and Xtract Universal, are constantly improving and have evolved into SAP ETL/ELT solutions that seamlessly integrate with Microsoft Azure, where Synapse and Data Factory pipelines can be used to orchestrate SAP data extractions, while Azure Storage serves as a destination for SAP data ingestions. |[Theobald Software](https://theobald-software.com/en/products-technologies/)<br> [Theobald Software Xtract IS for Azure in the Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/theobaldsoftwaregmbh.xtractisforazure) |
+| :::image type="content" source="./media/data-integration/timextender-logo.png" alt-text="The logo of TimeXtender."::: |**TimeXtender**<br>TimeXtender's Discovery Hub helps companies build a modern data estate by providing an integrated data management platform that accelerates time to data insights by up to 10 times. Going beyond everyday ETL and ELT, it provides capabilities for data access, data modeling, and compliance in a single platform. Discovery Hub provides a cohesive data fabric for cloud scale analytics. It allows you to connect and integrate various data silos, catalog, model, move, and document data for analytics and AI. | [TimeXtender](https://www.timextender.com/)<br> [TimeXtender in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?search=timextender&page=1) |
+| :::image type="content" source="./media/data-integration/wherescape_logo.png" alt-text="The logo of WhereScape."::: |**WhereScape RED**<br> WhereScape RED is an IDE that provides teams with automation tools to streamline ETL workflows. The IDE provides best practice, optimized native code for popular data targets. Use WhereScape RED to cut the time to develop, deploy, and operate your data infrastructure.|[WhereScape](https://www.wherescape.com/)<br> [WhereScape&reg; RED in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/wherescapesoftware.wherescape-red?source=datamarket&tab=Overview) |
## Next steps
-To learn more about other partners, see [Business Intelligence partners](business-intelligence.md), [Data Management partners](data-management.md), and [Machine Learning and AI partners](machine-learning-ai.md).
-See how to [discover partner solutions through Synapse Studio](browse-partners.md).
+- To learn more about other partners, see [Business Intelligence partners](business-intelligence.md), [Data Management partners](data-management.md), and [Machine Learning and AI partners](machine-learning-ai.md).
+
+- See how to [discover partner solutions through Synapse Studio](browse-partners.md).
synapse-analytics Data Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/data-management.md
Title: Data management partners
+ Title: Data management partners
description: Lists of third-party data management partners with solutions that support Azure Synapse Analytics. + + Last updated : 06/14/2023 + - Previously updated : 04/17/2018--
This article highlights Microsoft partner companies with data management tools a
## Data management partners | Partner | Description | Website/Product link | | - | -- | -- |
-| ![Aginity](./media/data-management/aginity-logo.png) |**Aginity**<br>Aginity is an analytics development tool. It puts the full power of Microsoft's Synapse platform in the hands of analysts and engineers. The rich and intuitive SQL development environment allows team members to connect to over a dozen industry leading analytics platforms. It allows users to ingest data in a variety of formats, and quickly build complex business calculation to serve the results into Business Intelligence and Machine Learning use cases. The entire application is built around a central catalog which makes collaboration across the analytics team a reality, and the sophisticated management capabilities and fine grained security make governance a breeze. |[Product page](https://www.aginity.com/databases/microsoft/)<br> |
-| ![Alation](./media/data-management/alation-logo.png) |**Alation**<br>Alation's data catalog dramatically improves the productivity, increases the accuracy, and drives confident data-driven decision making for analysts. Alation's data catalog empowers everyone in your organization to find, understand, and govern data. |[Product page](https://www.alation.com/product/data-catalog/)<br> |
-| ![BI Builders (Xpert BI)](./media/data-integration/bibuilders-logo.png) |**BI Builders (Xpert BI)**<br> Xpert BI provides an intuitive and searchable catalog for the line-of-business user to find, trust, and understand data and reports. The solution covers the whole data platform including Azure Synapse Analytics, ADLS Gen 2, Azure SQL Database, Analysis Services and Power BI, and also data flows and data movement end-to-end. Data stewards can update descriptions and tag data to follow regulatory requirements. Xpert BI can be integrated via APIs to other catalogs such as Microsoft Purview. It supplements traditional data catalogs with a business user perspective. |[Product page](https://www.bi-builders.com/adding-automation-and-governance-to-azure-analytics/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/bi-builders-as.xpert-bi-vm)<br>|
-| ![Coffing Data Warehousing](./media/data-management/coffing-data-warehousing-logo.png) |**Coffing Data Warehousing**<br>Coffing Data Warehousing provides Nexus Chameleon, a tool with 10 years of design dedicated to querying systems. Nexus is available as a query tool for dedicated SQL pool in Azure Synapse Analytics. Use Nexus to query in-house and cloud computers and join data across different platforms. Point-Click-Report! |[Product page](https://coffingdw.com/software/nexus/)<br> |
-| ![Inbrein](./media/data-management/inbrein-logo.png) |**Inbrein MicroERD**<br>Inbrein MicroERD provides the tools that you need to create a precise data model, reduce data redundancy, improve productivity, and observe standards. By using its UI, which was developed based on extensive user experiences, a modeler can work on DB models easily and conveniently. You can continuously enjoy new and improved functions of MicroERD through prompt functional improvements and updates. |Product page<br> |
-| ![Infolibrarian](./media/data-management/infolibrarian-logo.png) |**Infolibrarian (Metadata Management Server)**<br>InfoLibrarian catalogs, stores, and manages metadata to help you solve key pain points of data management. Infolibrarian provides metadata management, data governance, and asset management solutions for managing and publishing metadata from a diverse set of tools and technologies. |[Product page](http://www.infolibcorp.com/metadata-management/software-tools)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/infolibrarian.infolibrarian-metadata-management-server)<br> |
-| ![Kyligence](./media/data-management/kyligence-logo.png) |**Kyligence**<br>Founded by the creators of Apache Kylin, Kyligence is on a mission to accelerate the productivity of its customers by automating data management, discovery, interaction, and insight generation – all without barriers. Kyligence Cloud enables cluster deployment, enhances data access, and dramatically accelerates data analysis. Kyligence's AI-augmented Big Data analytics management platform makes the often-challenging task of building enterprise-scale data lakes fast and easy.|[Product page](https://kyligence.io/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/kyligence.kyligence-cloud-saas)<br> |
-| ![Redpoint Global](./media/data-management/redpoint-global-logo.png) |**RedPoint Data Management**<br>RedPoint Data Management enables marketers to apply all their data to drive cross-channel customer engagement while doing structured and unstructured data management. With RedPoint, you can maximize the value of your structured and unstructured data to deliver the hyper-personalized, contextual interactions needed to engage today's omni-channel customer. Drag-and-drop interface makes designing and executing data management processes easy. |[Product page](https://www.redpointglobal.com/customer-data-management)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/redpoint-global.redpoint-rpdm)<br> |
-| ![SAS](./media/business-intelligence/sas-logo.jpg) |**SAS® Viya®**<br>SAS® Viya® is an AI, analytic, and data management solution running on a scalable, cloud-native architecture. It enables you to operationalize insights, empowering everyone – from data scientists to business users – to collaborate and realize innovative results faster. Using open source or SAS models, SAS® Viya® can be accessed through APIs or interactive interfaces to transform raw data into actions. |[Product page](https://www.sas.com/microsoft)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/sas-institute-560503.sas-viya-saas?tab=Overview)<br> |
-| ![Sentry One](./media/data-management/sql-sentry-logo.png) |**SentryOne (DW Sentry)**<br>With the intelligent data movement dashboard and event calendar, you always know exactly what is impacting your workload. Designed to give you visibility into your queries and jobs running to load, backup, or restore your data, never worry about making the most of your Azure resources. |[Product page](https://sentryone.com/platform/azure-sql-dw-performance-monitoring/)<br>[Azure Marketplace](https://sentryone.com/platform/azure-sql-dw-performance-monitoring/)<br> |
-| ![SqlDBM](./media/data-management/sqldbm-logo.png) |**SqlDBM**<br>SqlDBM is a Cloud-based Data Modeling Tool that offers you an easy, convenient way to develop your database anywhere on any browser. All while incorporating any needed database rules and objects such as database keys, schemas, indexes, column constraints, and relationships. |[Product page](http://sqldbm.com/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/sqldbm1583438206845.sqldbm-data-modeling-tool?tab=Overview)<br>|
-| ![Tamr](./media/data-management/tamr-logo.png) |**Tamr**<br>With Tamr, organizations can supply Azure Synapse with mastered data, allowing them to get most from Azure Synapse's analytic capabilities. Tamr's cloud-native data mastering solutions use machine learning to do the heavy lifting to combine, cleanse, and categorize data, with intuitive human feedback workflows to bridge the gap between data and business outcomes. Tamr integrates with Azure's data services including Azure Synapse Analytics, Azure Databricks, Azure HDInsight, Azure Data Catalog, Azure Data Lake Storage, and Azure Data Factory. It allows for data mastering at scale with a lower total cost of ownership, by taking advantage of the flexibility and scale of Azure. |[Product page](https://www.tamr.com/tamr-partners/microsoft-azure/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/tamrinc.unify_v_2019?tab=Overview) |
-| ![Teleran](./media/data-management/teleran-logo.jpg) |**Teleran**<br>Teleran's Query Control prevents inappropriate and poorly formed queries from reaching Synapse and wasting compute resources. It sends intelligent messages to analytics users guiding them to more efficiently interact with the data. The goal is to ensure good business results without needlessly driving up Azure costs. Teleran Usage Analysis delivers an analysis of user, application, query, and data usage activity. It allows you to always have the entire picture of what's going on. It enables you to improve service, increase business productivity, and optimize Synapse consumption costs. |[Product page](https://teleran.com/azure-synapse-optimization-cost-control/)<br>|
+| :::image type="content" source="./media/data-management/aginity-logo.png" alt-text="The logo of Aginity."::: |**Aginity**<br>Aginity is an analytics development tool. It puts the full power of Microsoft's Synapse platform in the hands of analysts and engineers. The rich and intuitive SQL development environment allows team members to connect to over a dozen industry leading analytics platforms. It allows users to ingest data in a variety of formats, and quickly build complex business calculation to serve the results into Business Intelligence and Machine Learning use cases. The entire application is built around a central catalog which makes collaboration across the analytics team a reality, and the sophisticated management capabilities and fine grained security make governance a breeze. |[Aginity](https://www.aginity.com/databases/microsoft/)<br> |
+| :::image type="content" source="./media/data-management/alation-logo.png" alt-text="The logo of Alation."::: |**Alation**<br>Alation's data catalog dramatically improves the productivity, increases the accuracy, and drives confident data-driven decision making for analysts. Alation's data catalog empowers everyone in your organization to find, understand, and govern data. |[Alation](https://www.alation.com/product/data-catalog/)<br> |
+| :::image type="content" source="./media/data-integration/bibuilders-logo.png" alt-text="The logo of BI Builders (Xpert BI)."::: |**BI Builders (Xpert BI)**<br> Xpert BI provides an intuitive and searchable catalog for the line-of-business user to find, trust, and understand data and reports. The solution covers the whole data platform including Azure Synapse Analytics, ADLS Gen 2, Azure SQL Database, Analysis Services and Power BI, and also data flows and data movement end-to-end. Data stewards can update descriptions and tag data to follow regulatory requirements. Xpert BI can be integrated via APIs to other catalogs such as Microsoft Purview. It supplements traditional data catalogs with a business user perspective. |[Xpert BI](https://www.bi-builders.com/adding-automation-and-governance-to-azure-analytics/)<br>[Xpert BI in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/bi-builders-as.xpert-bi-vm)<br>|
+| :::image type="content" source="./media/data-management/coffing-data-warehousing-logo.png" alt-text="The logo of Coffing Data Warehousing."::: |**Coffing Data Warehousing**<br>Coffing Data Warehousing provides Nexus Chameleon, a tool with 10 years of design dedicated to querying systems. Nexus is available as a query tool for dedicated SQL pool in Azure Synapse Analytics. Use Nexus to query in-house and cloud computers and join data across different platforms. Point-Select-Report! |[Coffing Data Warehousing](https://coffingdw.com/software/nexus/)<br> |
+| :::image type="content" source="./media/data-management/inbrein-logo.png" alt-text="The logo of Inbrein."::: |**Inbrein MicroERD**<br>Inbrein MicroERD provides the tools that you need to create a precise data model, reduce data redundancy, improve productivity, and observe standards. By using its UI, which was developed based on extensive user experiences, a modeler can work on DB models easily and conveniently. You can continuously enjoy new and improved functions of MicroERD through prompt functional improvements and updates. |[Inbrein MicroDesigner](http://www.inbrein.com/en/solutions/Micro%20Designer.html)<br> |
+| :::image type="content" source="./media/data-management/infolibrarian-logo.png" alt-text="The logo of InfoLibrarian."::: |**InfoLibrarian (Metadata Management Server)**<br>InfoLibrarian catalogs, stores, and manages metadata to help you solve key pain points of data management. InfoLibrarian provides metadata management, data governance, and asset management solutions for managing and publishing metadata from a diverse set of tools and technologies. |[InfoLibrarian](http://www.infolibcorp.com/metadata-management/software-tools)<br> [Metadata Management Server (Data Catalog)
+ in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/infolibrarian.infolibrarian-metadata-management-server)<br> |
+| :::image type="content" source="./media/data-management/kyligence-logo.png" alt-text="The logo of Kyligence."::: |**Kyligence**<br>Founded by the creators of Apache Kylin, Kyligence is on a mission to accelerate the productivity of its customers by automating data management, discovery, interaction, and insight generation ΓÇô all without barriers. Kyligence Cloud enables cluster deployment, enhances data access, and dramatically accelerates data analysis. Kyligence's AI-augmented Big Data analytics management platform makes the often-challenging task of building enterprise-scale data lakes fast and easy.|[Kyligence](https://kyligence.io/)<br> [Kyligence Cloud in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/kyligence.kyligence-cloud-saas)<br> |
+| :::image type="content" source="./media/data-management/redpoint-global-logo.png" alt-text="The logo of Redpoint Global."::: |**RedPoint Data Management**<br>RedPoint Data Management enables marketers to apply all their data to drive cross-channel customer engagement while doing structured and unstructured data management. With RedPoint, you can maximize the value of your structured and unstructured data to deliver the hyper-personalized, contextual interactions needed to engage today's omni-channel customer. Drag-and-drop interface makes designing and executing data management processes easy. |[RedPoint Data Management](https://www.redpointglobal.com/customer-data-management)<br> [rgOne&trade; in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/redpoint-global.redpoint-rpdm)<br> |
+| :::image type="content" source="./media/business-intelligence/sas-logo.jpg" alt-text="The logo of SAS."::: |**SAS&reg; Viya&reg;**<br>SAS&reg; Viya&reg; is an AI, analytic, and data management solution running on a scalable, cloud-native architecture. It enables you to operationalize insights, empowering everyone ΓÇô from data scientists to business users ΓÇô to collaborate and realize innovative results faster. Using open source or SAS models, SAS&reg; Viya&reg; can be accessed through APIs or interactive interfaces to transform raw data into actions. |[SAS&reg; Viya&reg;](https://www.sas.com/microsoft)<br> [SAS&reg; Viya&reg; (SAS&reg; Cloud) in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/sas-institute-560503.sas-viya-saas?tab=Overview)<br> |
+| :::image type="content" source="./media/data-management/sql-sentry-logo.png" alt-text="The logo of SentryOne."::: |**SentryOne (DW Sentry)**<br>With the intelligent data movement dashboard and event calendar, you always know exactly what is impacting your workload. Designed to give you visibility into your queries and jobs running to load, backup, or restore your data, never worry about making the most of your Azure resources. |[SentryOne](https://sentryone.com/platform/azure-sql-dw-performance-monitoring/)<br>[SentryOne in the Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps?search=SentryOne&page=1)<br> |
+| :::image type="content" source="./media/data-management/sqldbm-logo.png" alt-text="The logo of SqlDBM."::: |**SqlDBM**<br>SqlDBM is a Cloud-based Data Modeling Tool that offers you an easy, convenient way to develop your database anywhere on any browser. All while incorporating any needed database rules and objects such as database keys, schemas, indexes, column constraints, and relationships. |[SqlDBM](http://sqldbm.com/)<br> [SqlDBM Data Modeling Tool for Synapse and SQL Server in the Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/sqldbm1583438206845.sqldbm-data-modeling-tool?tab=Overview)<br>|
+| :::image type="content" source="./medim/)<br> [Tamr in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/tamrinc.unify_v_2019?tab=Overview) |
+| :::image type="content" source="./media/data-management/teleran-logo.jpg" alt-text="The logo of Teleran."::: |**Teleran**<br>Teleran's Query Control prevents inappropriate and poorly formed queries from reaching Synapse and wasting compute resources. It sends intelligent messages to analytics users guiding them to more efficiently interact with the data. The goal is to ensure good business results without needlessly driving up Azure costs. Teleran Usage Analysis delivers an analysis of user, application, query, and data usage activity. It allows you to always have the entire picture of what's going on. It enables you to improve service, increase business productivity, and optimize Synapse consumption costs. |[Teleran](https://teleran.com/azure-synapse-optimization-cost-control/)|
## Next steps
-To learn more about other partners, see [Business Intelligence partners](business-intelligence.md), [Data Integration partners](data-integration.md), and [Machine Learning and AI partners](machine-learning-ai.md).
+
+- To learn more about other partners, see [Business Intelligence partners](business-intelligence.md), [Data Integration partners](data-integration.md), and [Machine Learning and AI partners](machine-learning-ai.md).
synapse-analytics Machine Learning Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/machine-learning-ai.md
Title: Machine learning and AI partners
+ Title: Machine learning and AI partners
description: Lists of third-party machine learning and artificial intelligence partners with solutions that support Azure Synapse Analytics. + + Last updated : 06/14/2023 - Previously updated : 06/22/2020--+
This article highlights Microsoft partners with machine learning and artificial
## Machine learning and AI partners | Partner | Description | Website/Product link | | - | -- | -- |
-| ![Dataiku](./media/machine-learning-and-ai/dataiku-logo.png) |**Dataiku**<br>Dataiku is the centralized data platform that moves businesses along their data journey from analytics at scale to Enterprise AI, powering self-service analytics while also ensuring the operationalization of machine learning models in production. |[Product page](https://www.dataiku.com/partners/microsoft/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/dataiku.dataiku-data-science-studio)<br> |
-| ![MATLAB](./media/machine-learning-and-ai/mathworks-logo.png) |**Matlab**<br>MATLAB® is a programming platform designed for engineers and scientists. It combines a desktop environment tuned for iterative analysis and design processes with a programming language that expresses matrix and array mathematics directly. Millions worldwide use MATLAB for a range of applications, including machine learning, deep learning, signal and image processing, control systems, and computational finance. |[Product page](https://www.mathworks.com/products/database.html)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/mathworks-inc.matlab-byol?tab=Overview)<br> |
-| ![Qubole](./media/data-integration/qubole_logo.png) |**Qubole**<br>Qubole provides a cloud-native platform that enables users to conduct ETL, analytics, and AI/ML workloads. It supports different kinds of open-source engines - Apache Spark, TensorFlow, Presto, Airflow, Hadoop, Hive, and more. It provides easy-to-use end-user tools for data processing from SQL query tools, to notebooks, and dashboards that use powerful open-source engines.|[Product page](https://www.qubole.com/company/partners/partners-microsoft-azure/) |
-| ![SAS](./media/business-intelligence/sas-logo.jpg) |**SAS® Viya®**<br>SAS® Viya® is an AI, analytic, and data management solution running on a scalable, cloud-native architecture. It enables you to operationalize insights, empowering everyone – from data scientists to business users – to collaborate and realize innovative results faster. Using open source or SAS models, SAS® Viya® can be accessed through APIs or interactive interfaces to transform raw data into actions. |[Product page](https://www.sas.com/microsoft)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/sas-institute-560503.sas-viya-saas?tab=Overview)<br> |
+| :::image type="content" source="./media/machine-learning-and-ai/dataiku-logo.png" alt-text="The logo of Dataiku."::: |**Dataiku**<br>Dataiku is the centralized data platform that moves businesses along their data journey from analytics at scale to Enterprise AI, powering self-service analytics while also ensuring the operationalization of machine learning models in production. |[Dataiku](https://www.dataiku.com/partners/microsoft/)<br> [Dataiku in the Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps?search=Dataiku&page=1)<br> |
+| :::image type="content" source="./media/machine-learning-and-ai/mathworks-logo.png" alt-text="The logo of MATLAB."::: |**Matlab**<br>MATLAB&reg; is a programming platform designed for engineers and scientists. It combines a desktop environment tuned for iterative analysis and design processes with a programming language that expresses matrix and array mathematics directly. Millions worldwide use MATLAB for a range of applications, including machine learning, deep learning, signal and image processing, control systems, and computational finance. |[Database Toolbox by MathWorks](https://www.mathworks.com/products/database.html)<br> [MATLAB in the Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/mathworks-inc.matlab-byol?tab=Overview)<br> |
+| :::image type="content" source="./media/data-integration/qubole_logo.png" alt-text="The logo of Qubole."::: |**Qubole**<br>Qubole provides a cloud-native platform that enables users to conduct ETL, analytics, and AI/ML workloads. It supports different kinds of open-source engines - Apache Spark, TensorFlow, Presto, Airflow, Hadoop, Hive, and more. It provides easy-to-use end-user tools for data processing from SQL query tools, to notebooks, and dashboards that use powerful open-source engines.|[Qubole](https://www.qubole.com/company/partners/partners-microsoft-azure/) |
+| :::image type="content" source="./media/business-intelligence/sas-logo.jpg" alt-text="The logo of SAS."::: |**SAS&reg; Viya&reg;**<br>SAS&reg; Viya&reg; is an AI, analytic, and data management solution running on a scalable, cloud-native architecture. It enables you to operationalize insights, empowering everyone ΓÇô from data scientists to business users ΓÇô to collaborate and realize innovative results faster. Using open source or SAS models, SAS&reg; Viya&reg; can be accessed through APIs or interactive interfaces to transform raw data into actions. |[SAS&reg; Viya&reg;](https://www.sas.com/microsoft)<br> [SAS&reg; Viya&reg; in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/sas-institute-560503.sas-viya-saas?tab=Overview)<br> |
## Next steps
-To learn more about other partners, see [Business Intelligence partners](business-intelligence.md), [Data Integration partners](data-integration.md), and [Data Management partners](data-management.md).
+
+- To learn more about other partners, see [Business Intelligence partners](business-intelligence.md), [Data Integration partners](data-integration.md), and [Data Management partners](data-management.md).
synapse-analytics System Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/system-integration.md
Title: System integration partners
+ Title: System integration partners
description: List of industry system integrators building customer solutions with Azure Synapse Analytics- --- Previously updated : 11/24/2020 -+ Last updated : 06/14/2023+++ # Azure Synapse Analytics system integration partners
This article highlights Microsoft system integration partner companies building
## System Integration partners | Partner | Description | Website/Product link | | - | -- | -- |
-| ![Accenture](./media/system-integration/accenture-logo.png) |**Accenture**<br>Bringing together 45,000+ dedicated professionals, the Accenture Microsoft Business Group, powered by Avanade, helps enterprises to thrive in the era of digital disruption.|[Partner page](https://www.accenture.com/us-en/services/microsoft-index)<br>|
-| ![Adatis](./media/system-integration/adatis-logo.png) |**Adatis**<br>Adatis offers services that specialize in advanced data analytics, from data strategy and consultancy, to world class delivery and managed services. |[Partner page](https://adatis.co.uk/)<br> |
-| ![Blue Granite](./media/system-integration/blue-granite-logo.png) |**Blue Granite**<br>The BlueGranite Catalyst for Analytics is an engagement approach that features their "think big, but start small" philosophy. Starting with collaborative envisioning and strategy sessions, Blue Granite work with clients to discover, create, and realize the value of new modern data and analytics solutions, using the latest technologies on the Microsoft platform.|[Partner page](https://www.blue-granite.com/)<br>|
-| ![Capax Global](./media/system-integration/capax-global-logo.png) |**Capax Global**<br>We improve your business by making better use of information you already have. Building custom solutions that align to your business goals, and setting you up for long-term success. We combine well-established patterns and practices with technology while using our team's wide range of industry and commercial software development experience. We share a passion for technology, innovation, and client satisfaction. Our pride for what we do drives the success of our projects and is fundamental to why people partner with us.|[Partner page](https://www.capaxglobal.com/)<br>|
-| ![Coeo](./media/system-integration/coeo-logo.png) |**Coeo**<br>Coeo's team includes cloud consultants with deep expertise in Azure databases, and BI consultants dedicated to providing flexible and scalable analytic solutions. Coeo can help you move to a hybrid or full Azure solution.|[Partner page](https://www.coeo.com/solution/technology/microsoft-azure/)<br>|
-| ![Cognizant](./media/system-integration/cognizant-logo.png) |**Cognizant**<br>As a Microsoft strategic partner, Cognizant has the consulting skills and experience to help customers make the journey to the cloud. For each client project, Cognizant uses its strong partnership with Microsoft to maximize customer benefits from the Azure architecture.|[Partner page](https://mbg.cognizant.com/technologies-capabilities/microsoft-azure/)<br>|
-| ![Neal Analytics](./media/system-integration/neal-analytics-logo.png) |**Neal Analytics**<br>Neal Analytics helps companies navigate their digital transformation journey in converting data into valuable assets and a competitive advantage. With our machine learning and data engineering expertise, we use data to drive margin increases and profitable analytics projects. Comprised of consultants specializing in Data Science, Business Intelligence, Cognitive Services, practical AI, Data Management, and IoT, Neal Analytics is trusted to solve unique business problems and optimize operations across industries.|[Partner page](https://nealanalytics.com/)<br>|
-| ![Pragmatic Works](./media/system-integration/pragmatic-works-logo.png) |**Pragmatic Works**<br>Pragmatic Works can help you capitalize on the value of your data by empowering more users and applications on the same dataset. We kickstart, accelerate, and maintain your cloud environment with a range of solutions that fit your business needs.|[Partner page](https://www.pragmaticworks.com/)<br>|
+| :::image type="content" source="./media/system-integration/accenture-logo.png" alt-text="The logo of Accenture."::: |**Accenture**<br>Bringing together 45,000+ dedicated professionals, the Accenture Microsoft Business Group, powered by Avanade, helps enterprises to thrive in the era of digital disruption.|[Accenture](https://www.accenture.com/us-en/services/microsoft-index)<br>|
+| :::image type="content" source="./media/system-integration/adatis-logo.png" alt-text="The logo of Adatis."::: |**Adatis**<br>Adatis offers services that specialize in advanced data analytics, from data strategy and consultancy, to world class delivery and managed services. |[Adatis](https://adatis.co.uk/)<br> |
+| :::image type="content" source="./media/system-integration/blue-granite-logo.png" alt-text="The logo of Blue Granite."::: |**Blue Granite**<br>The BlueGranite Catalyst for Analytics is an engagement approach that features their "think big, but start small" philosophy. Starting with collaborative envisioning and strategy sessions, Blue Granite works with clients to discover, create, and realize the value of new modern data and analytics solutions, using the latest technologies on the Microsoft platform.|[Blue Granite](https://www.blue-granite.com/)<br>|
+| :::image type="content" source="./media/system-integration/capax-global-logo.png" alt-text="The logo of Capax Global."::: |**Capax Global**<br>We improve your business by making better use of information you already have. Building custom solutions that align to your business goals, and setting you up for long-term success. We combine well-established patterns and practices with technology while using our team's wide range of industry and commercial software development experience. We share a passion for technology, innovation, and client satisfaction. Our pride for what we do drives the success of our projects and is fundamental to why people partner with us.|[Capax Global](https://www.capaxglobal.com/)<br>|
+| :::image type="content" source="./media/system-integration/coeo-logo.png" alt-text="The logo of Coeo."::: |**Coeo**<br>Coeo's team includes cloud consultants with deep expertise in Azure databases, and BI consultants dedicated to providing flexible and scalable analytic solutions. Coeo can help you move to a hybrid or full Azure solution.|[Coeo](https://www.coeo.com/analytics/)<br>|
+| :::image type="content" source="./media/system-integration/cognizant-logo.png" alt-text="The logo of Cognizant."::: |**Cognizant**<br>As a Microsoft strategic partner, Cognizant has the consulting skills and experience to help customers make the journey to the cloud. For each client project, Cognizant uses its strong partnership with Microsoft to maximize customer benefits from the Azure architecture.|[Cognizant](https://mbg.cognizant.com/technologies-capabilities/microsoft-azure/)<br>|
+| :::image type="content" source="./media/system-integration/neal-analytics-logo.png" alt-text="The logo of Neal Analytics."::: |**Neal Analytics**<br>Neal Analytics helps companies navigate their digital transformation journey in converting data into valuable assets and a competitive advantage. With our machine learning and data engineering expertise, we use data to drive margin increases and profitable analytics projects. Comprised of consultants specializing in Data Science, Business Intelligence, Cognitive Services, practical AI, Data Management, and IoT, Neal Analytics is trusted to solve unique business problems and optimize operations across industries.|[Neal Analytics](https://nealanalytics.com/)<br>|
+| :::image type="content" source="./media/system-integration/pragmatic-works-logo.png" alt-text="The logo of Pragmatic Works."::: |**Pragmatic Works**<br>Pragmatic Works can help you capitalize on the value of your data by empowering more users and applications on the same dataset. We kickstart, accelerate, and maintain your cloud environment with a range of solutions that fit your business needs.|[Pragmatic Works](https://www.pragmaticworks.com/)<br>|
-## Next Steps
-To learn more about some of our other partners, see [Business intelligence partners](business-intelligence.md), [Data Integration partners](data-integration.md), [Data Management partners](data-management.md), and also [Machine Learning & AI partners](machine-learning-ai.md).
+## Next steps
+- To learn more about some of our other partners, see [Business intelligence partners](business-intelligence.md), [Data Integration partners](data-integration.md), [Data Management partners](data-management.md), and also [Machine Learning & AI partners](machine-learning-ai.md).
synapse-analytics Synapse Workspace Synapse Rbac Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-synapse-rbac-roles.md
Title: Azure Synapse RBAC roles
-description: This article describes the built-in Synapse RBAC (role-based access control) roles, the permissions they grant, and the scopes at which they can be used.
+description: This article describes the built-in Synapse RBAC (role-based access control) roles, the permissions they grant, and the scopes at which they can be used.
--- Previously updated : 04/22/2022 - Last updated : 06/16/2023+++ # Synapse RBAC Roles
The article describes the built-in Synapse RBAC (role-based access control) role
For more information on reviewing and assigning Synapse role memberships, see [how to review Synapse RBAC role assignments](./how-to-review-synapse-rbac-role-assignments.md) and [how to assign Synapse RBAC roles](./how-to-manage-synapse-rbac-role-assignments.md).
-## What's changed since the preview?
-
-For users familiar with the Synapse RBAC roles provided during the preview, the following changes apply:
-- Workspace Admin is renamed **Synapse Administrator**-- Apache Spark Admin is renamed **Synapse Apache Spark Administrator** and has permission to see all published code artifacts, including SQL scripts. This role no longer gives permission to use the workspace MSI, which requires the Synapse Credential User role. This permission is required to run pipelines. -- SQL Admin is renamed **Synapse SQL Administrator** and has permission to see all published code artifacts, including Spark notebooks and jobs. This role no longer gives permission to use the workspace MSI, which requires the Synapse Credential User role. This permission is required to run pipelines.-- **New finer-grained Synapse RBAC roles** are introduced that focus on supporting development and operations personas rather than specific analytics runtimes. -- **New lower-level scopes** are introduced for several roles. These scopes allow roles to be restricted to specific resources or objects.- ## Built-in Synapse RBAC roles and scopes The following table describes the built-in roles and the scopes at which they can be used.
->[!Note]
+> [!NOTE]
> Users with any Synapse RBAC role at any scope automatically have the Synapse User role at workspace scope. > [!IMPORTANT]
The following table describes the built-in roles and the scopes at which they ca
## Synapse RBAC roles and the actions they permit
->[!Note]
+> [!NOTE]
>- All actions listed in the tables below are prefixed, "Microsoft.Synapse/..."</br> >- All artifact read, write, and delete actions are with respect to published artifacts in the live service. These permissions do not affect access to artifacts in a connected Git repo.
workspaces/credentials/useSecret/action|Synapse Administrator</br>Synapse Creden
The table below lists Synapse RBAC scopes and the roles that can be assigned at each scope. >[!NOTE]
->To create or delete an object you must have permissions at a higher-level scope.
+> To create or delete an object you must have permissions at a higher-level scope.
Scope|Roles --|--
Linked service |Synapse Administrator </br>Synapse Credential User
Credential |Synapse Administrator </br>Synapse Credential User >[!NOTE]
->All artifact roles and actions are scoped at the workspace level.
+> All artifact roles and actions are scoped at the workspace level.
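To see how a built-in role is granted in practice, the following Azure PowerShell sketch assigns the **Synapse Contributor** role at workspace scope and then lists the assignments. This is a minimal sketch, not part of the source article: the workspace name and sign-in name are placeholders, and the `Az.Synapse` module is assumed to be installed.

```azurepowershell-interactive
# Minimal sketch with placeholder names; assumes the Az.Synapse module and an
# account that already holds Synapse Administrator on the workspace.
Connect-AzAccount

# Assign the built-in Synapse Contributor role at workspace scope.
New-AzSynapseRoleAssignment -WorkspaceName "contoso-synapse" `
    -RoleDefinitionName "Synapse Contributor" `
    -SignInName "developer@contoso.com"

# Review the role assignments for the workspace.
Get-AzSynapseRoleAssignment -WorkspaceName "contoso-synapse"
```

For finer-grained scopes and the authoritative steps, see the how-to articles linked in the next steps.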
## Next steps - Learn [how to review Synapse RBAC role assignments](./how-to-review-synapse-rbac-role-assignments.md) for a workspace.-- Learn [how to assign Synapse RBAC roles](./how-to-manage-synapse-rbac-role-assignments.md)
+- Learn [how to assign Synapse RBAC roles](./how-to-manage-synapse-rbac-role-assignments.md)
synapse-analytics Release Notes 10 0 10106 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/release-notes-10-0-10106-0.md
For tooling improvements, make sure you have the correct version installed speci
| Service improvements | Details | | | | |**Virtual Network Service Endpoints Generally Available**|This release includes general availability of Virtual Network (VNet) Service Endpoints for SQL Analytics in Azure Synapse in all Azure regions. VNet Service Endpoints enable you to isolate connectivity to your server from a given subnet or set of subnets within your virtual network. The traffic to Azure Synapse from your VNet will always stay within the Azure backbone network. This direct route will be preferred over any specific routes that take Internet traffic through virtual appliances or on-premises. No additional billing is charged for virtual network access through service endpoints. Current pricing model for [Azure Synapse](https://azure.microsoft.com/pricing/details/sql-data-warehouse/gen2/) applies as is.<br/><br/>With this release, we also enabled PolyBase connectivity to [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) (ADLS) via [Azure Blob File System](../../storage/blobs/data-lake-storage-abfs-driver.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) (ABFS) driver. Azure Data Lake Storage Gen2 brings all the qualities that are required for the complete lifecycle of analytics data to Azure Storage. Features of the two existing Azure storage services, Azure Blob Storage and Azure Data Lake Storage Gen1 are converged. Features from [Azure Data Lake Storage Gen1](../../data-lake-store/index.yml?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json), such as file system semantics, file-level security, and scale are combined with low-cost, tiered storage, and high availability/disaster recovery capabilities from [Azure Blob Storage](../../storage/blobs/storage-blobs-introduction.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json).<br/><br/>Using Polybase you can also import data into SQL Analytics in Azure Synapse from Azure Storage secured to VNet. Similarly, exporting data from Azure Synapse to Azure Storage secured to VNet is also supported via Polybase.<br/><br/>For more information on VNet Service Endpoints in Azure Synapse, refer to the [blog post](https://azure.microsoft.com/blog/general-availability-of-vnet-service-endpoints-for-azure-sql-data-warehouse/) or the [documentation](/azure/azure-sql/database/vnet-service-endpoint-rule-overview?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json).|
-|**Automatic Performance Monitoring (Preview)**|[Query Store](/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) is now available in Preview in SQL Analytics in Azure Synapse. Query Store is designed to help you with query performance troubleshooting by tracking queries, query plans, runtime statistics, and query history to help you monitor the activity and performance of your data warehouse. Query Store is a set of internal stores and Dynamic Management Views (DMVs) that allow you to:<br/><br/>&bull; &nbsp; Identify and tune top resource consuming queries<br/>&bull; &nbsp; Identify and improve unplanned workloads<br/>&bull; &nbsp; Evaluate query performance and impact to the plan by changes in statistics, indexes, or system size (DWU setting)<br/>&bull; &nbsp; See full query text for all queries executed<br/><br/>The Query Store contains three actual stores:<br/>&bull; &nbsp; A plan store for persisting the execution plan information<br/>&bull; &nbsp; A runtime stats store for persisting the execution statistics information<br/>&bull; &nbsp; A wait stats store for persisting wait stats information.<br/><br/>SQL Analytics in Azure Synapse manages these stores automatically and provides an unlimited number of queries storied over the last seven days at no additional charge. Enabling Query Store is as simple as running an ALTER DATABASE T-SQL statement: <br/>sql -ALTER DATABASE [DatabaseName] SET QUERY_STORE = ON;-For more information on Query Store, see the article, [Monitoring performance by using the Query Store](/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store), and the Query Store DMVs, such as [sys.query_store_query](/sql/relational-databases/system-catalog-views/sys-query-store-query-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true). Here is the [blog post](https://azure.microsoft.com/blog/automatic-performance-monitoring-in-azure-sql-data-warehouse-with-query-store/) announcing the release. For more information on historical query analysis, see [Historical query storage and analysis in Azure Synapse Analytics](../sql/query-history-storage-analysis.md).|
+|**Automatic Performance Monitoring (Preview)**|[Query Store](/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) is now available in Preview in SQL Analytics in Azure Synapse. Query Store is designed to help you with query performance troubleshooting by tracking queries, query plans, runtime statistics, and query history to help you monitor the activity and performance of your data warehouse. Query Store is a set of internal stores and Dynamic Management Views (DMVs) that allow you to:<br/><br/>&bull; &nbsp; Identify and tune top resource consuming queries<br/>&bull; &nbsp; Identify and improve unplanned workloads<br/>&bull; &nbsp; Evaluate query performance and impact to the plan by changes in statistics, indexes, or system size (DWU setting)<br/>&bull; &nbsp; See full query text for all queries executed<br/><br/>The Query Store contains three actual stores:<br/>&bull; &nbsp; A plan store for persisting the execution plan information<br/>&bull; &nbsp; A runtime stats store for persisting the execution statistics information<br/>&bull; &nbsp; A wait stats store for persisting wait stats information.<br/><br/>SQL Analytics in Azure Synapse manages these stores automatically and provides an unlimited number of queries storied over the last seven days at no additional charge. Enabling Query Store is as simple as running an ALTER DATABASE T-SQL statement: <br/>sql -ALTER DATABASE [DatabaseName] SET QUERY_STORE = ON;-For more information on Query Store, see the article, [Monitoring performance by using the Query Store](/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store), and the Query Store DMVs, such as [sys.query_store_query](/sql/relational-databases/system-catalog-views/sys-query-store-query-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true). For more information on historical query analysis, see [Historical query storage and analysis in Azure Synapse Analytics](../sql/query-history-storage-analysis.md).|
|**Lower Compute Tiers for SQL Analytics**|SQL Analytics in Azure Synapse now supports lower compute tiers. Customers can experience Azure Synapse's leading performance, flexibility, and security features starting with 100 cDWU ([data warehouse units](what-is-a-data-warehouse-unit-dwu-cdwu.md)) and scale to 30,000 cDWU in minutes. Starting mid-December 2018, customers can benefit from Gen2 performance and flexibility with lower compute tiers in [regions](gen2-migration-schedule.md#automated-schedule-and-region-availability-table), with the rest of the regions available during 2019.<br/><br/>By dropping the entry point for next-generation data warehousing, Microsoft opens the doors to value-driven customers who want to evaluate all the benefits of a secure, high-performance data warehouse without guessing which trial environment is best for them. Customers may start as low as 100 cDWU, down from the current 500 cDWU entry point. SQL Analytics continues to support pause and resume operations and goes beyond just the flexibility in compute. Gen2 also supports unlimited column-store storage capacity along with 2.5 times more memory per query, up to 128 concurrent queries and [adaptive caching](https://azure.microsoft.com/blog/adaptive-caching-powers-azure-sql-data-warehouse-performance-gains/) features. These features on average bring five times more performance compared to the same data warehouse Unit on Gen1 at the same price. Geo-redundant backups are standard for Gen2 with built-in guaranteed data protection. SQL Analytics in Azure Synapse is ready to scale when you are.| |**Columnstore Background Merge**|By default, Azure SQL Data stores data in columnar format, with micro-partitions called [rowgroups](sql-data-warehouse-memory-optimizations-for-columnstore-compression.md). Sometimes, due to memory constrains at index build or data load time, the rowgroups may be compressed with less than the optimal size of one million rows. Rowgroups may also become fragmented due to deletes. Small or fragmented rowgroups result in higher memory consumption, as well as inefficient query execution. With this release, the columnstore background maintenance task merges small compressed rowgroups to create larger rowgroups to better utilize memory and speed up query execution. | | |
synapse-analytics Active Directory Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/active-directory-authentication.md
You can also disable local authentication after a workspace is created through t
- The following members of Azure AD can be provisioned in Synapse SQL: - Native members: A member created in Azure AD in the managed domain or in a customer domain. For more information, see [Add your own domain name to Azure AD](../../active-directory/fundamentals/add-custom-domain.md).
- - Federated domain members: A member created in Azure AD with a federated domain. For more information, see [Microsoft Azure now supports federation with Windows Server Active Directory](https://azure.microsoft.com/blog/20../../windows-azure-now-supports-federation-with-windows-server-active-directory/).
+ - Federated domain members: A member created in Azure AD with a federated domain. For more information, see [Deploying Active Directory Federation Services in Azure](/windows-server/identity/ad-fs/deployment/how-to-connect-fed-azure-adfs).
- Imported members from other Azure ADs who are native or federated domain members. - Active Directory groups created as security groups.
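As a hedged illustration of provisioning one of the member types above, the sketch below creates a database user from an Azure AD security group with `CREATE USER ... FROM EXTERNAL PROVIDER`, driven from Azure PowerShell. The server, database, and group names are placeholders, and a recent `SqlServer` module (for `-AccessToken`) plus `Az.Accounts` are assumed.

```azurepowershell-interactive
# Minimal sketch with placeholder names; assumes the signed-in identity is an
# Azure AD admin on the Synapse SQL endpoint and that Get-AzAccessToken returns
# a plain-text token.
$server   = "contoso-synapse-ondemand.sql.azuresynapse.net"
$database = "demo"
$token    = (Get-AzAccessToken -ResourceUrl "https://database.windows.net").Token

# Provision an Azure AD security group (or a native or federated member) as a database user.
$query = "CREATE USER [DataAnalystsGroup] FROM EXTERNAL PROVIDER;"

Invoke-Sqlcmd -ServerInstance $server -Database $database -AccessToken $token -Query $query
```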
synapse-analytics Create Use Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/create-use-views.md
ORDER BY
[population] DESC; ```
+When you query the view, you may encounter errors or unexpected results. This typically means that the view references columns or objects that were modified or no longer exist. You need to manually adjust the view definition to align with the underlying schema changes.
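One way to realign a view after the underlying files change is to redefine it with an explicit column list. The following is a minimal sketch rather than the article's sample: the storage path, database, and column names are placeholders, and a recent `SqlServer` module (for `-AccessToken`) plus `Az.Accounts` are assumed.

```azurepowershell-interactive
# Minimal sketch with placeholder names; CREATE OR ALTER is assumed to be
# available, otherwise drop and re-create the view.
$server   = "contoso-synapse-ondemand.sql.azuresynapse.net"
$database = "demo"
$token    = (Get-AzAccessToken -ResourceUrl "https://database.windows.net").Token

# Redefine the view with an explicit schema that matches the current files.
$viewDefinition = @"
CREATE OR ALTER VIEW dbo.populationView AS
SELECT *
FROM OPENROWSET(
    BULK 'https://contosostorage.blob.core.windows.net/csv/population/*.csv',
    FORMAT = 'CSV', PARSER_VERSION = '2.0', HEADER_ROW = TRUE
) WITH (
    [country_code] VARCHAR(5),
    [country_name] VARCHAR(100),
    [year]         SMALLINT,
    [population]   BIGINT
) AS [rows];
"@

Invoke-Sqlcmd -ServerInstance $server -Database $database -AccessToken $token -Query $viewDefinition
```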
+ ## Next steps For information on how to query different file types, refer to the [Query single CSV file](query-single-csv-file.md), [Query Parquet files](query-parquet-files.md), and [Query JSON files](query-json-files.md) articles.
synapse-analytics Query Cosmos Db Analytical Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-cosmos-db-analytical-store.md
FROM OPENROWSET(
Do not use `OPENROWSET` without explicitly defined schema because it might impact your performance. Make sure that you use the smallest possible sizes for your columns (for example VARCHAR(100) instead of default VARCHAR(8000)). You should use some UTF-8 collation as default database collation or set it as explicit column collation to avoid [UTF-8 conversion issue](../troubleshoot/reading-utf8-text.md). Collation `Latin1_General_100_BIN2_UTF8` provides best performance when you filter data using some string columns.
+When you query the view, you may encounter errors or unexpected results. This typically means that the view references columns or objects that were modified or no longer exist. You need to manually adjust the view definition to align with the underlying schema changes. Keep in mind that this can happen both when the view uses automatic schema inference and when the schema is explicitly specified.
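The guidance above about an explicit schema and a UTF-8 collation can be sketched as follows; the account, database, container, credential, and column names are placeholders, and a recent `SqlServer` module (for `-AccessToken`) plus `Az.Accounts` are assumed.

```azurepowershell-interactive
# Minimal sketch with placeholder names; SERVER_CREDENTIAL must already exist
# and grant access to the Azure Cosmos DB analytical store.
$server   = "contoso-synapse-ondemand.sql.azuresynapse.net"
$database = "demo"
$token    = (Get-AzAccessToken -ResourceUrl "https://database.windows.net").Token

# Explicit WITH schema: small column sizes and a UTF-8 collation on string columns.
$query = @"
SELECT TOP 10 *
FROM OPENROWSET(
    PROVIDER = 'CosmosDB',
    CONNECTION = 'Account=contoso-cosmos;Database=covid',
    OBJECT = 'Ecdc',
    SERVER_CREDENTIAL = 'ContosoCosmosCred'
) WITH (
    [date_rep] DATE,
    [cases]    INT,
    [geo_id]   VARCHAR(6) COLLATE Latin1_General_100_BIN2_UTF8
) AS [rows];
"@

Invoke-Sqlcmd -ServerInstance $server -Database $database -AccessToken $token -Query $query
```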
+ ## Query nested objects With Azure Cosmos DB, you can represent more complex data models by composing them as nested objects or arrays. The autosync capability of Azure Synapse Link for Azure Cosmos DB manages the schema representation in the analytical store out of the box, which includes handling nested data types that allow for rich querying from the serverless SQL pool.
synapse-analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new.md
The following table lists the features of Azure Synapse Analytics that are curre
| **Reject options for delimited text files** | [Reject options for CREATE EXTERNAL TABLE on delimited files](/sql/t-sql/statements/create-external-table-transact-sql?view=azure-sqldw-latest&preserve-view=true#reject-options-1) is in preview. | | **Spark Advisor for Azure Synapse Notebook** | The [Spark Advisor for Azure Synapse Notebook](monitoring/apache-spark-advisor.md) analyzes code run by Spark and displays real-time advice for Notebooks. The Spark advisor offers recommendations for code optimization based on built-in common patterns, performs error analysis, and locates the root cause of failures.| | **Time-To-Live in managed virtual network (VNet)** | Reserve compute for the time-to-live (TTL) in managed virtual network TTL period, saving time and improving efficiency. For more information on this preview, see [Announcing public preview of Time-To-Live (TTL) in managed virtual network](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/announcing-public-preview-of-time-to-live-ttl-in-managed-virtual/ba-p/3552879).|
-| **User-Assigned managed identities** | Now you can use user-assigned managed identities in linked services for authentication in Synapse Pipelines and Dataflows.To learn more, see [Credentials in Azure Data Factory and Azure Synapse](../data-factory/credentials.md?context=%2Fazure%2Fsynapse-analytics%2Fcontext%2Fcontext&tabs=data-factory).|
+| **User-Assigned managed identities** | Now you can use user-assigned managed identities in linked services for authentication in Synapse Pipelines and Dataflows. To learn more, see [Credentials in Azure Data Factory and Azure Synapse](../data-factory/credentials.md?context=%2Fazure%2Fsynapse-analytics%2Fcontext%2Fcontext&tabs=data-factory).|
## Generally available features
update-center Assessment Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/assessment-options.md
Title: Assessment options in update management center (preview). description: The article describes the assessment options available in Update management center (preview). Previously updated : 04/21/2022 Last updated : 05/23/2023
update-center Deploy Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/deploy-updates.md
Title: Deploy updates and track results in update management center (preview). description: The article details how to use update management center (preview) in the Azure portal to deploy updates and view results for supported machines. Previously updated : 12/27/2022 Last updated : 05/31/2023
update-center Manage Update Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-update-settings.md
description: The article describes how to manage the update settings for your Wi
Previously updated : 01/30/2023 Last updated : 05/30/2023
update-center Manage Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-workbooks.md
description: This article describes how to create and manage workbooks for VM in
Previously updated : 01/16/2023 Last updated : 05/23/2023
update-center Scheduled Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/scheduled-patching.md
Title: Scheduling recurring updates in Update management center (preview) description: The article details how to use update management center (preview) in Azure to set update schedules that install recurring updates on your machines. Previously updated : 05/02/2023 Last updated : 05/30/2023
update-center Updates Maintenance Schedules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/updates-maintenance-schedules.md
Title: Updates and maintenance in update management center (preview). description: The article describes the updates and maintenance options available in Update management center (preview). Previously updated : 05/22/2023 Last updated : 05/23/2023
update-center View Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/view-updates.md
Title: Check update compliance in Update management center (preview) description: The article details how to use Azure Update management center (preview) in the Azure portal to assess update compliance for supported machines. Previously updated : 04/21/2022 Last updated : 05/31/2023
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
-This article details how to check the status of available updates on a single VM or multiple machines using update management center (preview).
+This article details how to check the status of available updates on a single VM or multiple VMs using update management center (preview).
## Check updates on single VM
This article details how to check the status of available updates on a single VM
1. Select your virtual machine and the **virtual machines | Updates** page opens. 1. Under **Operations**, select **Updates**.
-1. In **Updates**, select **Go to Updates using Update Center**.
+1. In **Updates**, select **Go to Updates using Update Management Center**.
:::image type="content" source="./media/view-updates/resources-check-updates.png" alt-text="Screenshot showing selection of updates from Home page.":::
-1. In **Updates (Preview)**, select **Assess updates**, in **Trigger assess now**, select **OK**.
+1. In **Updates (Preview)**, select **Check for updates**. In **Trigger assess now**, select **OK**.
An assessment is performed. A notification first appears that the *Assessment is in progress*; after a successful assessment, you'll see *Assessment successful*. Otherwise, you'll see the notification *Assessment Failed*.
To check the updates on your machines at scale, follow these steps:
## Next steps
-* Learn about deploying updates to your machines to maintain security compliance by reading [deploy updates](deploy-updates.md).
-* To view update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md).
+* Learn about deploying updates on your machines to maintain security compliance by reading [deploy updates](deploy-updates.md).
+* To view the update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md).
* To troubleshoot issues, see [Troubleshoot](troubleshoot.md) Azure Update management center (preview).
virtual-desktop Private Link Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/private-link-setup.md
Title: Set up Private Link for Azure Virtual Desktop preview - Azure
description: How to set up Private Link for Azure Virtual Desktop (preview). Previously updated : 05/10/2023 Last updated : 06/15/2023
To control public traffic:
- If you don't select the check box, Azure Virtual Desktop session hosts can only talk to the Azure Virtual Desktop service over private endpoint connections.
+>[!IMPORTANT]
+>Disabling the **Allow session host access from public network** setting won't affect existing sessions. You must restart the session host VM for the change to take effect on the session host network settings.
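A quick way to apply the change across a host pool is to restart the session host VMs with Azure PowerShell; in this minimal sketch the resource group and naming pattern are placeholders.

```azurepowershell-interactive
# Minimal sketch with placeholder names: restart every session host VM whose
# name matches a placeholder naming pattern so the updated public network
# access setting takes effect.
$resourceGroup = "rg-avd-sessionhosts"

$sessionHosts = Get-AzVM -ResourceGroupName $resourceGroup | Where-Object { $_.Name -like "avd-sh-*" }
foreach ($vm in $sessionHosts) {
    Restart-AzVM -ResourceGroupName $resourceGroup -Name $vm.Name
}
```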
+ ## Network security groups Follow the directions in [Tutorial: Filter network traffic with a network security group using the Azure portal](../virtual-network/tutorial-filter-network-traffic.md) to set up a network security group (NSG). You can use this NSG to block the **WindowsVirtualDesktop** service tag. If you block this service tag, all service traffic will use private routes only.
virtual-machines Capture Image Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capture-image-portal.md
For images stored in an Azure Compute Gallery (formerly known as Shared Image Ga
> [!IMPORTANT] > Once you mark a VM as `generalized` in Azure, you cannot restart the VM. Legacy **managed images** are automatically marked as generalized.
+> > When capturing an image of a virtual machine in Azure, the virtual machine will be temporarily stopped to ensure data consistency and prevent any potential issues during the image creation. This is because capturing an image requires a point-in-time snapshot of the virtual machine's disk.
+> To avoid disruptions in a production environment, it's recommended that you schedule the image capture process during a maintenance window or at a time when the temporary downtime won't impact critical services.
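If you prefer to control the stop yourself during that maintenance window, a hedged PowerShell sketch such as the following deallocates the VM and marks it as generalized before you capture the image; the resource group and VM names are placeholders, and remember that a generalized VM can't be restarted afterward.

```azurepowershell-interactive
# Minimal sketch with placeholder names. Run during a maintenance window:
# the VM is deallocated and then marked as generalized, after which it
# can no longer be restarted.
$resourceGroup = "myResourceGroup"
$vmName        = "myVM"

Stop-AzVM -ResourceGroupName $resourceGroup -Name $vmName -Force
Set-AzVM -ResourceGroupName $resourceGroup -Name $vmName -Generalized
```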
## Capture a VM in the portal
virtual-machines Update Linux Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/update-linux-agent.md
To update your [Azure Linux Agent](https://github.com/Azure/WALinuxAgent) on a L
- A running Linux VM in Azure. - A connection to that Linux VM using SSH.
-You should always check for a package in the Linux distro repository first. It is possible the package available may not be the latest version, however, enabling autoupdate will ensure the Linux Agent will always get the latest update. Should you have issues installing from the package managers, you should seek support from the distro vendor.
+You should always check for a package in the Linux distro repository first. The available package may not be the latest version; however, enabling autoupdate ensures the Linux Agent always gets the latest update. Should you have issues installing from the package managers, seek support from the distro vendor.
> [!NOTE] > For more information, see [Endorsed Linux distributions on Azure](../linux/endorsed-distros.md)
sudo systemctl status waagent
Typically this is all you need, but if for some reason you need to install it from https://github.com directly, use the following steps. ## Update the Linux Agent when no agent package exists for distribution-
+<!--
Install wget, there are some distros that don't install it by default, such as Red Hat, CentOS, and Oracle Linux versions 6.4 and 6.5. ### 1. Download the latest version
sudo waagent -version
``` You'll see that the Azure Linux Agent version has been updated to the new version.-
-For more information regarding the Azure Linux Agent, see [Azure Linux Agent README](https://github.com/Azure/WALinuxAgent).
+-->
+For more information regarding updating the Azure Linux Agent when no package exists, see [Azure Linux Agent README](https://github.com/Azure/WALinuxAgent).
virtual-machines Image Builder Prefetch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-builder-prefetch.md
+
+ Title: VM Boot Optimization for Azure Compute Gallery Images with Azure VM Image Builder
+description: Optimize VM Boot and Provisioning time with Azure VM Image Builder
++ Last updated : 06/07/2023 +++
+
+
+
+
+# VM optimization for gallery images with Azure VM Image Builder
+
+ **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Virtual Machine Scale Sets
+
+In this article, you learn how to use Azure VM Image Builder to optimize your Azure Compute Gallery (ACG) images, managed images, or VHDs to improve the creation time for your VMs.
+
+## Azure VM Optimization
+Azure VM optimization improves virtual machine creation time by updating the gallery image to optimize the image for a faster boot time.
+
+## Image types supported
+
+Optimization for the following images is supported:
+
+| Features | Details |
+|||
+|OS Type| Linux, Windows |
+| Partition | MBR/GPT |
+| Hyper-V | Gen1/Gen2 |
+| OS State | Generalized |
+
+The following types of images aren't supported:
+
+* Images with size greater than 2 TB
+* ARM64 images
+* Specialized images
++
+## Optimization in Azure VM Image Builder
+
+Optimization can be enabled while creating a VM image using the CLI.
+
+Customers can create an Azure VM Image Builder template using the CLI. The template contains details about the source, the type of customization, and the distribution.
+
+In your template, you will need to enable the additional fields for VM optimization. For more information on how to enable the VM optimization fields for your image builder template, see the [Optimize property](../virtual-machines/linux/image-builder-json.md#properties-optimize).
+
+> [!NOTE]
+> To enable VM optimization benefits, you must be using Azure Image Builder API Version `2022-07-01` or later.
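As a hedged sketch of what the optimize block can look like when you submit a template against that API version yourself, the following PowerShell calls the Image Builder REST API through `Invoke-AzRestMethod`; the subscription, resource group, identity, and gallery IDs are placeholders rather than values from this article.

```azurepowershell-interactive
# Minimal sketch with placeholder IDs; assumes a user-assigned identity and an
# Azure Compute Gallery image definition already exist.
$subscriptionId = "00000000-0000-0000-0000-000000000000"
$resourceGroup  = "myIbRg"
$templateName   = "myOptimizedTemplate"

$payload = @"
{
  "location": "eastus",
  "identity": {
    "type": "UserAssigned",
    "userAssignedIdentities": {
      "/subscriptions/$subscriptionId/resourceGroups/$resourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIbIdentity": {}
    }
  },
  "properties": {
    "source": {
      "type": "PlatformImage",
      "publisher": "Canonical",
      "offer": "0001-com-ubuntu-server-jammy",
      "sku": "22_04-lts",
      "version": "latest"
    },
    "optimize": {
      "vmboot": { "state": "Enabled" }
    },
    "distribute": [
      {
        "type": "SharedImage",
        "galleryImageId": "/subscriptions/$subscriptionId/resourceGroups/$resourceGroup/providers/Microsoft.Compute/galleries/myGallery/images/myImageDef",
        "runOutputName": "optimizedImage",
        "replicationRegions": [ "eastus" ]
      }
    ]
  }
}
"@

Invoke-AzRestMethod -Method PUT -Payload $payload -Path "/subscriptions/$subscriptionId/resourceGroups/$resourceGroup/providers/Microsoft.VirtualMachineImages/imageTemplates/${templateName}?api-version=2022-07-01"
```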
+
+
+
+## FAQs
+
+
+
+### Can VM optimization be used without Azure VM Image Builder customization?
+
+
+
+Yes, customers can opt for VM optimization only, without using the Azure VM Image Builder customization feature. Simply enable the optimization flag and leave the customization field empty.
+
+
+
+### Can an existing ACG image version be optimized?
+
+No, this optimization feature won't update an existing Azure Compute Gallery image version. However, optimization can be enabled when you create a new version of an existing image.
+
+
+
+### How long does it take to generate an optimized image?
+
+
+
+ The following latencies have been observed at various percentiles:
+
+| OS | Size | P50 | P95 | Average |
+| | | | | |
+| Linux | 30 GB VHD | 20 mins | 21 mins | 20 mins |
+| Windows | 127 GB VHD | 34 mins | 35 mins | 33 mins |
+
+
+
+This is the end-to-end duration observed. Note that image generation duration varies based on factors such as OS type, VHD size, and OS state.
+
+
+
+### Is OS image copied out of customer subscription for optimization?
+
+Yes, the OS VHD is copied from the customer subscription to an Azure subscription in the same geographic location for optimization. Once optimization finishes or times out, Azure internally deletes all copied OS VHDs.
+
+### What are the performance improvements observed for VM boot optimization?
+
+Enabling the VM boot optimization feature may not always result in a noticeable performance improvement, because the outcome depends on several factors, such as whether the source image is already optimized, the OS type, and the customization applied. However, to ensure the best VM boot performance, it's recommended that you enable this feature.
+
+
+
+## Next steps
+Learn more about [Azure Compute Gallery](../virtual-machines/azure-compute-gallery.md).
virtual-machines Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-version.md
tenantID="<tenant ID for the source image>"
subID="<subscription ID where the image will be creted>" sourceImageID="<resource ID of the source image>"
+# Login to the subscription where the new image will be created
+az login
# Log in to the tenant where the source image is available az login --tenant $tenantID
$targetSubID = "<subscription ID for the target>"
$sourceTenantID = "<tenant ID where for the source image>" $sourceImageID = "<resource ID of the source image>"
-#Login to the subscription where the new image will be created
+# Login to the subscription where the new image will be created
Connect-AzAccount -UseDeviceAuthentication -Subscription $targetSubID # Login to the tenant where the source image is published Connect-AzAccount -Tenant $sourceTenantID -UseDeviceAuthentication 
-# Set the context of the subscription where the new image will be created
+# Login to the subscription again where the new image will be created and set the context
+Connect-AzAccount -UseDeviceAuthentication -Subscription $targetSubID
Set-AzContext -Subscription $targetSubID  # Create the image version from another image version in a different tenant
virtual-machines Disk Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-overview.md
Previously updated : 04/18/2023 Last updated : 06/14/2023
Azure Disk Encryption is supported on a subset of the [Azure-endorsed Linux dist
Linux server distributions that are not endorsed by Azure do not support Azure Disk Encryption; of those that are endorsed, only the following distributions and versions support Azure Disk Encryption: | Publisher | Offer | SKU | URN | Volume type supported for encryption |
-| | | | |
+| | | | | |
| Canonical | Ubuntu | 22.04-LTS | Canonical:0001-com-ubuntu-server-focal:22_04-lts:latest | OS and data disk | | Canonical | Ubuntu | 22.04-LTS Gen2 | Canonical:0001-com-ubuntu-server-focal:22_04-lts-gen2:latest | OS and data disk | | Canonical | Ubuntu | 20.04-LTS | Canonical:0001-com-ubuntu-server-focal:20_04-lts:latest | OS and data disk |
Linux server distributions that are not endorsed by Azure do not support Azure D
| Canonical | Ubuntu | 20.04-DAILY-LTS Gen2 |Canonical:0001-com-ubuntu-server-focal-daily:20_04-daily-lts-gen2:latest | OS and data disk | | Canonical | Ubuntu | 18.04-LTS | Canonical:UbuntuServer:18.04-LTS:latest | OS and data disk | | Canonical | Ubuntu 18.04 | 18.04-DAILY-LTS | Canonical:UbuntuServer:18.04-DAILY-LTS:latest | OS and data disk |
+| MicrosoftCBLMariner | cbl-mariner | cbl-mariner-2 | MicrosoftCBLMariner:cbl-mariner:cbl-mariner-2:latest\* | OS and data disk |
+| MicrosoftCBLMariner | cbl-mariner | cbl-mariner-2-gen2 | MicrosoftCBLMariner:cbl-mariner:cbl-mariner-2-gen2:latest* | OS and data disk |
+| OpenLogic | CentOS 8-LVM | 8-LVM | OpenLogic:CentOS-LVM:8-LVM:latest | OS and data disk |
+| OpenLogic | CentOS 8.4 | 8_4 | OpenLogic:CentOS:8_4:latest | OS and data disk |
+| OpenLogic | CentOS 8.3 | 8_3 | OpenLogic:CentOS:8_3:latest | OS and data disk |
+| OpenLogic | CentOS 8.2 | 8_2 | OpenLogic:CentOS:8_2:latest | OS and data disk |
+| OpenLogic | CentOS 8.1 | 8_1 | OpenLogic:CentOS:8_1:latest | OS and data disk |
+| OpenLogic | CentOS 7-LVM | 7-LVM | OpenLogic:CentOS-LVM:7-LVM:7.9.2021020400 | OS and data disk |
+| OpenLogic | CentOS 7.9 | 7_9 | OpenLogic:CentOS:7_9:latest | OS and data disk |
+| OpenLogic | CentOS 7.8 | 7_8 | OpenLogic:CentOS:7_8:latest | OS and data disk |
+| OpenLogic | CentOS 7.7 | 7.7 | OpenLogic:CentOS:7.7:latest | OS and data disk |
+| OpenLogic | CentOS 7.6 | 7.6 | OpenLogic:CentOS:7.6:latest | OS and data disk |
+| OpenLogic | CentOS 7.5 | 7.5 | OpenLogic:CentOS:7.5:latest | OS and data disk |
+| OpenLogic | CentOS 7.4 | 7.4 | OpenLogic:CentOS:7.4:latest | OS and data disk |
+| OpenLogic | CentOS 6.8 | 6.8 | OpenLogic:CentOS:6.8:latest | Data disk only |
| Oracle | Oracle Linux 8.6 | 8.6 | Oracle:Oracle-Linux:ol86-lvm:latest | OS and data disk (see note below) | | Oracle | Oracle Linux 8.6 Gen 2 | 8.6 | Oracle:Oracle-Linux:ol86-lvm-gen2:latest | OS and data disk (see note below) | | Oracle | Oracle Linux 8.5 | 8.5 | Oracle:Oracle-Linux:ol85-lvm:latest | OS and data disk (see note below) |
Linux server distributions that are not endorsed by Azure do not support Azure D
| RedHat | RHEL 7.4 | 7.4 | RedHat:RHEL:7.4:latest | OS and data disk (see note below) | | RedHat | RHEL 6.8 | 6.8 | RedHat:RHEL:6.8:latest | Data disk (see note below) | | RedHat | RHEL 6.7 | 6.7 | RedHat:RHEL:6.7:latest | Data disk (see note below) |
-| OpenLogic | CentOS 8-LVM | 8-LVM | OpenLogic:CentOS-LVM:8-LVM:latest | OS and data disk |
-| OpenLogic | CentOS 8.4 | 8_4 | OpenLogic:CentOS:8_4:latest | OS and data disk |
-| OpenLogic | CentOS 8.3 | 8_3 | OpenLogic:CentOS:8_3:latest | OS and data disk |
-| OpenLogic | CentOS 8.2 | 8_2 | OpenLogic:CentOS:8_2:latest | OS and data disk |
-| OpenLogic | CentOS 8.1 | 8_1 | OpenLogic:CentOS:8_1:latest | OS and data disk |
-| OpenLogic | CentOS 7-LVM | 7-LVM | OpenLogic:CentOS-LVM:7-LVM:7.9.2021020400 | OS and data disk |
-| OpenLogic | CentOS 7.9 | 7_9 | OpenLogic:CentOS:7_9:latest | OS and data disk |
-| OpenLogic | CentOS 7.8 | 7_8 | OpenLogic:CentOS:7_8:latest | OS and data disk |
-| OpenLogic | CentOS 7.7 | 7.7 | OpenLogic:CentOS:7.7:latest | OS and data disk |
-| OpenLogic | CentOS 7.6 | 7.6 | OpenLogic:CentOS:7.6:latest | OS and data disk |
-| OpenLogic | CentOS 7.5 | 7.5 | OpenLogic:CentOS:7.5:latest | OS and data disk |
-| OpenLogic | CentOS 7.4 | 7.4 | OpenLogic:CentOS:7.4:latest | OS and data disk |
-| OpenLogic | CentOS 6.8 | 6.8 | OpenLogic:CentOS:6.8:latest | Data disk only |
| SUSE | openSUSE 42.3 | 42.3 | SUSE:openSUSE-Leap:42.3:latest | Data disk only | | SUSE | SLES 12-SP4 | 12-SP4 | SUSE:SLES:12-SP4:latest | Data disk only | | SUSE | SLES HPC 12-SP3 | 12-SP3 | SUSE:SLES-HPC:12-SP3:latest | Data disk only |
+\* For image versions greater than or equal to May 2023.
+ > [!NOTE] > RHEL: > - The new Azure Disk Encryption implementation is supported for RHEL OS and data disk for RHEL7 Pay-As-You-Go images.
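To tie the table above to actual usage, here's a hedged Azure PowerShell sketch that enables Azure Disk Encryption on a VM built from one of the supported URNs; the resource group, VM, and key vault names are placeholders, and `-VolumeType` should match the "Volume type supported" column for your image.

```azurepowershell-interactive
# Minimal sketch with placeholder names; the key vault must already be enabled
# for disk encryption, and -VolumeType must match what the table allows
# (for example, "Data" for distributions listed as data disk only).
$resourceGroup = "myResourceGroup"
$vmName        = "myLinuxVM"
$keyVault      = Get-AzKeyVault -VaultName "myKeyVault" -ResourceGroupName $resourceGroup

Set-AzVMDiskEncryptionExtension -ResourceGroupName $resourceGroup -VMName $vmName `
    -DiskEncryptionKeyVaultUrl $keyVault.VaultUri `
    -DiskEncryptionKeyVaultId $keyVault.ResourceId `
    -VolumeType "All" `
    -SkipVmBackup
```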
virtual-machines Image Builder Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-json.md
The basic format is:
"buildTimeoutInMinutes": <minutes>, "customize": [], "distribute": [],
+ "optimize": [],
"source": {}, "stagingResourceGroup": "/subscriptions/<subscriptionID>/resourceGroups/<stagingResourceGroupName>", "validate": {},
VHD distribute properties:
**uri** - Optional Azure Storage URI for the distributed VHD blob. Omit to use the default (empty string) in which case VHD would be published to the storage account in the staging resource group.
+## Properties: optimize
+
+The `optimize` property can be enabled while creating a VM image and allows VM optimization to improve image creation time.
+
+# [JSON](#tab/json)
+
+```json
+"optimize": {
+
+ "vmboot": {
+
+ "state": "Enabled"
+
+ }
+
+ }
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+optimize: {
+ vmboot: {
+ state: 'Enabled'
+ }
+ }
+```
+++ ## Properties: source The `source` section contains information about the source image that will be used by Image Builder. Azure Image Builder only supports generalized images as source images, specialized images aren't supported at this time.
virtual-machines Vm Generalized Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-generalized-image-version.md
az login --tenant $tenant1
az account get-access-token az login --tenant $tenant2 az account get-access-token-
+az login --tenant $tenant1
+az account get-access-token
```
$tenant1 = "<Tenant 1 ID>"
$tenant2 = "<Tenant 2 ID>" Connect-AzAccount -Tenant "<Tenant 1 ID>" -UseDeviceAuthentication Connect-AzAccount -Tenant "<Tenant 2 ID>" -UseDeviceAuthentication
+Connect-AzAccount -Tenant "<Tenant 1 ID>" -UseDeviceAuthentication
```
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md
You can use service tags to achieve network isolation and protect your Azure res
![Network isolation of Azure services using service tags](./media/service-tags-overview/service_tags.png) ## Available service tags+ The following table includes all the service tags available for use in [network security group](./network-security-groups-overview.md#security-rules) rules. The columns indicate whether the tag:
The classic deployment model (before Azure Resource Manager) supports a small su
| **Internet** | INTERNET | | **VirtualNetwork** | VIRTUAL_NETWORK |
+### Tags unsupported for user defined routes (UDR)
+
+The following tags are currently unsupported for use with user defined routes (UDR). A brief sketch of a route that uses a supported tag follows this list.
+
+* AzurePlatformDNS
+
+* AzurePlatformIMDS
+
+* AzurePlatformLKM
+
+* VirtualNetwork
+
+* AzureLoadBalancer
+
+* Internet
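Tags that aren't in this list can be used as the address prefix of a route. As a minimal sketch (resource names and the next-hop IP are placeholders), the following PowerShell creates a route table with a route that sends Storage-bound traffic to a network virtual appliance:

```azurepowershell-interactive
# Minimal sketch with placeholder names: use a supported service tag ("Storage")
# as the address prefix of a user defined route.
$resourceGroup = "myResourceGroup"
$location      = "eastus"

$routeTable = New-AzRouteTable -Name "myRouteTable" -ResourceGroupName $resourceGroup -Location $location

Add-AzRouteConfig -Name "ToStorageViaNva" `
    -RouteTable $routeTable `
    -AddressPrefix "Storage" `
    -NextHopType "VirtualAppliance" `
    -NextHopIpAddress "10.0.100.4" |
    Set-AzRouteTable
```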
## Service tags on-premises + You can obtain the current service tag and range information to include as part of your on-premises firewall configurations. This information is the current point-in-time list of the IP ranges that correspond to each service tag. You can obtain the information programmatically or via a JSON file download, as described in the following sections. ### Use the Service Tag Discovery API+ You can programmatically retrieve the current list of service tags together with IP address range details: - [REST](/rest/api/virtualnetwork/servicetags/list)+ - [Azure PowerShell](/powershell/module/az.network/Get-AzNetworkServiceTag)+ - [Azure CLI](/cli/azure/network#az-network-list-service-tags) For example, to retrieve all the prefixes for the Storage Service Tag, you can use the following PowerShell cmdlets:
$storage.Properties.AddressPrefixes
> - You must be authenticated and have a role with read permissions for your current subscription. ### Discover service tags by using downloadable JSON files + You can download JSON files that contain the current list of service tags together with IP address range details. These lists are updated and published weekly. Locations for each cloud are: - [Azure Public](https://www.microsoft.com/download/details.aspx?id=56519)+ - [Azure US Government](https://www.microsoft.com/download/details.aspx?id=57063) + - [Azure China 21Vianet](https://www.microsoft.com/download/details.aspx?id=57062) + - [Azure Germany](https://www.microsoft.com/download/details.aspx?id=57064) The IP address ranges in these files are in CIDR notation. The following AzureCloud tags don't have regional names formatted according to the normal schema: + - AzureCloud.centralfrance (FranceCentral)+ - AzureCloud.southfrance (FranceSouth)+ - AzureCloud.germanywc (GermanyWestCentral)+ - AzureCloud.germanyn (GermanyNorth)+ - AzureCloud.norwaye (NorwayEast)+ - AzureCloud.norwayw (NorwayWest)+ - AzureCloud.switzerlandn (SwitzerlandNorth)+ - AzureCloud.switzerlandw (SwitzerlandWest)+ - AzureCloud.usstagee (EastUSSTG)+ - AzureCloud.usstagec (SouthCentralUSSTG) > [!TIP]
The following AzureCloud tags don't have regional names formatted according to t
> > - When new IP addresses are added to service tags, they won't be used in Azure for at least one week. This gives you time to update any systems that might need to track the IP addresses associated with service tags. - ## Next steps+ - Learn how to [create a network security group](tutorial-filter-network-traffic.md).
virtual-wan Howto Connect Vnet Hub Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/howto-connect-vnet-hub-powershell.md
Title: 'Connect a VNet to a Virtual WAN hub - PowerShell' description: Learn how to connect a VNet to a Virtual WAN hub using PowerShell.-+ Previously updated : 06/14/2023- Last updated : 06/15/2023+ # Connect a virtual network to a Virtual WAN hub - PowerShell
Before you create a connection, be aware of the following:
## Prerequisites * Verify that you have an Azure subscription. If you don't already have an Azure subscription, you can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details) or sign up for a [free account](https://azure.microsoft.com/pricing/free-trial).
-* This tutorial creates a NAT rule on a VPN gateway that will be associated with a VPN site connection. The steps assume that you have an existing Virtual WAN VPN gateway connection to two branches with overlapping address spaces.
+* The following steps assume that you have already created a [site-to-site Virtual WAN VPN gateway](site-to-site-powershell.md).
### Azure PowerShell
Before you create a connection, be aware of the following:
## Add a connection
-1. Declare the variables for the existing resources including the existing Virtual Network.
+1. Declare the variables for the existing resources, including the existing virtual network.
```azurepowershell-interactive
- $resourceGroup = Get-AzResourceGroup -ResourceGroupName "testRG"
- $virtualWan = Get-AzVirtualWan -ResourceGroupName "testRG" -Name "myVirtualWAN"
- $virtualHub = Get-AzVirtualHub -ResourceGroupName "testRG" -Name "westushub"
- $remoteVirtualNetwork = Get-AzVirtualNetwork -Name "MyVirtualNetwork" -ResourceGroupName "testRG"
+ $resourceGroup = Get-AzResourceGroup -ResourceGroupName "TestRG"
+ $virtualWan = Get-AzVirtualWan -ResourceGroupName "TestRG" -Name "TestVWAN1"
+ $virtualHub = Get-AzVirtualHub -ResourceGroupName "TestRG" -Name "Hub1"
+ $remoteVirtualNetwork = Get-AzVirtualNetwork -Name "VNet1" -ResourceGroupName "TestRG"
```
-1. You can create a connection between a new virtual network or an already existing virtual network to peer the Virtual Network to the Virtual Hub. To create the connection:
+1. Create a connection to peer the virtual network to the virtual hub.
```azurepowershell-interactive
- New-AzVirtualHubVnetConnection -ResourceGroupName "testRG" -VirtualHubName "westushub" -Name "testvnetconnection" -RemoteVirtualNetwork $remoteVirtualNetwork
+ New-AzVirtualHubVnetConnection -ResourceGroupName "TestRG" -VirtualHubName "Hub1" -Name "VNet1-connection" -RemoteVirtualNetwork $remoteVirtualNetwork
``` ## Next steps
virtual-wan Site To Site Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/site-to-site-powershell.md
Previously updated : 08/04/2022 Last updated : 06/15/2023 # Create a site-to-site connection to Azure Virtual WAN using PowerShell
-This article shows you how to use Virtual WAN to connect to your resources in Azure over an IPsec/IKE (IKEv1 and IKEv2) VPN connection via PowerShell. This type of connection requires a VPN device located on-premises that has an externally facing public IP address assigned to it. For more information about Virtual WAN, see the [Virtual WAN overview](virtual-wan-about.md).
+This article shows you how to use Virtual WAN to connect to your resources in Azure over an IPsec/IKE (IKEv1 and IKEv2) VPN connection via PowerShell. This type of connection requires a VPN device located on-premises that has an externally facing public IP address assigned to it. For more information about Virtual WAN, see the [Virtual WAN overview](virtual-wan-about.md). You can also create this configuration using the [Azure portal](virtual-wan-site-to-site-portal.md) instructions.
## Prerequisites
This article shows you how to use Virtual WAN to connect to your resources in Az
Before you can create a virtual wan, you have to create a resource group to host the virtual wan or use an existing resource group. Use one of the following examples.
-**New resource group** - This example creates a new resource group named **testRG** in the **West US** location.
+This example creates a new resource group named **TestRG** in the **East US** location. If you want to use an existing resource group instead, you can modify the `$resourceGroup = Get-AzResourceGroup -ResourceGroupName "NameofResourceGroup"` command, and then complete the steps in this exercise using your own values.
1. Create a resource group. ```azurepowershell-interactive
- New-AzResourceGroup -Location "West US" -Name "testRG"
+ New-AzResourceGroup -Location "East US" -Name "TestRG"
```
-1. Create the virtual wan.
+1. Create the virtual wan using the [New-AzVirtualWan](/powershell/module/az.network/new-azvirtualwan) cmdlet.
```azurepowershell-interactive
- $virtualWan = New-AzVirtualWan -ResourceGroupName testRG -Name myVirtualWAN -Location "West US"
- ```
-
-**Existing resource group** - Use the following steps if you want to create the virtual wan in an already existing resource group.
-
-1. Set the variables for the existing resource group.
-
- ```azurepowershell-interactive
- $resourceGroup = Get-AzResourceGroup -ResourceGroupName "testRG"
- ```
-
-2. Create the virtual wan.
-
- ```azurepowershell-interactive
- $virtualWan = New-AzVirtualWan -ResourceGroupName testRG -Name myVirtualWAN -Location "West US"
+ $virtualWan = New-AzVirtualWan -ResourceGroupName TestRG -Name TestVWAN1 -Location "East US"
``` ## <a name="hub"></a>Create the hub and configure hub settings
-A hub is a virtual network that can contain gateways for site-to-site, ExpressRoute, or point-to-site functionality. Create a virtual hub with [New-AzVirtualHub](/powershell/module/az.Network/New-AzVirtualHub). This example creates a default virtual hub named **westushub** with the specified address prefix and a location for the hub.
+A hub is a virtual network that can contain gateways for site-to-site, ExpressRoute, or point-to-site functionality. Create a virtual hub with [New-AzVirtualHub](/powershell/module/az.Network/New-AzVirtualHub). This example creates a default virtual hub named **Hub1** with the specified address prefix and a location for the hub.
```azurepowershell-interactive
-$virtualHub = New-AzVirtualHub -VirtualWan $virtualWan -ResourceGroupName "testRG" -Name "westushub" -AddressPrefix "10.11.0.0/24" -Location "westus"
+$virtualHub = New-AzVirtualHub -VirtualWan $virtualWan -ResourceGroupName "TestRG" -Name "Hub1" -AddressPrefix "10.1.0.0/16" -Location "westus"
``` ## <a name="gateway"></a>Create a site-to-site VPN gateway
-In this section, you create a site-to-site VPN gateway that will be in the same location as the referenced virtual hub. When you create the VPN gateway, you specify the scale units that you want. It takes about 30 minutes for the gateway to create.
+In this section, you create a site-to-site VPN gateway in the same location as the referenced virtual hub. When you create the VPN gateway, you specify the scale units that you want. It takes about 30 minutes to create the gateway.
-1. Create a VPN gateway.
+1. If you closed Azure Cloud Shell or your connection timed out, you may need to redeclare the `$virtualHub` variable.
```azurepowershell-interactive
- New-AzVpnGateway -ResourceGroupName "testRG" -Name "testvpngw" -VirtualHubId $virtualHub.Id -VpnGatewayScaleUnit 2
+ $virtualHub = Get-AzVirtualHub -ResourceGroupName "TestRG" -Name "Hub1"
+ ```
+
+1. Create a VPN gateway using the [New-AzVpnGateway](/powershell/module/az.network/new-azvpngateway) cmdlet.
+
+ ```azurepowershell-interactive
+ New-AzVpnGateway -ResourceGroupName "TestRG" -Name "vpngw1" -VirtualHubId $virtualHub.Id -VpnGatewayScaleUnit 2
``` 1. Once your VPN gateway is created, you can view it using the following example. ```azurepowershell-interactive
- Get-AzVpnGateway -ResourceGroupName "testRG" -Name "testvpngw"
+ Get-AzVpnGateway -ResourceGroupName "TestRG" -Name "vpngw1"
``` ## <a name="site"></a>Create a site and connections
In this section, you create sites that correspond to your physical locations and
1. Set the variable for the VPN gateway and for the IP address space that is located on your on-premises site. Traffic destined for this address space is routed to your local site. This is required when BGP isn't enabled for the site. ```azurepowershell-interactive
- $vpnGateway = Get-AzVpnGateway -ResourceGroupName "testRG" -Name "testvpngw"
+ $vpnGateway = Get-AzVpnGateway -ResourceGroupName "TestRG" -Name "vpngw1"
$vpnSiteAddressSpaces = New-Object string[] 2 $vpnSiteAddressSpaces[0] = "192.168.2.0/24" $vpnSiteAddressSpaces[1] = "192.168.3.0/24"
In this section, you create sites that correspond to your physical locations and
1. Create links to add information about the physical links at the branch including metadata about the link speed, link provider name, and the public IP address of the on-premises device. ```azurepowershell-interactive
- $vpnSiteLink1 = New-AzVpnSiteLink -Name "testVpnSiteLink1" -IpAddress "15.25.35.45" -LinkProviderName "SomeTelecomProvider" -LinkSpeedInMbps "10"
- $vpnSiteLink2 = New-AzVpnSiteLink -Name "testVpnSiteLink2" -IpAddress "15.25.35.55" -LinkProviderName "SomeTelecomProvider2" -LinkSpeedInMbps "100"
+ $vpnSiteLink1 = New-AzVpnSiteLink -Name "TestSite1Link1" -IpAddress "15.25.35.45" -LinkProviderName "SomeTelecomProvider" -LinkSpeedInMbps "10"
+ $vpnSiteLink2 = New-AzVpnSiteLink -Name "TestSite1Link2" -IpAddress "15.25.35.55" -LinkProviderName "SomeTelecomProvider2" -LinkSpeedInMbps "100"
``` 1. Create the VPN site, referencing the variables of the VPN site links you just created.
+ If you closed Azure Cloud Shell or your connection timed out, redeclare the virtual WAN variable:
+ ```azurepowershell-interactive
- $vpnSite = New-AzVpnSite -ResourceGroupName "testRG" -Name "testVpnSite" -Location "West US" -VirtualWan $virtualWan -AddressSpace $vpnSiteAddressSpaces -DeviceModel "SomeDevice" -DeviceVendor "SomeDeviceVendor" -VpnSiteLink @($vpnSiteLink1, $vpnSiteLink2)
+ $virtualWan = Get-AzVirtualWAN -ResourceGroupName "TestRG" -Name "TestVWAN1"
```
+
+ Create the VPN site using the [New-AzVpnSite](/powershell/module/az.network/new-azvpnsite) cmdlet.
-1. Create the site link connection. The connection is composed of 2 active-active tunnels from a branch/site to the scalable gateway.
+ ```azurepowershell-interactive
+ $vpnSite = New-AzVpnSite -ResourceGroupName "TestRG" -Name "TestSite1" -Location "westus" -VirtualWan $virtualWan -AddressSpace $vpnSiteAddressSpaces -DeviceModel "SomeDevice" -DeviceVendor "SomeDeviceVendor" -VpnSiteLink @($vpnSiteLink1, $vpnSiteLink2)
+ ```
+
+1. Create the site link connection. The connection is composed of two active-active tunnels from a branch/site to the scalable gateway.
```azurepowershell-interactive
- $vpnSiteLinkConnection1 = New-AzVpnSiteLinkConnection -Name "testLinkConnection1" -VpnSiteLink $vpnSite.VpnSiteLinks[0] -ConnectionBandwidth 100
+ $vpnSiteLinkConnection1 = New-AzVpnSiteLinkConnection -Name "TestLinkConnection1" -VpnSiteLink $vpnSite.VpnSiteLinks[0] -ConnectionBandwidth 100
$vpnSiteLinkConnection2 = New-AzVpnSiteLinkConnection -Name "TestLinkConnection2" -VpnSiteLink $vpnSite.VpnSiteLinks[1] -ConnectionBandwidth 10 ``` ## <a name="connectsites"></a>Connect the VPN site to a hub
-Connect your VPN site to the hub site-to-site VPN gateway.
+Connect your VPN site to the hub site-to-site VPN gateway using the [New-AzVpnConnection](/powershell/module/az.network/new-azvpnconnection) cmdlet.
-```azurepowershell-interactive
-New-AzVpnConnection -ResourceGroupName $vpnGateway.ResourceGroupName -ParentResourceName $vpnGateway.Name -Name "testConnection" -VpnSite $vpnSite -VpnSiteLinkConnection @($vpnSiteLinkConnection1, $vpnSiteLinkConnection2)
-```
+1. Before running the command, you may need to redeclare the following variables:
+
+ ```azurepowershell-interactive
+ $virtualWan = Get-AzVirtualWAN -ResourceGroupName "TestRG" -Name "TestVWAN1"
+ $vpnGateway = Get-AzVpnGateway -ResourceGroupName "TestRG" -Name "vpngw1"
+ $vpnSite = Get-AzVpnSite -ResourceGroupName "TestRG" -Name "TestSite1"
+ ```
+
+1. Connect the VPN site to the hub.
+
+ ```azurepowershell-interactive
+ New-AzVpnConnection -ResourceGroupName $vpnGateway.ResourceGroupName -ParentResourceName $vpnGateway.Name -Name "testConnection" -VpnSite $vpnSite -VpnSiteLinkConnection @($vpnSiteLinkConnection1, $vpnSiteLinkConnection2)
+ ```
+
+## Connect a VNet to your hub
+
+The next step is to connect the hub to the VNet. If you created a new resource group for this exercise, you typically won't already have a virtual network (VNet) in your resource group. The steps below help you create a VNet if you don't already have one. You can then create a connection between the hub and your VNet.
+
+### Create a virtual network
+
+You can use the following example values to create a VNet. Make sure to replace the example values with the values for your environment. For more information, see [Quickstart: Use Azure PowerShell to create a virtual network](../virtual-network/quick-create-powershell.md).
+
+1. Create a VNet.
+
+ ```azurepowershell-interactive
+ $vnet = @{
+ Name = 'VNet1'
+ ResourceGroupName = 'TestRG'
+ Location = 'eastus'
+ AddressPrefix = '10.21.0.0/16'
+ }
+ $virtualNetwork = New-AzVirtualNetwork @vnet
+ ```
+
+1. Specify subnet settings.
+
+ ```azurepowershell-interactive
+ $subnet = @{
+ Name = 'Subnet-1'
+ VirtualNetwork = $virtualNetwork
+ AddressPrefix = '10.21.0.0/24'
+ }
+ $subnetConfig = Add-AzVirtualNetworkSubnetConfig @subnet
+ ```
+
+1. Set the VNet.
+
+ ```azurepowershell-interactive
+ $virtualNetwork | Set-AzVirtualNetwork
+ ```
+
+### Connect a VNet to a hub
+
+Once you have a VNet, follow the steps in this article to connect your VNet to the VWAN hub: [Connect a VNet to a Virtual WAN hub](howto-connect-vnet-hub-powershell.md).
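For reference, the following is a minimal sketch of that connection step, reusing the **TestRG**, **Hub1**, and **VNet1** values from this exercise. The connection name is only an example; the linked article covers the full procedure and options.

```azurepowershell-interactive
# Get the VNet created earlier in this exercise.
$remoteVirtualNetwork = Get-AzVirtualNetwork -Name "VNet1" -ResourceGroupName "TestRG"

# Create the connection between the virtual hub and the VNet.
New-AzVirtualHubVnetConnection -ResourceGroupName "TestRG" -VirtualHubName "Hub1" -Name "VNet1-connection" -RemoteVirtualNetwork $remoteVirtualNetwork
```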
+
+## Configure VPN device
+
+To configure your on-premises VPN device, follow the steps in the [Site-to-site: Azure portal](virtual-wan-site-to-site-portal.md#device) article.
## <a name="cleanup"></a>Clean up resources When you no longer need the resources that you created, delete them. Some of the Virtual WAN resources must be deleted in a certain order due to dependencies. Deleting can take about 30 minutes to complete.
-1. Delete all gateway entities following the below order for the VPN gateway.
+Delete all gateway entities in the following order:
1. Declare the variables. ```azurepowershell-interactive
- $resourceGroup = Get-AzResourceGroup -ResourceGroupName "testRG"
- $virtualWan = Get-AzVirtualWan -ResourceGroupName "testRG" -Name "myVirtualWAN"
- $virtualHub = Get-AzVirtualHub -ResourceGroupName "testRG" -Name "westushub"
- $vpnGateway = Get-AzVpnGateway -ResourceGroupName "testRG" -Name "testvpngw"
+ $resourceGroup = Get-AzResourceGroup -ResourceGroupName "TestRG"
+ $virtualWan = Get-AzVirtualWan -ResourceGroupName "TestRG" -Name "TestVWAN1"
+ $virtualHub = Get-AzVirtualHub -ResourceGroupName "TestRG" -Name "Hub1"
+ $vpnGateway = Get-AzVpnGateway -ResourceGroupName "TestRG" -Name "vpngw1"
``` 1. Delete the VPN gateway connection to the VPN sites.
When you no longer need the resources that you created, delete them. Some of the
1. Delete the VPN gateway. Deleting a VPN gateway will also remove all VPN ExpressRoute connections associated with it. ```azurepowershell-interactive
- Remove-AzVpnGateway -ResourceGroupName "testRG" -Name "testvpngw"
+ Remove-AzVpnGateway -ResourceGroupName "TestRG" -Name "vpngw1"
```
-1. You can delete the entire resource group in order to delete all the remaining resources it contains, including the hubs, sites, and the virtual WAN.
+1. At this point, you can do one of two things:
+
+ * You can delete the entire resource group in order to delete all the remaining resources it contains, including the hubs, sites, and the virtual WAN.
+ * You can choose to delete each of the resources in the resource group.
+
+ **To delete the entire resource group:**
```azurepowershell-interactive
- Remove-AzResourceGroup -Name "testRG"
+ Remove-AzResourceGroup -Name "TestRG"
```
-1. Or, you can choose to delete each of the resources in the Resource Group.
+ **To delete each resource in the resource group:**
- Delete the VPN site.
+ * Delete the VPN site.
- ```azurepowershell-interactive
- Remove-AzVpnSite -ResourceGroupName "testRG" -Name "testVpnSite"
- ```
+ ```azurepowershell-interactive
+ Remove-AzVpnSite -ResourceGroupName "TestRG" -Name "TestSite1"
+ ```
- Delete the virtual hub.
+ * Delete the virtual hub.
- ```azurepowershell-interactive
- Remove-AzVirtualHub -ResourceGroupName "testRG" -Name "westushub"
- ```
+ ```azurepowershell-interactive
+ Remove-AzVirtualHub -ResourceGroupName "TestRG" -Name "Hub1"
+ ```
- Delete the virtual WAN.
+ * Delete the virtual WAN.
- ```azurepowershell-interactive
- Remove-AzVirtualWan -Name "MyVirtualWan" -ResourceGroupName "testRG"
- ```
+ ```azurepowershell-interactive
+ Remove-AzVirtualWan -Name "TestVWAN1" -ResourceGroupName "TestRG"
+ ```
## Next steps
virtual-wan Virtual Wan Site To Site Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-site-to-site-portal.md
This tutorial shows you how to use Virtual WAN to connect to your resources in Azure over an IPsec/IKE (IKEv1 and IKEv2) VPN connection. This type of connection requires a VPN device located on-premises that has an externally facing public IP address assigned to it. For more information about Virtual WAN, see the [Virtual WAN Overview](virtual-wan-about.md). + In this tutorial you learn how to: > [!div class="checklist"]
In this tutorial you learn how to:
> If you have many sites, you typically would use a [Virtual WAN partner](https://aka.ms/virtualwan) to create this configuration. However, you can create this configuration yourself if you are comfortable with networking and proficient at configuring your own VPN device. > - ## Prerequisites Verify that you've met the following criteria before beginning your configuration:
web-application-firewall Afds Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/afds-overview.md
WAF prevents malicious attacks close to the attack sources, before they enter yo
![Azure web application firewall](../media/overview/wafoverview.png)
+> [!NOTE]
+> For web workloads, we highly recommend utilizing [**Azure DDoS protection**](../../ddos-protection/ddos-protection-overview.md) and a [**web application firewall**](../overview.md) to safeguard against emerging DDoS attacks. Another option is to deploy [**Azure Front Door**](../../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [**protection against network-level DDoS attacks**](../../frontdoor/front-door-ddos.md).
+ Azure Front Door has [two tiers](../../frontdoor/standard-premium/overview.md): Front Door Standard and Front Door Premium. WAF is natively integrated with Front Door Premium with full capabilities. For Front Door Standard, only [custom rules](#custom-authored-rules) are supported.
+## Protection
+
+* Protect your web applications from web vulnerabilities and attacks without modification to back-end code.
+
+* Protect your web applications from malicious bots with the IP Reputation ruleset.
+
+* Protect your application against DDoS attacks. For more information, see [Application DDoS Protection](../shared/application-ddos-protection.md).
++ ## WAF policy and rules You can configure a [WAF policy](waf-front-door-create-portal.md) and associate that policy to one or more Front Door front-ends for protection. A WAF policy consists of two types of security rules:
web-application-firewall Ag Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/ag-overview.md
This section describes the core benefits that WAF on Application Gateway provide
* Protect multiple web applications at the same time. An instance of Application Gateway can host up to 40 websites that are protected by a web application firewall.
-* Create custom WAF policies for different sites behind the same WAF
+* Create custom WAF policies for different sites behind the same WAF.
-* Protect your web applications from malicious bots with the IP Reputation ruleset
+* Protect your web applications from malicious bots with the IP Reputation ruleset.
+
+* Protect your application against DDoS attacks. For more information, see [Application DDoS Protection](../shared/application-ddos-protection.md).
### Monitoring
web-application-firewall Waf Sensitive Data Protection Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/waf-sensitive-data-protection-configure.md
+
+ Title: How to mask sensitive data on Azure Web Application Firewall
+description: Learn how to mask sensitive data on Azure Web Application Firewall
++++ Last updated : 06/13/2023++
+# How to mask sensitive data on Azure Web Application Firewall
+
+The Web Application Firewall's (WAF's) Log Scrubbing tool helps you remove sensitive data from your WAF logs. It works by using a rules engine that allows you to build custom rules to identify specific portions of a request that contain sensitive data. Once identified, the tool scrubs that information from your logs and replaces it with _*******_.
+
+The following table shows examples of log scrubbing rules that can be used to protect your sensitive data:
+
+| Match Variable | Operator | Selector | What gets scrubbed |
+| | | | |
+| Request Header Names | Equals | X-Forwarded-For | REQUEST_HEADERS:x-forwarded-for.","data":"******" |
+| Request Cookie Names | Equals | cookie1 | "Matched Data: ****** found within REQUEST_COOKIES:cookie1: ******" |
+| Request Arg Names | Equals | arg1 | "requestUri":"\/?arg1=******" |
+| Request Post Arg Names | Equals | Post1 | "data":"Matched Data: ****** found within ARGS:post1: ******" |
+| Request JSON Arg Names | Equals | Jsonarg | "data":"Matched Data: ****** found within ARGS:jsonarg: ******" |
+| Request IP Address* | Equals Any | NULL | "clientIp":"******" |
+
+\* Request IP Address rules only support the *equals any* operator and scrub all instances of the requestor's IP address that appear in the WAF logs.
+
+For more information, see [What is Azure Web Application Firewall Sensitive Data Protection?](waf-sensitive-data-protection.md)
+
+## Enable Sensitive Data Protection
+
+Use the following information to enable and configure Sensitive Data Protection.
+
+#### [Portal](#tab/browser)
+
+To enable Sensitive Data Protection:
+
+1. Open an existing Application Gateway WAF policy.
+1. Under **Settings**, select **Sensitive data**.
+1. On the **Sensitive data** page, select **Enable log scrubbing**.
+
+To configure Log Scrubbing rules for Sensitive Data Protection:
+
+1. Under **Log scrubbing rules**, select a **Match variable**.
+1. Select an **Operator** (if applicable).
+1. Type a **Selector** (if applicable).
+1. Select **Save**.
+
+Repeat to add more rules.
+
+#### [PowerShell](#tab/powershell)
+
+Use the following Azure PowerShell commands to [create](/powershell/module/az.network/new-azapplicationgatewayfirewallpolicylogscrubbingrule) and [configure](/powershell/module/az.network/new-azapplicationgatewayfirewallpolicylogscrubbingconfiguration) Log Scrubbing rules for Sensitive Data Protection:
+
+```azurepowershell
+$logScrubbingRule1 = New-AzApplicationGatewayFirewallPolicyLogScrubbingRule `
+ -State <String> -MatchVariable <String> `
+ -SelectorMatchOperator <String> -Selector <String>
+
+$logScrubbingRuleConfig = New-AzApplicationGatewayFirewallPolicyLogScrubbingConfiguration `
+ -State <String> -ScrubbingRule $logScrubbingRule1
+```
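For illustration only, here's a hedged sketch of how the placeholders might be filled in to scrub the X-Forwarded-For header from the first row of the earlier table. The `RequestHeaderNames` match variable and `Equals` operator come from this article; the `Enabled` state value is an assumption, so verify the exact values against the linked cmdlet reference.

```azurepowershell
# Illustrative values only: scrub the X-Forwarded-For request header.
# "Enabled" is an assumed state value; confirm it in the cmdlet reference.
$logScrubbingRule1 = New-AzApplicationGatewayFirewallPolicyLogScrubbingRule `
    -State Enabled -MatchVariable RequestHeaderNames `
    -SelectorMatchOperator Equals -Selector "X-Forwarded-For"

$logScrubbingRuleConfig = New-AzApplicationGatewayFirewallPolicyLogScrubbingConfiguration `
    -State Enabled -ScrubbingRule $logScrubbingRule1

# $logScrubbingRuleConfig still needs to be attached to the WAF policy's
# policy settings; that step isn't shown here (see the cmdlet reference).
```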
+#### [CLI](#tab/cli)
+
+The Azure CLI commands to enable and configure Sensitive Data Protection are coming soon.
+++++
+## Verify Sensitive Data Protection
+
+To verify your Sensitive Data Protection rules, open the Application Gateway firewall log and search for _******_ in place of the sensitive fields.
+
+## Next steps
+
+- [Use Log Analytics to examine Application Gateway Web Application Firewall (WAF) logs](../ag/log-analytics.md)
web-application-firewall Waf Sensitive Data Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/waf-sensitive-data-protection.md
+
+ Title: Azure Web Application Firewall Sensitive Data Protection
+description: Learn about Azure Web Application Firewall Sensitive Data Protection.
++++ Last updated : 06/13/2023++
+# What is Azure Web Application Firewall Sensitive Data Protection?
+
+The Web Application Firewall's (WAF's) Log Scrubbing tool helps you remove sensitive data from your WAF logs. It works by using a rules engine that allows you to build custom rules to identify specific portions of a request that contain sensitive information. Once identified, the tool scrubs that information from your logs and replaces it with _*******_.
++
+## Default log behavior
+
+Normally, when a WAF rule is triggered, the WAF logs the details of the request in clear text. If the portion of the request triggering the WAF rule contains sensitive data (such as customer passwords or IP addresses), that sensitive data is viewable by anyone with access to the WAF logs. To protect customer data, you can set up Log Scrubbing rules targeting this sensitive data for protection.
+
+> [!IMPORTANT]
+> Selectors are case insensitive for the RequestHeaderNames match variable only. All other match variables are case sensitive.
+
+## Fields
+
+The following fields can be scrubbed from the logs:
+
+- IP address
+- Request header name
+- Request cookie name
+- Request args name
+- Post arg name
+- JSON arg name
+
+## Next steps
+
+- [How to mask sensitive data on Azure Web Application Firewall](waf-sensitive-data-protection-configure.md)
web-application-firewall Application Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/shared/application-ddos-protection.md
+
+ Title: Application DDoS protection
+
+description: This article explains how you can use Azure Web Application Firewall with Azure Front Door or Azure Application Gateway to protect your web applications against application layer DDoS attacks.
++++ Last updated : 06/16/2023++
+# Application (Layer 7) DDoS protection
+
+Azure WAF has several defense mechanisms that can help prevent distributed denial of service (DDoS) attacks. DDoS attacks can target the network layer (L3/L4) or the application layer (L7). Azure DDoS Protection defends customers against large network-layer volumetric attacks. Azure WAF, operating at layer 7, protects web applications against L7 DDoS attacks such as HTTP floods. These defenses can prevent attackers from reaching your application and affecting its availability and performance.
+
+## How can you protect your services?
+
+These attacks can be mitigated by adding a Web Application Firewall (WAF) or placing DDoS protection in front of the service to filter out bad requests. Azure offers WAF running at the network edge with Azure Front Door and in data centers with Application Gateway. These steps are a generalized list and need to be adjusted to fit your application's requirements.
+
+* Deploy [Azure Web Application Firewall (WAF)](../overview.md) with Azure Front Door Premium or Application Gateway WAF v2 SKU to protect against L7 application layer attacks.
+* Scale up your origin instance count so that there's sufficient spare capacity by following safe deployment guidelines.
+* Enable [Azure DDoS Protection](../../ddos-protection/ddos-protection-overview.md) on the origin public IPs to protect your public IPs against layer 3 (L3) and layer 4 (L4) DDoS attacks. Azure's DDoS offerings can automatically protect most sites from L3 and L4 volumetric attacks that send large numbers of packets towards a website. Azure also offers infrastructure-level protection to all sites hosted on Azure by default.
+
+## Azure WAF with Azure Front Door
+
+Azure WAF has many features that can be used to mitigate different types of attacks:
+
+* Use the bot protection managed rule set to protect against known bad bots. For more information, see [Configuring bot protection](../afds/waf-front-door-policy-configure-bot-protection.md).
+
+* Apply rate limiting to prevent IP addresses from calling your service too frequently. For more information, see [Rate limiting](../afds/waf-front-door-rate-limit.md).
+
+* Block IP addresses and ranges that you identify as malicious. For more information, see [IP restrictions](../afds/waf-front-door-configure-ip-restriction.md).
+
+* Block or redirect to a static web page any traffic from outside a defined geographic region, or within a defined region that doesn't fit the application traffic pattern. For more information, see [Geo-filtering](../afds/waf-front-door-geo-filtering.md).
+
+* Create [custom WAF rules](../afds/waf-front-door-custom-rules.md) to automatically block and rate limit HTTP or HTTPS attacks that have known signatures, such as a specific user-agent or a specific traffic pattern (headers, cookies, query string parameters, or a combination of multiple signatures). A hedged PowerShell sketch of such a rule follows this list.
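The following is a minimal sketch of a custom rule that blocks requests whose User-Agent header contains a hypothetical `evil-bot` signature. The rule name, priority, and signature are placeholder assumptions, and the sketch uses the general Az.FrontDoor custom rule cmdlets rather than steps from this article, so verify parameter values against the linked custom rules documentation.

```azurepowershell
# Match condition: User-Agent request header contains a known bad signature.
# "evil-bot" is a placeholder value for illustration.
$matchCondition = New-AzFrontDoorWafMatchConditionObject `
    -MatchVariable RequestHeader -Selector "User-Agent" `
    -OperatorProperty Contains -MatchValue "evil-bot"

# Custom rule that blocks any request matching the condition.
$blockBotRule = New-AzFrontDoorWafCustomRuleObject `
    -Name "BlockEvilBot" -RuleType MatchRule `
    -MatchCondition $matchCondition -Action Block -Priority 10

# The rule is then added to a Front Door WAF policy (not shown here).
```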
+
+Beyond WAF, Azure Front Door also offers default Azure Infrastructure DDoS protection to protect against L3/4 DDoS attacks. Enabling caching on Azure Front Door can help absorb sudden peak traffic volume at the edge and protect backend origins from attack as well.
+
+For more information on features and DDoS protection on Azure Front Door, see [DDoS protection on Azure Front Door](../../frontdoor/front-door-ddos.md).
+
+## Azure WAF with Azure Application Gateway
+
+We recommend using the Application Gateway WAF v2 SKU, which comes with the latest features, including L7 DDoS mitigation, to defend against L7 DDoS attacks.
+
+Application Gateway WAF SKUs can be used to mitigate many L7 DDoS attacks:
+
+* Set your Application Gateway to autoscale and don't enforce a maximum number of instances.
+
+* Use the bot protection managed rule set to protect against known bad bots. For more information, see [Configuring bot protection](../ag/bot-protection.md).
+
+* Block IP addresses and ranges that you identify as malicious. For more information, see examples at [Create and use v2 custom rules](../ag/create-custom-waf-rules.md).
+
+* Block or redirect to a static web page any traffic from outside a defined geographic region, or within a defined region that doesn't fit the application traffic pattern. For more information, see examples at [Create and use v2 custom rules](../ag/create-custom-waf-rules.md).
+
+* You can create [custom WAF rules](../ag/configure-waf-custom-rules.md) to automatically block and rate limit HTTP or HTTPS attacks that have known signatures, such as a specific user-agent or a specific traffic pattern (headers, cookies, query string parameters, or a combination of multiple signatures). A hedged PowerShell sketch of an IP-blocking custom rule follows this list.
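As an illustrative sketch only, an Application Gateway WAF v2 custom rule that blocks a malicious address range might be built as follows. The rule name, priority, and IP range are placeholder assumptions; confirm the exact syntax in the linked custom rules articles. The rule object then still has to be added to a WAF policy, which isn't shown here.

```azurepowershell
# Match on the client's remote address.
$matchVariable = New-AzApplicationGatewayFirewallMatchVariable -VariableName RemoteAddr

# Condition: the address falls inside a (placeholder) malicious range.
$condition = New-AzApplicationGatewayFirewallCondition `
    -MatchVariable $matchVariable -Operator IPMatch `
    -MatchValue "198.51.100.0/24" -NegationCondition $false

# Custom rule that blocks matching requests. Lower priority values run first.
$blockRule = New-AzApplicationGatewayFirewallCustomRule `
    -Name "BlockMaliciousRange" -Priority 10 `
    -RuleType MatchRule -MatchCondition $condition -Action Block
```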
+
+## Other considerations
+
+* Lock down access to public IPs on the origin and restrict inbound traffic to allow only traffic from Azure Front Door or Application Gateway to the origin. Refer to the guidance for Azure Front Door. Application Gateways are deployed in a virtual network; ensure it has no publicly exposed IPs.
+
+* Switch the WAF policy to prevention mode. Deploying the policy in detection mode only logs requests and doesn't block traffic. After verifying and testing your WAF policy with production traffic and fine-tuning it to reduce any false positives, switch the policy to prevention mode (block/defend mode).
+
+* Monitor traffic using Azure WAF logs for any anomalies. You can create custom rules to block offending traffic, such as suspected IPs sending an unusually high number of requests, unusual user-agent strings, or anomalous query string patterns.
+
+* You can bypass the WAF for known legitimate traffic by creating match custom rules with the *Allow* action to reduce false positives. These rules should be configured with a higher priority (lower numeric value) than other block and rate limit rules.
+
+* Depending on your traffic pattern, create a preventive rate limit rule (applies to Azure Front Door only). For example, you can configure a rate limit rule that doesn't allow any single *client IP address* to send more than XXX requests per window to your site. Azure Front Door supports two fixed windows for tracking requests: 1 and 5 minutes. We recommend the 5-minute window for better mitigation of HTTP flood attacks. For example, configure a rate limit rule that blocks any *source IP* exceeding 100 requests in a 5-minute window; a hedged sketch follows this list. This rule should be the lowest priority rule (priority is ordered with 1 being the highest priority), so that more specific rate limit rules or match rules can be created to match before it.
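The following is a minimal sketch of that 100-requests-per-5-minutes rule. The catch-all match condition, rule name, and priority value are placeholder assumptions rather than values from this article; tune them to your own traffic pattern before use.

```azurepowershell
# Catch-all style condition so the rate limit applies broadly to site traffic.
# (Placeholder: adjust the match variable and value to your traffic pattern.)
$rateLimitCondition = New-AzFrontDoorWafMatchConditionObject `
    -MatchVariable RequestUri -OperatorProperty Contains -MatchValue "/"

# Rate limit rule: block a source IP that exceeds 100 requests per 5 minutes.
# Give it the lowest priority (highest numeric value) among your custom rules.
$rateLimitRule = New-AzFrontDoorWafCustomRuleObject `
    -Name "RateLimit100Per5Min" -RuleType RateLimitRule `
    -MatchCondition $rateLimitCondition -Action Block `
    -RateLimitDurationInMinutes 5 -RateLimitThreshold 100 -Priority 100
```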
+
+The following Log Analytics query can be helpful in determining the threshold you should use for the above rule.
+
+```
+AzureDiagnostics
+| where Category == "FrontdoorAccessLog"
+| summarize count() by bin(TimeGenerated, 5m), clientIp_s
+| summarize max(count_), percentile(count_, 99), percentile(count_, 95)
+```
+
+Managed rules, while not directly targeted at defending against DDoS attacks, provide protection against other common attacks. See [Managed rules (Azure Front Door)](../afds/waf-front-door-drs.md) or [Managed rules (Application Gateway)](../ag/application-gateway-crs-rulegroups-rules.md) to learn more about the attack types these rules can help protect against.
+
+## WAF log analysis
+
+You can analyze WAF logs in Log Analytics with the following queries.
+
+### Azure Front Door
+
+```
+AzureDiagnostics
+| where Category == "FrontdoorWebApplicationFirewallLog"
+```
+
+For more information, see [Azure WAF with Azure Front Door](../afds/waf-front-door-monitor.md).
+
+### Azure Application Gateway
+
+```
+AzureDiagnostics
+| where Category == "ApplicationGatewayFirewallLog"
+```
+
+For more information, see [Azure WAF with Azure Application Gateway](../ag/web-application-firewall-logs.md).
+
+## Next steps
+
+* [Create a WAF policy](../afds/waf-front-door-create-portal.md) for Azure Front Door.
+* [Create an Application Gateway](../ag/application-gateway-web-application-firewall-portal.md) with a Web Application Firewall.
+
+* Learn how Azure Front Door can help [protect against DDoS attacks](../../frontdoor/front-door-ddos.md).
+* Protect your application gateway with [Azure DDoS Network Protection](../../application-gateway/tutorial-protect-application-gateway-ddos.md).
+* Learn more about the [Azure DDoS Protection](../../ddos-protection/ddos-protection-overview.md).