Updates from: 07/28/2021 03:04:40
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/azure-monitor.md
Previously updated : 01/29/2021
Last updated : 07/19/2021

# Monitor Azure AD B2C with Azure Monitor
Use Azure Monitor to route Azure Active Directory B2C (Azure AD B2C) sign-in and auditing logs to different monitoring solutions.
You can route log events to:
-* An Azure [storage account](../storage/blobs/storage-blobs-introduction.md).
-* A [Log Analytics workspace](../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace) (to analyze data, create dashboards, and alert on specific events).
-* An Azure [event hub](../event-hubs/event-hubs-about.md) (and integrate with your Splunk and Sumo Logic instances).
+- An Azure [storage account](../storage/blobs/storage-blobs-introduction.md).
+- A [Log Analytics workspace](../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace) (to analyze data, create dashboards, and alert on specific events).
+- An Azure [event hub](../event-hubs/event-hubs-about.md) (and integrate with your Splunk and Sumo Logic instances).
![Azure Monitor](./media/azure-monitor/azure-monitor-flow.png)
In this article, you learn how to transfer the logs to an Azure Log Analytics workspace.
> [!IMPORTANT]
> When you plan to transfer Azure AD B2C logs to a different monitoring solution or repository, consider the following: Azure AD B2C logs contain personal data. Such data should be processed in a manner that ensures appropriate security of the personal data, including protection against unauthorized or unlawful processing, using appropriate technical or organizational measures.

## Deployment overview
-Azure AD B2C leverages [Azure Active Directory monitoring](../active-directory/reports-monitoring/overview-monitoring.md). To enable *Diagnostic settings* in Azure Active Directory within your Azure AD B2C tenant, you use [Azure Lighthouse](../lighthouse/overview.md) to [delegate a resource](../lighthouse/concepts/architecture.md), which allows your Azure AD B2C (the **Service Provider**) to manage an Azure AD (the **Customer**) resource. After you complete the steps in this article, you'll have access to the *azure-ad-b2c-monitor* resource group that contains the [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) in your **Azure AD B2C** portal. You'll also be able to transfer the logs from Azure AD B2C to your Log Analytics workspace.
+Azure AD B2C leverages [Azure Active Directory monitoring](../active-directory/reports-monitoring/overview-monitoring.md). Because an Azure AD B2C tenant, unlike Azure AD tenants, can't have a subscription associated with it, we need to take some additional steps to enable the integration between Azure AD B2C and Log Analytics, which is where we'll send the logs.
+To enable _Diagnostic settings_ in Azure Active Directory within your Azure AD B2C tenant, you use [Azure Lighthouse](../lighthouse/overview.md) to [delegate a resource](../lighthouse/concepts/architecture.md), which allows your Azure AD B2C (the **Service Provider**) to manage an Azure AD (the **Customer**) resource.
+
+> [!TIP]
+> Azure Lighthouse is typically used to manage resources for multiple customers. However, it can also be used to manage resources **within an enterprise which has multiple Azure AD tenants of its own**, which is what we are doing here, except that we are only delegating the management of a single resource group.
+
+After you complete the steps in this article, you'll have created a new resource group (here called _azure-ad-b2c-monitor_) and have access to that same resource group that contains the [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) in your **Azure AD B2C** portal. You'll also be able to transfer the logs from Azure AD B2C to your Log Analytics workspace.
-During this deployment, you'll authorize a user or group in your Azure AD B2C directory to configure the Log Analytics workspace instance within the tenant that contains your Azure subscription. To create the authorization, you deploy an [Azure Resource Manager](../azure-resource-manager/index.yml) template to your Azure AD tenant containing the subscription.
+During this deployment, you'll authorize a user or group in your Azure AD B2C directory to configure the Log Analytics workspace instance within the tenant that contains your Azure subscription. To create the authorization, you deploy an [Azure Resource Manager](../azure-resource-manager/index.yml) template to the subscription containing the Log Analytics workspace.
The following diagram depicts the components you'll configure in your Azure AD and Azure AD B2C tenants.

![Resource group projection](./media/azure-monitor/resource-group-projection.png)
-During this deployment, you'll configure both your Azure AD B2C tenant and Azure AD tenant where the Log Analytics workspace will be hosted. The Azure AD B2C account should be assigned the [Global Administrator](../active-directory/roles/permissions-reference.md#global-administrator) role on the Azure AD B2C tenant. The Azure AD account used to run the deployment must be assigned the [Owner](../role-based-access-control/built-in-roles.md#owner) role in the Azure AD subscription.It's also important to make sure you're signed in to the correct directory as you complete each step as described.
+During this deployment, you'll configure both your Azure AD B2C tenant and Azure AD tenant where the Log Analytics workspace will be hosted. The Azure AD B2C accounts used (such as your admin account) should be assigned the [Global Administrator](../active-directory/roles/permissions-reference.md#global-administrator) role on the Azure AD B2C tenant. The Azure AD account used to run the deployment must be assigned the [Owner](../role-based-access-control/built-in-roles.md#owner) role in the Azure AD subscription. It's also important to make sure you're signed in to the correct directory as you complete each step as described.
+
+In summary, you will use Azure Lighthouse to allow a user or group in your Azure AD B2C tenant to manage a resource group in a subscription associated with a different tenant (the Azure AD tenant). After this authorization is completed, the subscription and log analytics workspace can be selected as a target in the Diagnostic settings in Azure AD B2C.
## 1. Create or choose resource group
First, create or choose a resource group that contains the destination Log Analytics workspace that will receive data from Azure AD B2C.
1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select the **Directory + Subscription** icon in the portal toolbar, and then select the directory that contains your **Azure AD tenant**.
-1. [Create a resource group](../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups) or choose an existing one. This example uses a resource group named *azure-ad-b2c-monitor*.
+1. [Create a resource group](../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups) or choose an existing one. This example uses a resource group named _azure-ad-b2c-monitor_.
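
If you script your environment setup, the equivalent step can be done with Azure PowerShell. A minimal sketch, assuming the example names from this article (the region is an illustrative choice):

```powershell
# Sign in to the directory that contains your Azure subscription (the Azure AD tenant).
Connect-AzAccount

# Create the resource group that will hold the Log Analytics workspace.
New-AzResourceGroup -Name 'azure-ad-b2c-monitor' -Location 'eastus'
```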
## 2. Create a Log Analytics workspace
A **Log Analytics workspace** is a unique environment for Azure Monitor log data.
1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select the **Directory + Subscription** icon in the portal toolbar, and then select the directory that contains your **Azure AD tenant**.
-1. [Create a Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md). This example uses a Log Analytics workspace named *AzureAdB2C*, in a resource group named *azure-ad-b2c-monitor*.
+1. [Create a Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md). This example uses a Log Analytics workspace named _AzureAdB2C_, in a resource group named _azure-ad-b2c-monitor_.
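
The same step in Azure PowerShell might look like the following sketch; the SKU and region are assumptions, so adjust them to your environment:

```powershell
# Create the example workspace in the resource group from the previous step.
# 'PerGB2018' is the common pay-as-you-go pricing tier.
New-AzOperationalInsightsWorkspace `
    -ResourceGroupName 'azure-ad-b2c-monitor' `
    -Name 'AzureAdB2C' `
    -Location 'eastus' `
    -Sku 'PerGB2018'
```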
## 3. Delegate resource management
First, get the **Tenant ID** of your Azure AD B2C directory (also known as the directory ID).
### 3.2 Select a security group
-Now select an Azure AD B2C group or user to which you want to give permission to the resource group you created earlier in the directory containing your subscription.
+Now select an Azure AD B2C group or user to which you want to give permission to the resource group you created earlier in the directory containing your subscription.
-To make management easier, we recommend using Azure AD user *groups* for each role, allowing you to add or remove individual users to the group rather than assigning permissions directly to that user. In this walkthrough, we'll add a security group.
+To make management easier, we recommend using Azure AD user _groups_ for each role, allowing you to add or remove individual users to the group rather than assigning permissions directly to that user. In this walkthrough, we'll add a security group.
> [!IMPORTANT]
> In order to add permissions for an Azure AD group, the **Group type** must be set to **Security**. This option is selected when the group is created. For more information, see [Create a basic group and add members using Azure Active Directory](../active-directory/fundamentals/active-directory-groups-create-azure-portal.md).
-1. With **Azure Active Directory** still selected in your **Azure AD B2C** directory, select **Groups**, and then select a group. If you don't have an existing group, create a **Security** group, then add members. For more information, follow the procedure [Create a basic group and add members using Azure Active Directory](../active-directory/fundamentals/active-directory-groups-create-azure-portal.md).
+1. With **Azure Active Directory** still selected in your **Azure AD B2C** directory, select **Groups**, and then select a group. If you don't have an existing group, create a **Security** group, then add members. For more information, follow the procedure [Create a basic group and add members using Azure Active Directory](../active-directory/fundamentals/active-directory-groups-create-azure-portal.md).
1. Select **Overview**, and record the group's **Object ID**.

### 3.3 Create an Azure Resource Manager template
-Next, you'll create an Azure Resource Manager template that grants Azure AD B2C access to the Azure AD resource group you created earlier (for example, *azure-ad-b2c-monitor*). Deploy the template from the GitHub sample by using the **Deploy to Azure** button, which opens the Azure portal and lets you configure and deploy the template directly in the portal. For these steps, make sure you're signed in to your Azure AD tenant (not the Azure AD B2C tenant).
+To create the custom authorization and delegation in Azure Lighthouse, we use an Azure Resource Manager template that grants Azure AD B2C access to the Azure AD resource group you created earlier (for example, _azure-ad-b2c-monitor_). Deploy the template from the GitHub sample by using the **Deploy to Azure** button, which opens the Azure portal and lets you configure and deploy the template directly in the portal. For these steps, make sure you're signed in to your Azure AD tenant (not the Azure AD B2C tenant).
1. Sign in to the [Azure portal](https://portal.azure.com).
2. Select the **Directory + Subscription** icon in the portal toolbar, and then select the directory that contains your **Azure AD** tenant.
3. Use the **Deploy to Azure** button to open the Azure portal and deploy the template directly in the portal. For more information, see [create an Azure Resource Manager template](../lighthouse/how-to/onboard-customer.md#create-an-azure-resource-manager-template).
- [![Deploy to Azure](https://aka.ms/deploytoazurebutton)]( https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure-ad-b2c%2Fsiem%2Fmaster%2Ftemplates%2FrgDelegatedResourceManagement.json)
+ [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure-ad-b2c%2Fsiem%2Fmaster%2Ftemplates%2FrgDelegatedResourceManagement.json)
-5. On the **Custom deployment** page, enter the following information:
+4. On the **Custom deployment** page, enter the following information:
- | Field | Definition |
- |||
- | Subscription | Select the directory that contains the Azure subscription where the *azure-ad-b2c-monitor* resource group was created. |
- | Region| Select the region where the resource will be deployed. |
- | Msp Offer Name| A name describing this definition. For example, *Azure AD B2C Monitoring*. |
- | Msp Offer Description| A brief description of your offer. For example, *Enables Azure Monitor in Azure AD B2C*.|
- | Managed By Tenant Id| The **Tenant ID** of your Azure AD B2C tenant (also known as the directory ID). |
- |Authorizations|Specify a JSON array of objects that include the Azure AD `principalId`, `principalIdDisplayName`, and Azure `roleDefinitionId`. The `principalId` is the **Object ID** of the B2C group or user that will have access to resources in this Azure subscription. For this walkthrough, specify the group's Object ID that you recorded earlier. For the `roleDefinitionId`, use the [built-in role](../role-based-access-control/built-in-roles.md) value for the *Contributor role*, `b24988ac-6180-42a0-ab88-20f7382dd24c`.|
- | Rg Name | The name of the resource group you create earlier in your Azure AD tenant. For example, *azure-ad-b2c-monitor*. |
+ | Field | Definition |
+ | ----- | ---------- |
+ | Subscription | Select the directory that contains the Azure subscription where the _azure-ad-b2c-monitor_ resource group was created. |
+ | Region | Select the region where the resource will be deployed. |
+ | Msp Offer Name | A name describing this definition. For example, _Azure AD B2C Monitoring_. This is the name that will be displayed in Azure Lighthouse. |
+ | Msp Offer Description | A brief description of your offer. For example, _Enables Azure Monitor in Azure AD B2C_. |
+ | Managed By Tenant Id | The **Tenant ID** of your Azure AD B2C tenant (also known as the directory ID). |
+ | Authorizations | Specify a JSON array of objects that include the Azure AD `principalId`, `principalIdDisplayName`, and Azure `roleDefinitionId`. The `principalId` is the **Object ID** of the B2C group or user that will have access to resources in this Azure subscription. For this walkthrough, specify the group's Object ID that you recorded earlier. For the `roleDefinitionId`, use the [built-in role](../role-based-access-control/built-in-roles.md) value for the _Contributor role_, `b24988ac-6180-42a0-ab88-20f7382dd24c`. |
+ | Rg Name | The name of the resource group you created earlier in your Azure AD tenant. For example, _azure-ad-b2c-monitor_. |
The following example demonstrates an Authorizations array with one security group.

```json
[
- {
- "principalId": "<Replace with group's OBJECT ID>",
- "principalIdDisplayName": "Azure AD B2C tenant administrators",
- "roleDefinitionId": "b24988ac-6180-42a0-ab88-20f7382dd24c"
- }
+ {
+ "principalId": "<Replace with group's OBJECT ID>",
+ "principalIdDisplayName": "Azure AD B2C tenant administrators",
+ "roleDefinitionId": "b24988ac-6180-42a0-ab88-20f7382dd24c"
+ }
]
```
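
If you'd rather script the deployment, the following Azure PowerShell sketch deploys the same template at subscription scope. The parameter names are inferred from the Custom deployment fields above and may differ from the template's actual parameter names; the group display name, region, and tenant ID are placeholders:

```powershell
# Look up the security group whose Object ID goes into 'principalId'.
$group = Get-AzADGroup -DisplayName 'Azure AD B2C tenant administrators'

$params = @{
    mspOfferName        = 'Azure AD B2C Monitoring'
    mspOfferDescription = 'Enables Azure Monitor in Azure AD B2C'
    managedByTenantId   = '<your Azure AD B2C tenant ID>'
    authorizations      = @(@{
        principalId            = $group.Id
        principalIdDisplayName = 'Azure AD B2C tenant administrators'
        roleDefinitionId       = 'b24988ac-6180-42a0-ab88-20f7382dd24c'  # Contributor
    })
    rgName              = 'azure-ad-b2c-monitor'
}

# Lighthouse onboarding templates are deployed at subscription scope.
New-AzDeployment -Location 'eastus' `
    -TemplateUri 'https://raw.githubusercontent.com/azure-ad-b2c/siem/master/templates/rgDelegatedResourceManagement.json' `
    -TemplateParameterObject $params
```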
## 4. Select your subscription

After you've deployed the template and waited a few minutes for the resource projection to complete, follow these steps to associate your subscription with your Azure AD B2C directory:
1. Sign out of the Azure portal if you're currently signed in (this allows your session credentials to be refreshed in the next step).
2. Sign in to the [Azure portal](https://portal.azure.com) with your **Azure AD B2C** administrative account. This account must be a member of the security group you specified in the [Delegate resource management](#3-delegate-resource-management) step.
3. Select the **Directory + Subscription** icon in the portal toolbar.
-4. Select the Azure AD directory that contains the Azure subscription and the *azure-ad-b2c-monitor* resource group you created.
+4. Select the Azure AD directory that contains the Azure subscription and the _azure-ad-b2c-monitor_ resource group you created.
- ![Switch directory](./media/azure-monitor/azure-monitor-portal-03-select-subscription.png)
+ ![Switch directory](./media/azure-monitor/azure-monitor-portal-03-select-subscription.png)
-1. Verify that you've selected the correct directory and subscription. In this example, all directories and all subscriptions are selected.
+5. Verify that you've selected the correct directory and subscription. In this example, all directories and all subscriptions are selected.
- ![All directories selected in Directory & Subscription filter](./media/azure-monitor/azure-monitor-portal-04-subscriptions-selected.png)
+ ![All directories selected in Directory & Subscription filter](./media/azure-monitor/azure-monitor-portal-04-subscriptions-selected.png)
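
You can also verify the delegation from a PowerShell session signed in with the Azure AD B2C account. This is a sketch, assuming the Az.ManagedServices module is installed:

```powershell
# Confirm the delegated subscription is visible to this account.
Get-AzSubscription

# Inspect the Lighthouse registration created by the template deployment.
Get-AzManagedServicesDefinition
Get-AzManagedServicesAssignment
```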
## 5. Configure diagnostic settings
To configure monitoring settings for Azure AD B2C activity logs:
1. Select the **Directory + Subscription** icon in the portal toolbar, and then select the directory that contains your Azure AD B2C tenant.
1. Select **Azure Active Directory**.
1. Under **Monitoring**, select **Diagnostic settings**.
-1. If there are existing settings for the resource, you will see a list of settings already configured. Either select **Add diagnostic setting** to add a new setting, or select **Edit** to edit an existing setting. Each setting can have no more than one of each of the destination types.
+1. If there are existing settings for the resource, you'll see a list of settings already configured. Either select **Add diagnostic setting** to add a new setting, or select **Edit** to edit an existing setting. Each setting can have no more than one of each of the destination types.
- ![Diagnostics settings pane in Azure portal](./media/azure-monitor/azure-monitor-portal-05-diagnostic-settings-pane-enabled.png)
+ ![Diagnostics settings pane in Azure portal](./media/azure-monitor/azure-monitor-portal-05-diagnostic-settings-pane-enabled.png)
1. Give your setting a name if it doesn't already have one.
1. Check the box for each destination to send the logs. Select **Configure** to specify their settings **as described in the following table**.
To configure monitoring settings for Azure AD B2C activity logs:
> [!NOTE] > It can take up to 15 minutes after an event is emitted for it to [appear in a Log Analytics workspace](../azure-monitor/logs/data-ingestion-time.md). Also, learn more about [Active Directory reporting latencies](../active-directory/reports-monitoring/reference-reports-latencies.md), which can impact the staleness of data and play an important role in reporting.
-If you see the error message "To setup Diagnostic settings to use Azure Monitor for your Azure AD B2C directory, you need to set up delegated resource management," make sure you sign-in with a user who is a member of the [security group](#32-select-a-security-group) and [select your subscription](#4-select-your-subscription).
+If you see the error message "To set up Diagnostic settings to use Azure Monitor for your Azure AD B2C directory, you need to set up delegated resource management," make sure you sign in with a user who is a member of the [security group](#32-select-a-security-group) and [select your subscription](#4-select-your-subscription).
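
Behind the scenes, tenant-level diagnostic settings live under the `microsoft.aadiam` resource provider rather than under a normal ARM resource. As a rough sketch of what the portal configures for you (the setting name and workspace path are placeholders, and the API version shown is an assumption):

```powershell
# Hypothetical values - substitute your subscription ID, resource group, and workspace.
$workspaceId = '/subscriptions/<subscription-id>/resourcegroups/azure-ad-b2c-monitor/providers/microsoft.operationalinsights/workspaces/AzureAdB2C'

$body = @{
    properties = @{
        workspaceId = $workspaceId
        logs        = @(
            @{ category = 'AuditLogs';  enabled = $true },
            @{ category = 'SignInLogs'; enabled = $true }
        )
    }
} | ConvertTo-Json -Depth 5

# Run while signed in with the delegated Azure AD B2C account.
Invoke-AzRestMethod -Method PUT `
    -Path '/providers/microsoft.aadiam/diagnosticSettings/SendToLogAnalytics?api-version=2017-04-01' `
    -Payload $body
```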
## 6. Visualize your data
Log queries help you to fully leverage the value of the data collected in Azure Monitor Logs.
1. From **Log Analytics workspace**, select **Logs**.
1. In the query editor, paste the following [Kusto Query Language](/azure/data-explorer/kusto/query/) query. This query shows policy usage by operation over the past x days. The default duration is set to 90 days (90d). Notice that the query is focused only on the operation where a token/code is issued by policy.
- ```kusto
- AuditLogs
- | where TimeGenerated > ago(90d)
- | where OperationName contains "issue"
- | extend UserId=extractjson("$.[0].id",tostring(TargetResources))
- | extend Policy=extractjson("$.[1].value",tostring(AdditionalDetails))
- | summarize SignInCount = count() by Policy, OperationName
- | order by SignInCount desc nulls last
- ```
+ ```kusto
+ AuditLogs
+ | where TimeGenerated > ago(90d)
+ | where OperationName contains "issue"
+ | extend UserId=extractjson("$.[0].id",tostring(TargetResources))
+ | extend Policy=extractjson("$.[1].value",tostring(AdditionalDetails))
+ | summarize SignInCount = count() by Policy, OperationName
+ | order by SignInCount desc nulls last
+ ```
1. Select **Run**. The query results are displayed at the bottom of the screen.
1. To save your query for later use, select **Save**.
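
The same query can be run from Azure PowerShell; a sketch, assuming the workspace names used earlier in this article:

```powershell
# Find the workspace and run the policy-usage query against it.
$wks = Get-AzOperationalInsightsWorkspace -ResourceGroupName 'azure-ad-b2c-monitor' -Name 'AzureAdB2C'

$query = @'
AuditLogs
| where TimeGenerated > ago(90d)
| where OperationName contains "issue"
| extend Policy=extractjson("$.[1].value",tostring(AdditionalDetails))
| summarize SignInCount = count() by Policy, OperationName
| order by SignInCount desc nulls last
'@

(Invoke-AzOperationalInsightsQuery -WorkspaceId $wks.CustomerId -Query $query).Results
```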
Log queries help you to fully leverage the value of the data collected in Azure Monitor Logs.
1. Fill in the following details:
- - **Name** - Enter the name of your query.
- - **Save as** - Select `query`.
- - **Category** - Select `Log`.
+ - **Name** - Enter the name of your query.
+ - **Save as** - Select `query`.
+ - **Category** - Select `Log`.
1. Select **Save**.
Follow the instructions below to create a new workbook using a JSON Gallery Template.
1. From the toolbar, select the **+ New** option to create a new workbook.
1. On the **New workbook** page, select the **Advanced Editor** using the **</>** option on the toolbar.
- ![Gallery Template](./media/azure-monitor/wrkb-adv-editor.png)
+ ![Gallery Template](./media/azure-monitor/wrkb-adv-editor.png)
1. Select **Gallery Template**.
-1. Replace the JSON in the **Gallery Template** with the content from [Azure AD B2C basic workbook](https://raw.githubusercontent.com/azure-ad-b2c/siem/master/workbooks/dashboard.json):
+1. Replace the JSON in the **Gallery Template** with the content from [Azure AD B2C basic workbook](https://raw.githubusercontent.com/azure-ad-b2c/siem/master/workbooks/dashboard.json):
1. Apply the template by using the **Apply** button.
1. Select the **Done Editing** button from the toolbar to finish editing the workbook.
1. Finally, save the workbook by using the **Save** button from the toolbar.
-1. Provide a **Title**, such as *Azure AD B2C Dashboard*.
+1. Provide a **Title**, such as _Azure AD B2C Dashboard_.
1. Select **Save**.
- ![Save the workbook](./media/azure-monitor/wrkb-title.png)
+ ![Save the workbook](./media/azure-monitor/wrkb-title.png)
The workbook will display reports in the form of a dashboard.
![Workbook third dashboard](./media/azure-monitor/wrkb-dashboard-3.png)

## Create alerts

Alerts are created by alert rules in Azure Monitor and can automatically run saved queries or custom log searches at regular intervals. You can create alerts based on specific performance metrics, or when certain events are created, an event is absent, or a number of events are created within a particular time window. For example, alerts can be used to notify you when the average number of sign-ins exceeds a certain threshold. For more information, see [Create alerts](../azure-monitor/alerts/alerts-log.md).

Use the following instructions to create a new Azure alert, which will send an [email notification](../azure-monitor/alerts/action-groups.md#configure-notifications) whenever there's a 25% drop in **Total Requests** compared to the previous period. The alert will run every 5 minutes and look for a drop in the last hour compared to the hour before it. The alerts are created using Kusto Query Language.
-1. From **Log Analytics workspace**, select **Logs**.
+1. From **Log Analytics workspace**, select **Logs**.
1. Create a new **Kusto query** by using the query below.
- ```kusto
- let start = ago(2h);
- let end = now();
- let threshold = -25; //25% decrease in total requests.
- AuditLogs
- | serialize TimeGenerated, CorrelationId, Result
- | make-series TotalRequests=dcount(CorrelationId) on TimeGenerated from start to end step 1h
- | mvexpand TimeGenerated, TotalRequests
- | serialize TotalRequests, TimeGenerated, TimeGeneratedFormatted=format_datetime(todatetime(TimeGenerated), 'yyyy-MM-dd [HH:mm:ss]')
- | project TimeGeneratedFormatted, TotalRequests, PercentageChange= ((toreal(TotalRequests) - toreal(prev(TotalRequests,1)))/toreal(prev(TotalRequests,1)))*100
- | order by TimeGeneratedFormatted desc
- | where PercentageChange <= threshold //Trigger's alert rule if matched.
- ```
+ ```kusto
+ let start = ago(2h);
+ let end = now();
+ let threshold = -25; //25% decrease in total requests.
+ AuditLogs
+ | serialize TimeGenerated, CorrelationId, Result
+ | make-series TotalRequests=dcount(CorrelationId) on TimeGenerated from start to end step 1h
+ | mvexpand TimeGenerated, TotalRequests
+ | serialize TotalRequests, TimeGenerated, TimeGeneratedFormatted=format_datetime(todatetime(TimeGenerated), 'yyyy-MM-dd [HH:mm:ss]')
+ | project TimeGeneratedFormatted, TotalRequests, PercentageChange= ((toreal(TotalRequests) - toreal(prev(TotalRequests,1)))/toreal(prev(TotalRequests,1)))*100
+ | order by TimeGeneratedFormatted desc
+ | where PercentageChange <= threshold //Trigger's alert rule if matched.
+ ```
1. Select **Run** to test the query. You should see results if there is a drop of 25% or more in the total requests within the past hour.
1. To create an alert rule based on the query above, use the **+ New alert rule** option available in the toolbar.
-1. On the **Create an alert rule** page, select **Condition name**
+1. On the **Create an alert rule** page, select **Condition name**
1. On the **Configure signal logic** page, set the following values, and then use the **Done** button to save the changes.
- * Alert logic: Set **Number of results** **Greater than** **0**.
- * Evaluation based on: Select **120** for Period (in minutes) and **5** for Frequency (in minutes)
- ![Create a alert rule condition](./media/azure-monitor/alert-create-rule-condition.png)
+ - Alert logic: Set **Number of results** **Greater than** **0**.
+ - Evaluation based on: Select **120** for Period (in minutes) and **5** for Frequency (in minutes)
+
+ ![Create an alert rule condition](./media/azure-monitor/alert-create-rule-condition.png)
-After the alert is created, go to **Log Analytics workspace** and select **Alerts**. This page displays all the alerts that have been triggered in the duration set by **Time range** option.
+After the alert is created, go to **Log Analytics workspace** and select **Alerts**. This page displays all the alerts that have been triggered in the duration set by the **Time range** option.
### Configure action groups

Azure Monitor and Service Health alerts use action groups to notify users that an alert has been triggered. You can include sending a voice call, SMS, or email, or triggering various types of automated actions. Follow the guidance in [Create and manage action groups in the Azure portal](../azure-monitor/alerts/action-groups.md).
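
If you manage alerting through scripts, an action group with an email receiver can be created along these lines (a sketch, assuming the Az.Monitor module; the names and address are illustrative):

```powershell
# Define an email receiver and create (or update) an action group that uses it.
$receiver = New-AzActionGroupReceiver -Name 'notify-admins' -EmailReceiver -EmailAddress 'admin@contoso.com'

Set-AzActionGroup -ResourceGroupName 'azure-ad-b2c-monitor' `
    -Name 'b2c-alerts' `
    -ShortName 'b2calerts' `
    -Receiver $receiver
```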
-Here is an example of an alert notification email.
+Here is an example of an alert notification email.
- ![Email notification](./media/azure-monitor/alert-email-notification.png)
+![Email notification](./media/azure-monitor/alert-email-notification.png)
## Multiple tenants
Azure Monitor Logs are designed to scale and support collecting, indexing, and storing massive amounts of data per day from any source in your enterprise or deployed in Azure.
## Next steps
-* Find more samples in the Azure AD B2C [SIEM gallery](https://aka.ms/b2csiem).
+- Find more samples in the Azure AD B2C [SIEM gallery](https://aka.ms/b2csiem).
-* For more information about adding and configuring diagnostic settings in Azure Monitor, see [Tutorial: Collect and analyze resource logs from an Azure resource](../azure-monitor/essentials/monitor-azure-resource.md).
+- For more information about adding and configuring diagnostic settings in Azure Monitor, see [Tutorial: Collect and analyze resource logs from an Azure resource](../azure-monitor/essentials/monitor-azure-resource.md).
-* For information about streaming Azure AD logs to an event hub, see [Tutorial: Stream Azure Active Directory logs to an Azure event hub](../active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md).
+- For information about streaming Azure AD logs to an event hub, see [Tutorial: Stream Azure Active Directory logs to an Azure event hub](../active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md).
active-directory-b2c Partner Bloksec https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-bloksec.md
zone_pivot_groups: b2c-policy-type
::: zone-end
-In this sample tutorial, learn how to integrate Azure Active Directory (AD) B2C authentication with BlokSec. BlokSec is a decentralized identity platform that provides organizations with true passwordless authentication, tokenless multifactor authentication, and real-time consent-based services. BlokSec's Decentralized-Identity-as-a-Service (DIaaS)™ platform provides a frictionless and secure solution to protect websites and mobile apps, web-based business applications, and remote services. Also, it eliminates the need for passwords and simplifies the end-user login process. BlokSec protects customers against identity-centric cyber-attacks such as password stuffing, phishing, and man-in-the-middle attacks.
-
-With Azure AD B2C as an identity provider, you can integrate BlokSec with any of your customer applications to provide true passwordless authentication and real-time consent-based authorization to your users.
+In this sample tutorial, learn how to integrate Azure Active Directory (AD) B2C authentication with [BlokSec](https://bloksec.com/). BlokSec simplifies the end-user login experience by providing customers passwordless authentication and tokenless multifactor authentication (MFA). BlokSec protects customers against identity-centric cyber-attacks such as password stuffing, phishing, and man-in-the-middle attacks.
## Scenario description
active-directory-b2c Partner Gallery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-gallery.md
Microsoft partners with the following ISVs for MFA and Passwordless authentication.
| ISV partner | Description and integration walkthroughs |
|:-|:--|
-| ![Screenshot of a bloksec logo](./medi) is a decentralized identity platform that provides organizations with true passwordless authentication, tokenless MFA, and real-time consent-based services. |
+| ![Screenshot of a bloksec logo](./medi) is a passwordless authentication and tokenless MFA solution, which provides real-time consent-based services and protects customers against identity-centric cyber-attacks such as password stuffing, phishing, and man-in-the-middle attacks. |
| ![Screenshot of a hypr logo](./medi) is a passwordless authentication provider, which replaces passwords with public key encryptions eliminating fraud, phishing, and credential reuse. |
| ![Screenshot of a itsme logo](./medi) is an Electronic Identification, Authentication and Trust Services (eiDAS) compliant digital ID solution to allow users to sign in securely without card readers, passwords, two-factor authentication, and multiple PIN codes. |
| ![Screenshot of a Keyless logo.](./medi) is a passwordless authentication provider that provides authentication in the form of a facial biometric scan and eliminates fraud, phishing, and credential reuse. |
active-directory Concept Authentication Methods https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-authentication-methods.md
The following table outlines when an authentication method can be used during a sign-in event.
| Windows Hello for Business | Yes | MFA |
| Microsoft Authenticator app | Yes | MFA and SSPR |
| FIDO2 security key | Yes | MFA |
-| OATH hardware tokens (preview) | No | MFA |
-| OATH software tokens | No | MFA |
+| OATH hardware tokens (preview) | No | MFA and SSPR |
+| OATH software tokens | No | MFA and SSPR |
| SMS | Yes | MFA and SSPR |
| Voice call | No | MFA and SSPR |
| Password | Yes | |
active-directory Concept Authentication Oath Tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-authentication-oath-tokens.md
Some OATH TOTP hardware tokens are programmable, meaning they don't come with a secret key or seed pre-programmed.
## OATH hardware tokens (Preview)
-Azure AD supports the use of OATH-TOTP SHA-1 tokens that refresh codes every 30 or 60 seconds. Customers can purchase these tokens from the vendor of their choice. For a list of security token providers that are compatible with passwordless authentication, see [FIDO2 security key providers](concept-authentication-passwordless.md#fido2-security-key-providers).
+Azure AD supports the use of OATH-TOTP SHA-1 tokens that refresh codes every 30 or 60 seconds. Customers can purchase these tokens from the vendor of their choice.
OATH TOTP hardware tokens typically come with a secret key, or seed, pre-programmed in the token. These keys must be input into Azure AD as described in the following steps. Secret keys are limited to 128 characters, which may not be compatible with all tokens. The secret key can only contain the characters *a-z* or *A-Z* and digits *2-7*, and must be encoded in *Base32*.
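
To illustrate those constraints, a quick local check before uploading a seed might look like the following; `Test-OathSecretKey` is a hypothetical helper, not part of any Azure tooling:

```powershell
# Hypothetical helper: validate a token seed against the constraints above.
function Test-OathSecretKey {
    param([string]$SecretKey)

    # Max 128 characters; only letters a-z/A-Z and digits 2-7 (Base32).
    return ($SecretKey.Length -le 128) -and ($SecretKey -match '^[A-Za-z2-7]+$')
}

Test-OathSecretKey 'JBSWY3DPEHPK3PXP'   # True
Test-OathSecretKey 'not-a-valid-seed!'  # False: '-' and '!' are outside Base32
```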
Users may have a combination of up to five OATH hardware tokens or authenticator applications, such as the Microsoft Authenticator app, configured for use at any time.
## Next steps

Learn more about configuring authentication methods using the [Microsoft Graph REST API](/graph/api/resources/authenticationmethods-overview).
+Learn about [FIDO2 security key providers](concept-authentication-passwordless.md#fido2-security-key-providers) that are compatible with passwordless authentication.
active-directory Groups Self Service Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/groups-self-service-management.md
Previously updated : 06/23/2021
Last updated : 07/27/2021
The group settings enable you to control who can create security and Microsoft 365 groups.
![Azure Active Directory security groups setting change.](./media/groups-self-service-management/security-groups-setting.png)
+> [!NOTE]
+> The behavior of these settings recently changed. Make sure these settings are configured for your organization. For more information, see [Why were the group settings changed?](#why-were-the-group-settings-changed).
The following table helps you decide which values to choose.

| Setting | Value | Effect on your tenant |
Here are some additional details about these group settings.
- If you want to enable some, but not all, of your users to create groups, you can assign those users a role that can create groups, such as [Groups Administrator](../roles/permissions-reference.md#groups-administrator).
- These settings are for users and don't impact service principals. For example, if you have a service principal with permissions to create groups, even if you set these settings to **No**, the service principal will still be able to create groups.
+### Why were the group settings changed?
+
+The previous implementation of the group settings was named **Users can create security groups in Azure portals** and **Users can create Microsoft 365 groups in Azure portals**. The previous settings only controlled group creation in Azure portals and did not apply to API or PowerShell. The new settings control group creation in Azure portals as well as API and PowerShell. The new settings are more secure.
+
+The default values for the new settings have been set to your previous API or PowerShell values. There is a possibility that the default values for the new settings are different from your previous values that controlled only the Azure portal behavior. Starting in May 2021, there was a transition period of a few weeks where you could select your preferred default value before the new settings took effect. Now that the new settings have taken effect, you are required to verify that the new settings are configured for your organization.
## Next steps

These articles provide additional information on Azure Active Directory.
active-directory Identity Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/identity-providers.md
Previously updated : 07/13/2021
Last updated : 07/26/2021
# Identity Providers for External Identities
-> [!NOTE]
-> Some of the features mentioned in this article are public preview features of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
An *identity provider* creates, maintains, and manages identity information while providing authentication services to applications. When sharing your apps and resources with external users, Azure AD is the default identity provider for sharing. This means when you invite external users who already have an Azure AD or Microsoft account, they can automatically sign in without further configuration on your part. In addition to Azure AD accounts, External Identities offers a variety of identity providers.

-- **Microsoft accounts** (Preview): Guest users can use their own personal Microsoft account (MSA) to redeem your B2B collaboration invitations. When setting up a self-service sign-up user flow, you can add [Microsoft Account (Preview)](microsoft-account.md) as one of the allowed identity providers. No additional configuration is needed to make this identity provider available for user flows.
+- **Microsoft accounts**: Guest users can use their own personal Microsoft account (MSA) to redeem your B2B collaboration invitations. When setting up a self-service sign-up user flow, you can add [Microsoft Account](microsoft-account.md) as one of the allowed identity providers. No additional configuration is needed to make this identity provider available for user flows.
-- **Email one-time passcode** (Preview): When redeeming an invitation or accessing a shared resource, a guest user can request a temporary code, which is sent to their email address. Then they enter this code to continue signing in. The email one-time passcode feature authenticates B2B guest users when they can't be authenticated through other means. When setting up a self-service sign-up user flow, you can add **Email One-Time Passcode (Preview)** as one of the allowed identity providers. Some setup is required; see [Email one-time passcode authentication](one-time-passcode.md).
+- **Email one-time passcode**: When redeeming an invitation or accessing a shared resource, a guest user can request a temporary code, which is sent to their email address. Then they enter this code to continue signing in. The email one-time passcode feature authenticates B2B guest users when they can't be authenticated through other means. When setting up a self-service sign-up user flow, you can add **Email One-Time Passcode** as one of the allowed identity providers. Some setup is required; see [Email one-time passcode authentication](one-time-passcode.md).
- **Google**: Google federation allows external users to redeem invitations from you by signing in to your apps with their own Gmail accounts. Google federation can also be used in your self-service sign-up user flows. See how to [add Google as an identity provider](google-federation.md).

> [!IMPORTANT]
active-directory Microsoft Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/microsoft-account.md
Previously updated : 03/02/2021
Last updated : 07/26/2021
-# Microsoft account (MSA) identity provider for External Identities (Preview)
-
-> [!NOTE]
-> The Microsoft account identity provider is a public preview feature of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+# Microsoft account (MSA) identity provider for External Identities
Your B2B guest users can use their own personal Microsoft accounts for B2B collaboration without further configuration. Guest users can redeem your B2B collaboration invitations or complete your sign-up user flows using their personal Microsoft account.
active-directory One Time Passcode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/one-time-passcode.md
Previously updated : 06/30/2021
Last updated : 07/26/2021
The email one-time passcode feature is a way to authenticate B2B collaboration users when they can't be authenticated through other means, such as Azure AD, Microsoft account (MSA), or social identity providers. When a B2B guest user tries to redeem your invitation or sign in to your shared resources, they can request a temporary passcode, which is sent to their email address. Then they enter this passcode to continue signing in.
-You can enable this feature at any time in the Azure portal by configuring the Email one-time passcode (Preview) identity provider under your tenant's External Identities settings. You can choose to enable the feature, disable it, or wait for automatic enablement in October 2021.
+You can enable this feature at any time in the Azure portal by configuring the Email one-time passcode identity provider under your tenant's External Identities settings. You can choose to enable the feature, disable it, or wait for automatic enablement in October 2021.
![Email one-time passcode overview diagram](media/one-time-passcode/email-otp.png)
active-directory Self Service Sign Up User Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/self-service-sign-up-user-flow.md
Previously updated : 03/02/2021
Last updated : 07/26/2021
# Add a self-service sign-up user flow to an app
-> [!NOTE]
-> Some of the features mentioned in this article are public preview features of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
For applications you build, you can create user flows that allow a user to sign up for an app and create a new guest account. A self-service sign-up user flow defines the series of steps the user will follow during sign-up, the identity providers you'll allow them to use, and the user attributes you want to collect. You can associate one or more applications with a single user flow.

> [!NOTE]
For applications you build, you can create user flows that allow a user to sign up for an app and create a new guest account.
### Add identity providers (optional)
-Azure AD is the default identity provider for self-service sign-up. This means that users are able to sign up by default with an Azure AD account. In your self-service sign-up user flows, you can also include social identity providers like Google and Facebook, Microsoft Account (Preview), and Email One-time Passcode (Preview). For more information, see these articles:
+Azure AD is the default identity provider for self-service sign-up. This means that users are able to sign up by default with an Azure AD account. In your self-service sign-up user flows, you can also include social identity providers like Google and Facebook, Microsoft Account, and Email One-time Passcode. For more information, see these articles:
-- [Microsoft Account (Preview) identity provider](microsoft-account.md)
+- [Microsoft Account identity provider](microsoft-account.md)
- [Email one-time passcode authentication](one-time-passcode.md)
- [Add Facebook to your list of social identity providers](facebook-federation.md)
- [Add Google to your list of social identity providers](google-federation.md)
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/whats-new-docs.md
Welcome to what's new in Azure Active Directory external identities documentation.
### New articles

-- [Microsoft Account (MSA) identity provider for External Identities (Preview)](microsoft-account.md)
+- [Microsoft Account (MSA) identity provider for External Identities](microsoft-account.md)
### Updated articles
active-directory Entitlement Management Access Package Incompatible https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-incompatible.md
# Configure separation of duties checks for an access package in Azure AD entitlement management (Preview)
-In each of an access package's policies, you can specify who is able to request that access package, such as all member users in your organization, or only users who are already a member of a particular group. However, you may wish to further restrict access, in order to avoid a user from obtaining excessive access.
+In Azure AD entitlement management, you can configure multiple policies, with different settings for each user community that will need access through an access package. For example, employees might only need manager approval to get access to certain apps, but guests coming in from other organizations may require both a sponsor and a resource team departmental manager to approve. In a policy for users already in the directory, you can specify a particular group of users who can request access. However, you may have a requirement to avoid a user obtaining excessive access. To meet this requirement, you will want to further restrict who can request access, based on the access the requestor already has.
-With the separation of duties settings on an access package, you can configure that a user cannot request an access package, if they already have an assignment to another access package, or are a member of a group.
+With the separation of duties settings on an access package, you can configure that a user who is a member of a group or who already has an assignment to one access package cannot request an additional access package.
-For example, you have an access package, *Marketing Campaign*, that people across your organization and other organizations can request access to, to work with your organization's marketing department on that marketing campaign. Since employees in the marketing department should already have access to that marketing campaign material, you wouldn't want employees in the marketing department to request access to that access package. Or, you may already have a dynamic group, *Marketing department employees*, with all of the marketing employees in it. You could indicate that the access package is incompatible with the membership of that dynamic group. Then, if a marketing department employee is looking for an access package to request, they couldn't request access to the *Marketing campaign* access package.
+![myaccess experience for attempting to request incompatible access](./media/entitlement-management-access-package-incompatible/request-prevented.png)
+## Scenarios for separation of duties checks
+
+For example, you have an access package, *Marketing Campaign*, that people across your organization and other organizations can request access to in order to work with your organization's marketing department while that campaign is going on. Since employees in the marketing department should already have access to that marketing campaign material, you don't want employees in the marketing department to request access to that access package. Or, you may already have a dynamic group, *Marketing department employees*, with all of the marketing employees in it. You could indicate that the access package is incompatible with the membership of that dynamic group. Then, if a marketing department employee is looking for an access package to request, they couldn't request access to the *Marketing campaign* access package.
Similarly, you may have an application with two roles - **Western Sales** and **Eastern Sales** - and want to ensure that a user can only have one sales territory at a time. If you have two access packages, one access package **Western Territory** giving the **Western Sales** role and the other access package **Eastern Territory** giving the **Eastern Sales** role, then you can configure:

- the **Western Territory** access package has the **Eastern Territory** package as incompatible, and
- the **Eastern Territory** access package has the **Western Territory** package as incompatible.
+If you've been using Microsoft Identity Manager or other on-premises identity management systems for automating access for on-premises apps, then you can integrate these systems with Azure AD entitlement management as well. If you will be controlling access to Azure AD-integrated apps through entitlement management, and want to prevent users from having incompatible access, you can configure that an access package is incompatible with a group. That could be a group, which your on-premises identity management system sends into Azure AD through Azure AD Connect. This check ensures a user will be unable to request an access package, if that access package would give access that's incompatible with access the user has in on-premises apps.
## Prerequisites

To use Azure AD entitlement management and assign users to access packages, you must have one of the following licenses:
Follow these steps to change the list of incompatible groups or other access packages for an access package:
1. If you wish to prevent users who already have an assignment to another access package from requesting this access package, click on **Add access package** and select the access package that the user would already be assigned.
+ ![configuration of incompatible access packages](./media/entitlement-management-access-package-incompatible/select-incompatible-ap.png)
+### Configure incompatible access packages programmatically
+
+You can also configure the groups and other access packages that are incompatible with an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission, or an application with that application permission, can call the API to add, remove, and list the incompatible groups and access packages [of an access package](/graph/api/resources/accesspackage?view=graph-rest-beta&preserve-view=true).
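
For example, listing the incompatible access packages might look like the following sketch using the Microsoft Graph PowerShell SDK; the access package ID is a placeholder, and the endpoint shape follows the beta API referenced above:

```powershell
# Requires the Microsoft.Graph PowerShell SDK.
Connect-MgGraph -Scopes 'EntitlementManagement.ReadWrite.All'

$accessPackageId = '<access package ID>'

# List the access packages configured as incompatible with this one (beta endpoint).
Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackages/$accessPackageId/incompatibleAccessPackages"
```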
## View other access packages that are configured as incompatible with this one

**Prerequisite role**: Global administrator, Identity Governance administrator, User administrator, Catalog owner or Access package manager
Follow these steps to view the list of other access packages that have indicated that they are incompatible with this access package:
1. Click on **Incompatible With**.
+## Monitor and report on access assignments
+
+You can use Azure Monitor workbooks to get insights on how users have been receiving their access.
+
+1. Configure Azure AD to [send audit events to Azure Monitor](entitlement-management-logs-and-reporting.md).
+
+1. The workbook named *Access Package Activity* displays each event related to a particular access package.
+
+ ![View access package events](./media/entitlement-management-logs-and-reporting/view-events-access-package.png)
+
+1. To see if there have been changes to application role assignments for an application that were not created due to access package assignments, then you can select the workbook named *Application role assignment activity*. If you select to omit entitlement activity, then only changes to application roles that were not made by entitlement management are shown. For example, you would see a row if a global administrator had directly assigned a user to an application role.
+
+ ![View app role assignments](./media/entitlement-management-access-package-incompatible/workbook-ara.png)
## Next steps

- [View, add, and remove assignments for an access package](entitlement-management-access-package-assignments.md)
active-directory Entitlement Management Access Package Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-resources.md
For more information, see [Compare groups](/office365/admin/create-groups/compare-groups).
## Add an application resource role
-You can have Azure AD automatically assign users access to an Azure AD enterprise application, including both SaaS applications and your organization's applications federated to Azure AD, when a user is assigned an access package. For applications that integrate with Azure AD through federated single sign-on, Azure AD will issue federation tokens for users assigned to the application.
+You can have Azure AD automatically assign users access to an Azure AD enterprise application, including both SaaS applications and your organization's applications integrated with Azure AD, when a user is assigned an access package. For applications that integrate with Azure AD through federated single sign-on, Azure AD will issue federation tokens for users assigned to the application.
Applications can have multiple roles. When adding an application to an access package, if that application has more than one role, you will need to specify the appropriate role for those users. If you are developing applications, you can read more about how those roles are added to your applications in [How to: Configure the role claim issued in the SAML token for enterprise applications](../develop/active-directory-enterprise-app-role-management.md).
active-directory Entitlement Management Catalog Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-catalog-create.md
$catalog = New-MgEntitlementManagementAccessPackageCatalog -DisplayName "Marketing"
## Add resources to a catalog
-To include resources in an access package, the resources must exist in a catalog. The types of resources you can add are groups, applications, and SharePoint Online sites. The groups can be cloud-created Microsoft 365 Groups or cloud-created Azure AD security groups. The applications can be Azure AD enterprise applications, including both SaaS applications and your own applications federated to Azure AD. The sites can be SharePoint Online sites or SharePoint Online site collections.
+To include resources in an access package, the resources must exist in a catalog. The types of resources you can add are groups, applications, and SharePoint Online sites.
+
+* The groups can be cloud-created Microsoft 365 Groups or cloud-created Azure AD security groups. Groups that originate in an on-premises Active Directory cannot be assigned as resources because their owner or member attributes cannot be changed in Azure AD. Groups that originate in Exchange Online as Distribution groups cannot be modified in Azure AD either.
+* The applications can be Azure AD enterprise applications, including both SaaS applications and your own applications integrated with Azure AD. For more information on selecting appropriate resources for applications with multiple roles, see [add resource roles](entitlement-management-access-package-resources.md#add-resource-roles).
+* The sites can be SharePoint Online sites or SharePoint Online site collections.
**Prerequisite role:** See [Required roles to add resources to a catalog](entitlement-management-delegate.md#required-roles-to-add-resources-to-a-catalog)
active-directory Entitlement Management Logs And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-logs-and-reporting.md
Use the following procedure to view events:
1. Select the workbook named *Access Package Activity*.
-1. In that workbook, select a time range (change to **All** if not sure), and select an access package Id from the drop-down list of all access packages that had activity during that time range. The events related to the access package that occurred during the selected time range will be displayed.
+1. In that workbook, select a time range (change to **All** if not sure), and select an access package ID from the drop-down list of all access packages that had activity during that time range. The events related to the access package that occurred during the selected time range will be displayed.
![View access package events](./media/entitlement-management-logs-and-reporting/view-events-access-package.png)
- Each row includes the time, access package Id, the name of the operation, the object Id, UPN, and the display name of the user who started the operation. Additional details are included in JSON.
+ Each row includes the time, access package ID, the name of the operation, the object ID, UPN, and the display name of the user who started the operation. Additional details are included in JSON.
-1. If you would like to see if there have been changes to application role assignments for an application that were not due to access package assignments, such as by a global administrator directly assigning a user to an application roles, then you can select the workbook named *Application role assignment activity*.
+1. If you would like to see if there have been changes to application role assignments for an application that were not due to access package assignments, such as by a global administrator directly assigning a user to an application role, then you can select the workbook named *Application role assignment activity*.
+ ![View app role assignments](./media/entitlement-management-access-package-incompatible/workbook-ara.png)
## Create custom Azure Monitor queries using the Azure portal
You can create your own queries on Azure AD audit events, including entitlement management events.
$subs | ft
You can reauthenticate and associate your PowerShell session to that subscription using a command such as `Connect-AzAccount -Subscription $subs[0].id`. To learn more about how to authenticate to Azure from PowerShell, including non-interactively, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
-If you have multiple Log Analytics workspaces in that subscription, then the cmdlet [Get-AzOperationalInsightsWorkspace](/powershell/module/Az.OperationalInsights/Get-AzOperationalInsightsWorkspace) returns the list of workspaces. Then you can find the one that has the Azure AD logs. The `CustomerId` field returned by this cmdlet is the same as the value of the "Workspace Id" displayed in the Azure portal in the Log Analytics workspace overview.
+If you have multiple Log Analytics workspaces in that subscription, then the cmdlet [Get-AzOperationalInsightsWorkspace](/powershell/module/Az.OperationalInsights/Get-AzOperationalInsightsWorkspace) returns the list of workspaces. Then you can find the one that has the Azure AD logs. The `CustomerId` field returned by this cmdlet is the same as the value of the "Workspace ID" displayed in the Azure portal in the Log Analytics workspace overview.
```powershell
$wks = Get-AzOperationalInsightsWorkspace
```
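From here, the query can also be run from PowerShell instead of the portal. A minimal sketch, assuming the first workspace in `$wks` is the one receiving the Azure AD logs and that entitlement management events carry the `EntitlementManagement` audit category:

```powershell
# Illustrative sketch: query recent entitlement management audit events
# from the Log Analytics workspace found above.
$query = 'AuditLogs | where Category == "EntitlementManagement" | take 10'
$results = Invoke-AzOperationalInsightsQuery -WorkspaceId $wks[0].CustomerId -Query $query
$results.Results | Format-Table TimeGenerated, OperationName, Result
```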
active-directory Pim How To Activate Role https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-how-to-activate-role.md
If you have been made *eligible* for an administrative role, then you must *acti
This article is for administrators who need to activate their Azure AD role in Privileged Identity Management.
+> [!TIP]
+> You can use the shortcut URL [AKA.MS/PIM](https://aka.ms/PIM) to jump straight to the Azure AD roles selection page.
+ # [New version](#tab/new)
## Activate a role for new version
active-directory Pim How To Change Default Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-how-to-change-default-settings.md
If setting multiple approvers, approval completes as soon as one of them approve
![Select a user or group pane to select approvers](./media/pim-resource-roles-configure-role-settings/resources-role-settings-select-approvers.png)
-1. Select at least one user and then click **Select**. Select at least one approver. There are no default approvers.
+1. Select at least one user and then click **Select**. Select at least one approver. If no specific approvers are selected, Privileged Role Administrators and Global Administrators become the default approvers.
Your selections will appear in the list of selected approvers.
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/permissions-reference.md
Previously updated : 07/23/2021 Last updated : 07/26/2021
Azure Advanced Threat Protection | Monitor and respond to suspicious security ac
Windows Defender ATP and EDR | Assign roles<br>Manage machine groups<br>Configure endpoint threat detection and automated remediation<br>View, investigate, and respond to alerts
[Intune](/intune/role-based-access-control) | Views user, device, enrollment, configuration, and application information<br>Cannot make changes to Intune
[Cloud App Security](/cloud-app-security/manage-admins) | Add admins, add policies and settings, upload logs and perform governance actions
-[Azure Security Center](../../key-vault/managed-hsm/built-in-roles.md) | Can view security policies, view security states, edit security policies, view alerts and recommendations, dismiss alerts and recommendations
[Microsoft 365 service health](/office365/enterprise/view-service-health) | View the health of Microsoft 365 services
[Smart lockout](../authentication/howto-password-smart-lockout.md) | Define the threshold and duration for lockouts when failed sign-in events happen.
[Password Protection](../authentication/concept-password-ban-bad.md) | Configure custom banned password list or on-premises password protection.
Identity Protection Center | Read all security reports and settings information
Windows Defender ATP and EDR | View and investigate alerts. When you turn on role-based access control in Windows Defender ATP, users with read-only permissions such as the Azure AD Security Reader role lose access until they are assigned to a Windows Defender ATP role.
[Intune](/intune/role-based-access-control) | Views user, device, enrollment, configuration, and application information. Cannot make changes to Intune.
[Cloud App Security](/cloud-app-security/manage-admins) | Has read-only permissions and can manage alerts
-[Azure Security Center](../../key-vault/managed-hsm/built-in-roles.md) | Can view recommendations and alerts, view security policies, view security states, but cannot make changes
[Microsoft 365 service health](/office365/enterprise/view-service-health) | View the health of Microsoft 365 services
> [!div class="mx-tableFixed"]
active-directory Github Enterprise Cloud Enterprise Account Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/github-enterprise-cloud-enterprise-account-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
    a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
- `https://github.com/orgs/<ENTERPRISE-SLUG>`
+ `https://github.com/enterprises/<ENTERPRISE-SLUG>`
b. In the **Reply URL** text box, type a URL using the following pattern:
- `https://github.com/orgs/<ENTERPRISE-SLUG>/saml/consume`
+ `https://github.com/enterprises/<ENTERPRISE-SLUG>/saml/consume`
1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
    In the **Sign on URL** text box, type a URL using the following pattern:
- `https://github.com/orgs/<ENTERPRISE-SLUG>/sso`
+ `https://github.com/enterprises/<ENTERPRISE-SLUG>/sso`
> [!NOTE]
> Replace `<ENTERPRISE-SLUG>` with the actual name of your GitHub Enterprise Account.
After you enable SAML SSO for your GitHub Enterprise Account, SAML SSO is enable
## Test SSO with another enterprise account owner or organization member account
-After the SAML integration is set up for the GitHub enterprise account (which also applies to the GitHub organizations in the enterprise account), other enterprise account owners who are assigned to the app in Azure AD should be able to navigate to the GitHub enterprise account URL (`https://github.com/orgs/<enterprise account>`), authenticate via SAML, and access the policies and settings under the GitHub enterprise account.
+After the SAML integration is set up for the GitHub enterprise account (which also applies to the GitHub organizations in the enterprise account), other enterprise account owners who are assigned to the app in Azure AD should be able to navigate to the GitHub enterprise account URL (`https://github.com/enterprises/<enterprise account>`), authenticate via SAML, and access the policies and settings under the GitHub enterprise account.
An organization owner for an organization in an enterprise account should be able to [invite a user to join their GitHub organization](https://docs.github.com/en/free-pro-team@latest/github/setting-up-and-managing-organizations-and-teams/inviting-users-to-join-your-organization). Sign in to GitHub.com with an organization owner account and follow the steps in the article to invite `B.Simon` to the organization. A GitHub user account will need to be created for `B.Simon` if one does not already exist.
active-directory User Help Auth App Add Work School Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/user-help-auth-app-add-work-school-account.md
Previously updated : 05/11/2021 Last updated : 07/26/2021
To add an account by signing into your work or school account using your credent
- If you don't have enough authentication methods on your account to get a strong authentication token, you can't proceed to add an account.
- - If you receive the message `You might be signing in from a location that is restricted by your admin`, your admin hasn't enabled this feature for you. You can try to set up your account by scanning a QR Code from the **Additional security verification** page or in [Security info](https://mysignins.microsoft.com/security-info).
+ - If you receive the message `You might be signing in from a location that is restricted by your admin`, your admin hasn't enabled this feature for you and probably set up a Security Information Registration Conditional Access policy. Contact the administrator for your work or school account to use this authentication method.
-1. If you are allowed by your admin to use phone sign-in using the Authenticator app, you'll be able to go through device registration to get set up for passwordless phone sign-in and Azure Multi-Factor Authentication (MFA). However, you'll still be able to set up MFA whether or not you are enabled for phone sign-in.
+1. If you are allowed by your admin to use phone sign-in using the Authenticator app, you'll be able to go through device registration to get set up for passwordless phone sign-in and Azure AD Multi-Factor Authentication. However, you'll still be able to set up multifactor authentication whether or not you are enabled for phone sign-in.
-1. At this point, you could be asked to scan a QR Code provided by your organization to set up an on-premises multi-factor authentication account in the app. You're required to do this only if your organization uses on-premises MFA Server.
+1. At this point, you might be asked to scan a QR Code provided by your organization to set up an on-premises multi-factor authentication account in the app. You're required to do this only if your organization uses on-premises MFA Server.
1. On your device, tap the account and verify in the full-screen view that your account is correct. For additional security, the verification code changes every 30 seconds, preventing someone from using a code multiple times.
active-directory Introduction To Verifiable Credentials Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/introduction-to-verifiable-credentials-architecture.md
This architectural overview introduces the capabilities and components of the Az
## Approaches to identity
-Today most organizations use centralized identity systems to provide employees credentials. They also use use various methods to bring customers, partners, vendors, and relying parties into the organization's trust boundaries. These methods include federation, creating and managing guest accounts with systems like Azure AD B2B, and creating explicit trusts with relying parties. Most business relationships have a digital component, so enabling some form of trust between organizations requires significant effort.
+Today most organizations use centralized identity systems to provide employees credentials. They also use various methods to bring customers, partners, vendors, and relying parties into the organization's trust boundaries. These methods include federation, creating and managing guest accounts with systems like Azure AD B2B, and creating explicit trusts with relying parties. Most business relationships have a digital component, so enabling some form of trust between organizations requires significant effort.
### Centralized identity systems
Terminology for verifiable credentials (VCs) might be confusing if you're not fa
"A ***decentralized identifier document***, also referred to as a ***DID document***, is a document that is accessible using a verifiable data registry and contains information related to a specific decentralized identifier, such as the associated repository and public key information."
-* In the scenario above, both the issuer and verifier have a DID, and a DID document. The DID document contains the public key, and the list of DNS web domains associated with DID (also known as linked domains).
+* In the scenario above, both the issuer and verifier have a DID, and a DID document. The DID document contains the public key, and the list of DNS web domains associated with the DID (also known as linked domains).
* Woodgrove (issuer) signs their employees' VCs with its private key; similarly, Proseware (verifier) signs requests to present a VC using its key, which is also associated with its DID.
These use cases demonstrate how centralized identities and decentralized identit
**Awareness**: Alice is interested in working for Woodgrove, Inc. and visits Woodgrove's career website.
-**Activation**: The Woodgrove site presents Alice with a method to prove their identity by promptinthem with a QR code or a deep link to visit its trusted identity proofing partner, Adatum.
+**Activation**: The Woodgrove site presents Alice with a method to prove their identity by prompting them with a QR code or a deep link to visit its trusted identity proofing partner, Adatum.
**Request and upload**: Adatum requests proof of identity from Alice. Alice takes a selfie and a driver's license picture and uploads them to Adatum.
By combining centralized and decentralized identity architectures for onboarding
As an employee, Alice is operating inside of the trust boundary of Woodgrove. Woodgrove acts as the identity provider (IDP) and maintains complete control of the identity and the configuration of the apps Alice uses to interact within the Woodgrove trust boundary. To use resources in the Azure AD trust boundary, Alice provides potentially multiple forms of proof of identification to log on to Woodgrove's trust boundary and access the resources inside of Woodgrove's technology environment. This is a typical scenario that is well served using a centralized identity architecture.
-* Woodgrove manages the trust boundary, and using good security practices provides the least-privileged level of access to Alice based on the job performed. To maintain a strong security posture, and potentially for compliance reasons, Woodgrove must also be able to track employees' permissions and access to resources and must be able to revoke permissions when the employment is terminated.
+* Woodgrove manages the trust boundary and using good security practices provides the least-privileged level of access to Alice based on the job performed. To maintain a strong security posture, and potentially for compliance reasons, Woodgrove must also be able to track employees' permissions and access to resources and must be able to revoke permissions when the employment is terminated.
* Alice only uses the credential that Woodgrove maintains to access Woodgrove resources. Alice has no need to track when the credential is used since the credential is managed by Woodgrove and only used with Woodgrove resources. The identity is only valid inside of the Woodgrove trust boundary when access to Woodgrove resources is necessary, so Alice has no need to possess the credential.
In this flow, a holder interacts with a relying party (RP) to present a VC as pa
1. The wallet downloads the request from the link. The request includes:
- * a [standards based request for credentials](https://identity.foundation/presentation-exchange/) of a schema or credentialType.
+ * a [standards based request for credentials](https://identity.foundation/presentation-exchange/) of a schema or credential type.
* the DID of the RP, which the wallet looks up in ION.
active-directory Plan Issuance Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/plan-issuance-solution.md
Each issuer has a single key set used for signing, updating, and recovery. This
* Rules are an issuer-defined model that describes the required inputs of a verifiable credential, the trusted sources of the inputs, and the mapping of input claims to output claims.
- * Input – Are a subset of the model in the rules file for client consumption. The subset must describe the set of inputs, where to obtain the inputs and the endpoint to call to obtain a verifiable credential.
+ * **Input** – A subset of the model in the rules file for client consumption. It must describe the set of inputs, where to obtain them, and the endpoint to call to obtain a verifiable credential.
* Rules and display files for different credentials can be configured to use different containers, subscriptions, and storage. For example, you can delegate permissions to different teams that own management of specific VCs.
With Azure AD Verifiable Credentials, the most common credential use cases are:
* a user's selfie
-* verification of liveness.
+* verification of liveness
This kind of credential is a good fit for identity onboarding scenarios of new employees, partners, service providers, students, and other instances where identity verification is essential.
In addition to the industry-specific standards and schemas that might be applica
* **Minimize private information**: Meet the use cases with the minimal amount of private information necessary. For example, a VC used for e-commerce websites that offer discounts to employees and alumni can be fulfilled by presenting the credential with just the first and last name claims. Additional information such as hiring date, title, and department isn't needed.
-* **Favor abstract claims**: Each claim should meet the need while minimizing the detail. For example, a claim called "ageOver" with discrete values such as "13", "21", "60", is more abstract than a date of birth claim.
+* **Favor abstract claims**: Each claim should meet the need while minimizing the detail. For example, a claim named "ageOver" with discrete values such as "13", "21", and "60" is more abstract than a date of birth claim.
* **Plan for revocability**: We recommend you define an index claim to enable mechanisms to find and revoke credentials. You are limited to defining one index claim per contract. It is important to note that values for indexed claims are not stored in the backend, only a hash of the claim value. For more information, see [Revoke a previously issued verifiable credential](../verifiable-credentials/how-to-issuer-revoke.md).
aks Aks Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/aks-migration.md
Some open-source tools can help you create managed disks and migrate volumes bet
We recommend that you use your existing Continuous Integration (CI) and Continuous Delivery (CD) pipeline to deploy a known-good configuration to AKS. You can use Azure Pipelines to [build and deploy your applications to AKS](/azure/devops/pipelines/ecosystems/kubernetes/aks-template). Clone your existing deployment tasks and ensure that `kubeconfig` points to the new AKS cluster.
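For example, repointing `kubeconfig` can be scripted. A hedged sketch with placeholder resource names, assuming the Azure CLI and `kubectl` are installed:

```powershell
# Illustrative sketch: merge credentials for the new AKS cluster into
# kubeconfig and confirm it is the active context (names are placeholders).
az aks get-credentials --resource-group "rg-aks-target" --name "aks-target"
kubectl config current-context
```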
-If that's not possible, export resource definitions from your existing Kubernetes cluster and then apply them to AKS. You can use `kubectl` to export objects.
+If that's not possible, export resource definitions from your existing Kubernetes cluster and then apply them to AKS. You can use `kubectl` to export objects. For example:
```console
-kubectl get deployment -o=yaml --export > deployments.yaml
+kubectl get deployment -o yaml > deployments.yaml
```
+Be sure to examine the output and remove any unnecessary live data fields.
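A rough sketch of a bulk export, assuming `kubectl`'s current context points at the source cluster (the resource kinds listed are only examples):

```powershell
# Illustrative sketch: export several object kinds, one file per kind.
foreach ($kind in "deployment", "service", "configmap") {
    kubectl get $kind --all-namespaces -o yaml > "$kind.yaml"
}
# Review each file and strip server-populated fields such as status,
# resourceVersion, uid, and creationTimestamp before applying to AKS.
```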
+ ### Moving existing resources to another region
You may want to move your AKS cluster to a [different region supported by AKS][region-availability]. We recommend that you create a new cluster in the other region, then deploy your resources and applications to your new cluster.
app-service Quickstart Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-custom-container.md
Create a container registry by following the instructions in [Quickstart: Create
![sign in to Azure](./media/quickstart-docker/sign-in.png)
-1. In the [Status Bar](https://code.visualstudio.com/docs/getstarted/userinterface) at the bottom, verify that your Azure account email address. In the **APP SERVICE** explorer, your subscription should be displayed.
+1. In the [Status Bar](https://code.visualstudio.com/docs/getstarted/userinterface) at the bottom, verify your Azure account email address. In the **APP SERVICE** explorer, your subscription should be displayed.
1. In the Activity Bar, select the **Docker** logo. In the **REGISTRIES** explorer, verify that the container registry you created appears.
In this Dockerfile, the parent image is one of the built-in Java containers of A
3. In the image tag box, specify the tag you want in the following format: `<acr-name>.azurecr.io/<image-name>:<tag>`, where `<acr-name>` is the name of the container registry you created. Press **Enter**.
-4. When the image finishes building, click **Refresh** at the top of the **IMAGES** explorer and verify the image is built successfully.
+4. When the image finishes building, click **Refresh** at the top of the **IMAGES** explorer and verify that the image is built successfully.
![Screenshot shows the built image with tag.](./media/quickstart-docker/built-image.png)
In this Dockerfile, the parent image is one of the built-in Java containers of A
1. In the Activity Bar, click the **Docker** icon. In the **IMAGES** explorer, find the image you just built.
1. Expand the image, right-click on the tag you want, and click **Push**.
1. Make sure the image tag begins with `<acr-name>.azurecr.io` and press **Enter**.
-1. When Visual Studio Code finishes pushing the image to your container registry, click **Refresh** at the top of the **REGISTRIES** explorer and verify the image is pushed successfully.
+1. When Visual Studio Code finishes pushing the image to your container registry, click **Refresh** at the top of the **REGISTRIES** explorer and verify that the image is pushed successfully.
![Screenshot shows the image deployed to Azure container registry.](./media/quickstart-docker/image-in-registry.png)
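The same push can be done from the command line. A hedged sketch with placeholder registry and image names:

```powershell
# Illustrative sketch: tag and push an image to Azure Container Registry.
# Replace myregistry and myapp with your registry and image names.
az acr login --name myregistry
docker tag myapp:latest myregistry.azurecr.io/myapp:latest
docker push myregistry.azurecr.io/myapp:latest
```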
app-service Terraform Secure Backend Frontend https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/scripts/terraform-secure-backend-frontend.md
Browse to the [Azure documentation](/azure/developer/terraform/) to learn how to
To use this file, you must change the name property for the frontwebapp and backwebapp resources (the web app name must be a globally unique DNS name).
```hcl
+terraform {
+ required_providers {
+ azurerm = {
+ source = "hashicorp/azurerm"
+ version = "~>2.0"
+ }
+ }
+}
provider "azurerm" {
- version = "~>2.0"
  features {}
}
resource "azurerm_private_endpoint" "privateendpoint" {
## Next steps
-> [Learn more about using Terraform in Azure](/azure/developer/terraform/)
+> [Learn more about using Terraform in Azure](/azure/developer/terraform/)
azure-functions Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-overview.md
The following are a common, _but by no means exhaustive_, set of scenarios for A
| **Process file uploads** | Run code when a file is uploaded or changed in [blob storage](./functions-bindings-storage-blob.md) |
| **Build a serverless workflow** | Chain a series of functions together using [durable functions](./durable/durable-functions-overview.md) |
| **Respond to database changes** | Run custom logic when a document is created or updated in [Cosmos DB](./functions-bindings-cosmosdb-v2.md) |
-| **Run scheduled tasks** | Execute code at [set times](./functions-bindings-timer.md) |
+| **Run scheduled tasks** | Execute code on [pre-defined timed intervals](./functions-bindings-timer.md) |
| **Create reliable message queue systems** | Process message queues using [Queue Storage](./functions-bindings-storage-queue.md), [Service Bus](./functions-bindings-service-bus.md), or [Event Hubs](./functions-bindings-event-hubs.md) |
| **Analyze IoT data streams** | Collect and process [data from IoT devices](./functions-bindings-event-iot.md) |
| **Process data in real time** | Use [Functions and SignalR](./functions-bindings-signalr-service.md) to respond to data in the moment |
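One way to picture the **Run scheduled tasks** scenario above is a timer-triggered function. A minimal PowerShell sketch of such a function's `run.ps1`, assuming a `function.json` binding named `Timer` of type `timerTrigger` with an NCRONTAB schedule (both binding name and schedule are assumptions, not from the article):

```powershell
# Illustrative run.ps1 for a timer trigger; assumes function.json declares a
# "timerTrigger" binding named "Timer" with "schedule": "0 */5 * * * *"
# (every five minutes, NCRONTAB format).
param($Timer)

if ($Timer.IsPastDue) {
    Write-Host "Timer is running behind schedule."
}
Write-Host "Scheduled task ran at $((Get-Date).ToUniversalTime().ToString('o'))"
```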
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
Title: Azure Services in FedRAMP and DoD SRG Audit Scope
description: This article contains tables for Azure Public and Azure Government that illustrate what FedRAMP (Moderate vs. High) and DoD SRG (Impact level 2, 4, 5 or 6) audit scope a given service has reached.
Previously updated : 07/19/2021 Last updated : 07/26/2021
This article provides a detailed list of in-scope cloud services across Azure Pu
| [Azure Active Directory B2C](https://azure.microsoft.com/services/active-directory-b2c/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Azure Active Directory Domain Services](https://azure.microsoft.com/services/active-directory-ds/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Azure Active Directory Provisioning Service](../../active-directory/app-provisioning/user-provisioning.md)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Advanced Threat Protection](https://azure.microsoft.com/features/azure-advanced-threat-protection/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
+| [Microsoft Defender for Identity](https://azure.microsoft.com/features/azure-advanced-threat-protection/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Azure Advisor](https://azure.microsoft.com/services/advisor/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Azure Analysis Services](https://azure.microsoft.com/services/analysis-services/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Azure App Configuration](https://azure.microsoft.com/services/app-configuration/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
This article provides a detailed list of in-scope cloud services across Azure Pu
| [Azure Front Door](https://azure.microsoft.com/services/frontdoor/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Azure HPC Cache](https://azure.microsoft.com/services/hpc-cache/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Azure Information Protection](https://azure.microsoft.com/services/information-protection/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Intune](/intune/what-is-intune) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
+| [Microsoft Intune](/intune/what-is-intune) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Azure IoT Security](https://azure.microsoft.com/overview/iot/security/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Azure Internet Analyzer](https://azure.microsoft.com/services/internet-analyzer/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
This article provides a detailed list of in-scope cloud services across Azure Pu
| [Customer Lockbox](../../security/fundamentals/customer-lockbox-overview.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Data Catalog](https://azure.microsoft.com/services/data-catalog/) | | | | :heavy_check_mark: |
| [Data Factory](https://azure.microsoft.com/services/data-factory/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [D365 Integrator App](/power-platform/admin/data-integrator) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
+| [Data Integrator](/power-platform/admin/data-integrator) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Dynamics 365 Commerce](https://dynamics.microsoft.com/commerce/overview/)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Dynamics 365 Customer Service](https://dynamics.microsoft.com/customer-service/overview/)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Dynamics 365 Field Service](https://dynamics.microsoft.com/field-service/overview/)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Dynamics 365 Finance](https://dynamics.microsoft.com/finance/overview/)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Dynamics 365 Guides](/dynamics365/mixed-reality/guides/get-started)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Dynamics 365 Supply Chain](https://dynamics.microsoft.com/supply-chain-management/overview/)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Dynamics 365 Service Omni-Channel Engagement Hub](/dynamics365/omnichannel/introduction-omnichannel) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Dynamics 365 Customer Engagement (Common Data Service)](/powerapps/maker/common-data-service/data-platform-intro) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
+| [Dynamics 365 Chat (Dynamics 365 Omni-Channel Engagement Hub)](/dynamics365/omnichannel/introduction-omnichannel) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
+| [Dataverse (Common Data Service)](/powerapps/maker/common-data-service/data-platform-intro) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Event Grid](https://azure.microsoft.com/services/event-grid/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Microsoft Defender for Endpoint](/windows/security/threat-protection/microsoft-defender-atp/microsoft-defender-advanced-threat-protection) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Event Hubs](https://azure.microsoft.com/services/event-hubs/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [ExpressRoute](https://azure.microsoft.com/services/expressroute/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Flow](/flow/getting-started) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
+| [Power Automate](https://powerplatform.microsoft.com/power-automate/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Functions](https://azure.microsoft.com/services/functions/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [GitHub AE](https://docs.github.com/en/github-ae@latest/admin/overview/about-github-ae) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Guest Configuration](../../governance/policy/concepts/guest-configuration.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
This article provides a detailed list of in-scope cloud services across Azure Pu
| [Microsoft Azure Peering Service](../../peering-service/about.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Microsoft Cloud App Security](/cloud-app-security/what-is-cloud-app-security) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Microsoft Graph](https://developer.microsoft.com/en-us/graph) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Microsoft Health Bot](/healthbot/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
+| [Azure Health Bot](/healthbot/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Microsoft Managed Desktop](https://www.microsoft.com/en-us/microsoft-365/modern-desktop/enterprise/microsoft-managed-desktop) | | | | |
-| [Microsoft PowerApps](/powerapps/powerapps-overview) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Microsoft PowerApps Portal](https://powerapps.microsoft.com/portals/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
+| [Power Apps](/powerapps/powerapps-overview) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
+| [Power Apps Portal](https://powerapps.microsoft.com/portals/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Microsoft Stream](/stream/overview) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Microsoft Threat Experts](/windows/security/threat-protection/microsoft-defender-atp/microsoft-threat-experts) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
This article provides a detailed list of in-scope cloud services across Azure Pu
| [Web Apps (App Service)](https://azure.microsoft.com/services/app-service/web/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Web Application Firewall](https://azure.microsoft.com/services/web-application-firewall/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Windows 10 IoT Core Services](https://azure.microsoft.com/services/windows-10-iot-core/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Windows Virtual Desktop](https://azure.microsoft.com/services/virtual-desktop/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
+| [Azure Virtual Desktop](https://azure.microsoft.com/services/virtual-desktop/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
**&ast;** FedRAMP high certification covers Datacenter Infrastructure Services & Databox Pod and Disk Service, which are the online software components supporting the Data Box hardware appliance.
This article provides a detailed list of in-scope cloud services across Azure Pu
| [Azure Active Directory (Free and Basic)](https://azure.microsoft.com/services/active-directory/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| [Azure Active Directory (Premium P1 + P2)](https://azure.microsoft.com/services/active-directory/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| [Azure Active Directory Domain Services](https://azure.microsoft.com/services/active-directory-ds/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Advanced Threat Protection](https://azure.microsoft.com/features/azure-advanced-threat-protection/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
+| [Microsoft Defender for Identity](https://azure.microsoft.com/features/azure-advanced-threat-protection/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
| [Azure Advisor](https://azure.microsoft.com/services/advisor/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
| [Azure Analysis Services](https://azure.microsoft.com/services/analysis-services/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| [Azure API for FHIR](https://azure.microsoft.com/services/azure-api-for-fhir/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
This article provides a detailed list of in-scope cloud services across Azure Pu
| [Azure Databricks](https://azure.microsoft.com/services/databricks/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
| [Azure DB for MySQL](https://azure.microsoft.com/services/mysql/)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| [Azure DB for PostgreSQL](https://azure.microsoft.com/services/postgresql/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Azure DB for MariaDB](https://azure.microsoft.com/services/mariadb/) | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: |
+| [Azure DB for MariaDB](https://azure.microsoft.com/services/mariadb/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
| [Azure DDoS Protection](https://azure.microsoft.com/services/ddos-protection/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
| [Azure Dedicated HSM](https://azure.microsoft.com/services/azure-dedicated-hsm/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
| [Azure DevTest Labs](https://azure.microsoft.com/services/devtest-lab/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
This article provides a detailed list of in-scope cloud services across Azure Pu
| [Azure Front Door](https://azure.microsoft.com/services/frontdoor/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| [Azure HPC Cache](https://azure.microsoft.com/services/hpc-cache/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
| [Azure Information Protection](https://azure.microsoft.com/services/information-protection/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Intune](/intune/what-is-intune) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
+| [Microsoft Intune](/intune/what-is-intune) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
| [Azure IoT Security](https://azure.microsoft.com/overview/iot/security/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
| [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
| [Azure Lighthouse](https://azure.microsoft.com/services/azure-lighthouse/)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Lab Services](https://azure.microsoft.com/services/lab-services/) | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: |
+| [Azure Lab Services](https://azure.microsoft.com/services/lab-services/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| [Azure Managed Applications](https://azure.microsoft.com/services/managed-applications/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Azure Maps](https://azure.microsoft.com/services/azure-maps/)| :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: |
+| [Azure Maps](https://azure.microsoft.com/services/azure-maps/)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
| [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
| [Azure Monitor](https://azure.microsoft.com/services/monitor/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| [Azure NetApp Files](https://azure.microsoft.com/services/netapp/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
This article provides a detailed list of in-scope cloud services across Azure Pu
| [Container Instances](https://azure.microsoft.com/services/container-instances/)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
| [Content Delivery Network](https://azure.microsoft.com/services/cdn/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
| [Customer Lockbox](../../security/fundamentals/customer-lockbox-overview.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [D365 Integrator App](/power-platform/admin/data-integrator) | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: |
-| [Dynamics 365 Service Omni-Channel Engagement Hub](/dynamics365/omnichannel/introduction-omnichannel) | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: |
-| [Dynamics 365 Forms Pro](/forms-pro/get-started) | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: |
-| [Dynamics 365 Customer Insights](/dynamics365/ai/customer-insights/overview) | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: |
-| [Dynamics 365 Customer Engagement (Common Data Service)](/dynamics365/customerengagement/on-premises/overview) | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: |
+| [Data Integrator](/power-platform/admin/data-integrator) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
+| [Dynamics 365 Chat (Dynamics 365 Service Omni-Channel Engagement Hub)](/dynamics365/omnichannel/introduction-omnichannel) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
+| [Dynamics 365 Customer Voice](/forms-pro/get-started) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
+| [Dynamics 365 Customer Insights](/dynamics365/ai/customer-insights/overview) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
+| [Dataverse (Common Data Service)](/dynamics365/customerengagement/on-premises/overview) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| [Dynamics 365 Customer Service](/dynamics365/customer-service/overview) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Dynamics 365 Field Service](/dynamics365/field-service/overview) | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: |
-| [Dynamics 365 Project Service Automation](/dynamics365/project-service/overview) | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: |
-| [Dynamics 365 Sales](/dynamics365/sales-enterprise/overview) | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: |
+| [Dynamics 365 Field Service](/dynamics365/field-service/overview) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| [Dynamics 365 Project Service Automation](/dynamics365/project-service/overview) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| [Dynamics 365 Sales](/dynamics365/sales-enterprise/overview) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| [Event Hubs](https://azure.microsoft.com/services/event-hubs/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Export to Data Lake service](/powerapps/maker/data-platform/export-to-data-lake) | :heavy_check_mark: | | | | :heavy_check_mark: |
+| [Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/export-to-data-lake) | :heavy_check_mark: | | | | :heavy_check_mark: |
| [ExpressRoute](https://azure.microsoft.com/services/expressroute/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| [Power Automate](/flow/getting-started) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| [Functions](https://azure.microsoft.com/services/functions/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
This article provides a detailed list of in-scope cloud services across Azure Pu
| [Microsoft Defender for Endpoint](/windows/security/threat-protection/microsoft-defender-atp/microsoft-defender-advanced-threat-protection) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| [Microsoft 365 Defender](/microsoft-365/security/defender/microsoft-365-defender?view=o365-worldwide) | :heavy_check_mark: | | | | :heavy_check_mark: |
| [Microsoft Graph](/graph/overview) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Microsoft PowerApps](/powerapps/powerapps-overview) | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: |
-| [Microsoft PowerApps Portal](https://powerapps.microsoft.com/portals/) | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: |
-| [Microsoft Stream](/stream/overview) | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: |
+| [Power Apps](/powerapps/powerapps-overview) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| [Power Apps Portal](https://powerapps.microsoft.com/portals/) | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: |
+| [Microsoft Stream](/stream/overview) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| [Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: |
| [Network Watcher](https://azure.microsoft.com/services/network-watcher/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| [Network Watcher (Traffic Analytics)](../../network-watcher/traffic-analytics.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| [Notification Hubs](https://azure.microsoft.com/services/notification-hubs/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
| [Planned Maintenance](https://docs.microsoft.com/azure/virtual-machines/maintenance-control-portal) | :heavy_check_mark: | | | | :heavy_check_mark: |
-| [Power BI](https://powerbi.microsoft.com/) | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: |
+| [Power BI](https://powerbi.microsoft.com/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| [Power BI Embedded](https://azure.microsoft.com/services/power-bi-embedded/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| [Power Virtual Agents](/power-virtual-agents/fundamentals-what-is-power-virtual-agents) | :heavy_check_mark: | | | | :heavy_check_mark: |
| [Redis Cache](https://azure.microsoft.com/services/cache/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
This article provides a detailed list of in-scope cloud services across Azure Pu
| [VPN Gateway](https://azure.microsoft.com/services/vpn-gateway/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| [Web Apps (App Service)](https://azure.microsoft.com/services/app-service/web/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| [Web Application Firewall](https://azure.microsoft.com/services/web-application-firewall/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Windows Virtual Desktop](https://azure.microsoft.com/services/virtual-desktop/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
+| [Azure Virtual Desktop](https://azure.microsoft.com/services/virtual-desktop/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
**&ast;** DoD CC SRG IL5 (Azure Gov) column shows DoD CC SRG IL5 certification status of services in Azure Government. For details, please refer to [Azure Government Isolation Guidelines for Impact Level 5](../documentation-government-impact-level-5.md)
azure-government Documentation Government Impact Level 5 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-impact-level-5.md
Previously updated : 07/23/2021 Last updated : 07/26/2021
#Customer intent: As a DoD mission owner, I want to know how to implement a workload at Impact Level 5 in Microsoft Azure Government.
# Isolation guidelines for Impact Level 5 workloads
Azure Government is available to US federal, state, local, and tribal government
## Principles and approach
-You need to address two key areas for Azure services in IL5 scope: storage isolation and compute isolation. We'll focus in this article on how Azure services can help isolate the compute and storage of IL5 data. The SRG allows for a shared management and network infrastructure. **This article is focused on Azure Government compute and storage isolation approaches for US Gov Arizona, US Gov Texas, and US Gov Virginia regions.** If an Azure service is available in Azure Government DoD regions and authorized at IL5, then it is by default suitable for IL5 workloads with no extra isolation configuration required. Azure Government DoD regions are reserved for DoD agencies and their partners, enabling physical separation from non-DoD tenants by design.
+You need to address two key areas for Azure services in IL5 scope: compute isolation and storage isolation. We'll focus in this article on how Azure services can help isolate the compute and storage of IL5 data. The SRG allows for a shared management and network infrastructure. **This article is focused on Azure Government compute and storage isolation approaches for US Gov Arizona, US Gov Texas, and US Gov Virginia regions.** If an Azure service is available in Azure Government DoD regions and authorized at IL5, then it is by default suitable for IL5 workloads with no extra isolation configuration required. Azure Government DoD regions are reserved for DoD agencies and their partners, enabling physical separation from non-DoD tenants by design.
> [!IMPORTANT]
> You are responsible for designing and deploying your applications to meet DoD IL5 compliance requirements. In doing so, you should not include sensitive or restricted information in Azure resource names, as explained in **[Considerations for naming Azure resources](./documentation-government-concept-naming-resources.md).**
Be sure to review the entry for each service you're using and ensure that all is
## AI + machine learning
-For AI and machine learning services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=project-bonsai,genomics,search,bot-service,databricks,machine-learning-service,cognitive-services&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
+For AI and machine learning services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=project-bonsai,genomics,search,bot-service,databricks,machine-learning-service,cognitive-services&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
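A recurring theme in the guidance that follows is supplying a customer-managed key (CMK) from Azure Key Vault. As a minimal sketch of that shared prerequisite, assuming the Az PowerShell modules and placeholder names and region:

```powershell
# Illustrative sketch: create a Key Vault (with purge protection, which CMK
# scenarios generally require) and a software-protected key.
New-AzKeyVault -Name "kv-il5-example" -ResourceGroupName "rg-il5-example" `
    -Location "usgovvirginia" -EnablePurgeProtection
$key = Add-AzKeyVaultKey -VaultName "kv-il5-example" -Name "cmk-example" -Destination "Software"
$key.Id  # key identifier to reference from each service's CMK settings
```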
### [Azure Cognitive Search](https://azure.microsoft.com/services/search/)
-Azure Cognitive Search supports Impact Level 5 workloads in Azure Government with this configuration:
- - Configure encryption at rest of content in Azure Cognitive Search by [using customer-managed keys in Azure Key Vault](../search/search-security-manage-encryption-keys.md).
### [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/)
-Azure Machine Learning supports Impact Level 5 workloads in Azure Government with this configuration:
- - Configure encryption at rest of content in Azure Machine Learning by using customer-managed keys in Azure Key Vault. Azure Machine Learning stores snapshots, output, and logs in the Azure Blob Storage account that's associated with the Azure Machine Learning workspace and customer subscription. All the data stored in Azure Blob Storage is [encrypted at rest with Microsoft-managed keys](../machine-learning/concept-enterprise-security.md). Customers can use their own keys for data stored in Azure Blob Storage. See [Configure encryption with customer-managed keys stored in Azure Key Vault](../storage/common/customer-managed-keys-configure-key-vault.md).
### [Cognitive
-The Azure Cognitive Services Content Moderator service supports Impact Level 5 workloads in Azure Government with this configuration:
- - Configure encryption at rest of content in the Content Moderator service by [using customer-managed keys in Azure Key Vault](../cognitive-services/content-moderator/encrypt-data-at-rest.md).
### [Cognitive
-Custom Vision supports Impact Level 5 workloads in Azure Government with this configuration:
- - Configure encryption at rest of content in Cognitive Services Custom Vision [using customer-managed keys in Azure Key Vault](../cognitive-services/custom-vision-service/encrypt-data-at-rest.md#customer-managed-keys-with-azure-key-vault).
### [Cognitive
-The Cognitive Services Face service supports Impact Level 5 workloads in Azure Government with this configuration:
- - Configure encryption at rest of content in the Face service by [using customer-managed keys in Azure Key Vault](../cognitive-services/face/encrypt-data-at-rest.md).
### [Cognitive
-The Cognitive Services Language Understanding service supports Impact Level 5 workloads in Azure Government with this configuration:
- - Configure encryption at rest of content in the Language Understanding service by [using customer-managed keys in Azure Key Vault](../cognitive-services/luis/encrypt-data-at-rest.md).
### [Cognitive
-Personalizer supports Impact Level 5 workloads in Azure Government with this configuration:
- - Configure encryption at rest of content in Cognitive Services Personalizer [using customer-managed keys in Azure Key Vault](../cognitive-services/personalizer/encrypt-data-at-rest.md).
### [Cognitive
-Cognitive Services QnA Maker supports Impact Level 5 workloads in Azure Government with this configuration:
- - Configure encryption at rest of content in Cognitive Services QnA Maker [using customer-managed keys in Azure Key Vault](../cognitive-services/qnamaker/encrypt-data-at-rest.md).
### [Cognitive
-The Cognitive Services Translator service supports Impact Level 5 workloads in Azure Government with this configuration:
- - Configure encryption at rest of content in the Translator service by [using customer-managed keys in Azure Key Vault](../cognitive-services/translator/encrypt-data-at-rest.md).
### [Cognitive
-Cognitive Services Speech Services supports Impact Level 5 workloads in Azure Government with this configuration:
- - Configure encryption at rest of content in Speech Services by [using customer-managed keys in Azure Key Vault](../cognitive-services/speech-service/speech-encryption-of-data-at-rest.md).
## Analytics
-For Analytics services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=data-share,power-bi-embedded,analysis-services,event-hubs,data-lake-analytics,storage,data-catalog,monitor,data-factory,synapse-analytics,stream-analytics,databricks,hdinsight&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
+For Analytics services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=data-share,power-bi-embedded,analysis-services,event-hubs,data-lake-analytics,storage,data-catalog,monitor,data-factory,synapse-analytics,stream-analytics,databricks,hdinsight&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
### [Azure Databricks](https://azure.microsoft.com/services/databricks/)
-Azure Databricks supports Impact Level 5 workloads in Azure Government with this configuration:
- - Azure Databricks can be deployed to existing storage accounts that have enabled appropriate [Storage encryption with Key Vault managed keys](#storage-encryption-with-key-vault-managed-keys).
- Configure customer-managed keys (CMK) for your [Azure Databricks Workspace](/azure/databricks/security/keys/customer-managed-key-notebook) and [Databricks File System](/azure/databricks/security/keys/customer-managed-keys-dbfs/) (DBFS).
### [Azure Data Explorer](https://azure.microsoft.com/services/data-explorer/)
-Azure Data Explorer supports Impact Level 5 workloads in Azure Government with this configuration:
- - Data in Azure Data Explorer clusters in Azure is secured and encrypted with Microsoft-managed keys by default. For extra control over encryption keys, you can supply customer-managed keys to use for data encryption and manage [encryption of your data](/azure/data-explorer/security#data-encryption) at the storage level with your own keys.
### [Azure Stream Analytics](https://azure.microsoft.com/services/stream-analytics/)
-Azure Stream Analytics supports Impact Level 5 workloads in Azure Government with this configuration:
- - Configure encryption at rest of content in Azure Stream Analytics by [using customer-managed keys in Azure Key Vault](../stream-analytics/data-protection.md).
### [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/)
-Azure Synapse Analytics supports Impact Level 5 workloads in Azure Government with this configuration:
-- Add transparent data encryption with customer-managed keys via Azure Key Vault. For more information, see [Azure SQL transparent data encryption](../azure-sql/database/transparent-data-encryption-byok-overview.md).
- > [!NOTE]
- > The instructions to enable this configuration are the same as the instructions to do so for Azure SQL Database.
+- Add transparent data encryption with customer-managed keys via Azure Key Vault. For more information, see [Azure SQL transparent data encryption](../azure-sql/database/transparent-data-encryption-byok-overview.md). The instructions to enable this configuration for Azure Synapse Analytics are the same as the instructions to do so for Azure SQL Database.
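For a concrete starting point, the following is a minimal PowerShell sketch of that setup; the server, vault, and key names are placeholders, and it assumes the logical server's managed identity already has get, wrapKey, and unwrapKey permissions on the vault:

```azurepowershell
# Register the Key Vault key with the logical server, then make it the TDE protector.
Add-AzSqlServerKeyVaultKey -ResourceGroupName "ContosoRG" -ServerName "contoso-sql" `
    -KeyId "https://contoso-kv.vault.azure.net/keys/tde-key/<key-version>"
Set-AzSqlServerTransparentDataEncryptionProtector -ResourceGroupName "ContosoRG" `
    -ServerName "contoso-sql" -Type AzureKeyVault `
    -KeyId "https://contoso-kv.vault.azure.net/keys/tde-key/<key-version>"
```

Because the protector is set at the server level, the same two calls also cover dedicated SQL pools hosted on that server.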
### [Data Factory](https://azure.microsoft.com/services/data-factory/)
-Azure Data Factory supports Impact Level 5 workloads in Azure Government with this configuration:
- - Secure data store credentials by storing encrypted credentials in a Data Factory managed store. Data Factory helps protect your data store credentials by encrypting them with certificates managed by Microsoft. For more information about Azure Storage security, see [Azure Storage security overview](../storage/common/security-baseline.md). You can also store the data store's credentials in Azure Key Vault. Data Factory retrieves the credentials during the execution of an activity. For more information, see [Store credentials in Azure Key Vault](../data-factory/store-credentials-in-key-vault.md).
### [Event Hubs](https://azure.microsoft.com/services/event-hubs/)
-Azure Event Hubs supports Impact Level 5 workloads in Azure Government with this configuration:
--- Use client-side encryption to encrypt data before using Azure Event Hubs in Azure Government regions US Gov Arizona, US Gov Texas, and US Gov Virginia.
+- Configure encryption at rest of content in Azure Event Hubs by [using customer-managed keys in Azure Key Vault](../event-hubs/configure-customer-managed-key.md).
### [HDInsight](https://azure.microsoft.com/services/hdinsight/)
-Azure HDInsight supports Impact Level 5 workloads in Azure Government with these configurations:
- - Azure HDInsight can be deployed to existing storage accounts that have enabled appropriate [Storage service encryption](#storage-encryption-with-key-vault-managed-keys), as discussed in the guidance for Azure Storage.
- Azure HDInsight enables a database option for certain configurations. Ensure the appropriate database configuration for transparent data encryption (TDE) is enabled on the option you choose. This process is discussed in the guidance for [Azure SQL Database](#azure-sql-database).
## Compute
-For Compute services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud,azure-vmware,cloud-services,batch,app-service,service-fabric,functions,virtual-machine-scale-sets,virtual-machines&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
+For Compute services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud,azure-vmware,cloud-services,batch,app-service,service-fabric,functions,virtual-machine-scale-sets,virtual-machines&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
### [Azure Functions](https://azure.microsoft.com/services/functions/)
-Azure Functions supports Impact Level 5 workloads in Azure Government with this configuration:
- - To accommodate proper network and workload isolation, deploy your Azure functions on App Service plans configured to use the Isolated SKU. For more information, see the [App Service plan documentation](../app-service/overview-hosting-plans.md).
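As an illustrative sketch only (the plan, ASE, and resource group names are placeholders, and an existing App Service Environment is assumed), an Isolated-SKU plan can be created with Az PowerShell:

```azurepowershell
# Create an App Service plan with the Isolated SKU inside an existing ASE;
# function apps deployed to this plan run on dedicated, isolated workers.
New-AzAppServicePlan -ResourceGroupName "ContosoRG" -Name "contoso-il5-plan" `
    -Location "usgovvirginia" -Tier "Isolated" -WorkerSize "Small" -NumberofWorkers 2 `
    -AseName "contoso-ase" -AseResourceGroupName "ContosoRG"
```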
### [Batch](https://azure.microsoft.com/services/batch/)
-Azure Batch supports Impact Level 5 workloads in Azure Government with this configuration:
- - Enable user subscription mode, which will require a Key Vault instance for proper encryption and key storage. For more information, see the documentation on [batch account configurations](../batch/batch-account-create-portal.md).
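A hedged sketch of that configuration follows; names are placeholders, and it assumes the Key Vault is in the same region and subscription and that the Microsoft Azure Batch service principal has been granted access to it:

```azurepowershell
# Create a Batch account in user subscription mode, backed by an existing Key Vault.
$kv = Get-AzKeyVault -VaultName "contoso-kv" -ResourceGroupName "ContosoRG"
New-AzBatchAccount -AccountName "contosoil5batch" -ResourceGroupName "ContosoRG" `
    -Location "usgovvirginia" -PoolAllocationMode UserSubscription `
    -KeyVaultId $kv.ResourceId -KeyVaultUrl $kv.VaultUri
```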
### [Virtual machines](https://azure.microsoft.com/services/virtual-machines/) and [virtual machine scale sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/)
You can encrypt disks that support virtual machine scale sets by using Azure Disk Encryption.
## Containers
-For Containers services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=openshift,app-service-linux,container-registry,service-fabric,container-instances,kubernetes-service&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
+For Containers services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=openshift,app-service-linux,container-registry,service-fabric,container-instances,kubernetes-service&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
### [Azure Kubernetes Service](https://azure.microsoft.com/services/kubernetes-service/)
-Azure Kubernetes Service (AKS) supports Impact Level 5 workloads in Azure Government with this configuration:
- - Configure encryption at rest of content in AKS by [using customer-managed keys in Azure Key Vault](../aks/azure-disk-customer-managed-keys.md).
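AKS consumes the customer-managed key through a disk encryption set whose resource ID you supply when you create the cluster. A sketch of the Key Vault side, with placeholder names:

```azurepowershell
# Create a disk encryption set bound to a Key Vault key, then let its identity use the key.
$vault = Get-AzKeyVault -VaultName "contoso-kv" -ResourceGroupName "ContosoRG"
$key = Get-AzKeyVaultKey -VaultName "contoso-kv" -Name "aks-disk-key"
$desConfig = New-AzDiskEncryptionSetConfig -Location $vault.Location `
    -SourceVaultId $vault.ResourceId -KeyUrl $key.Key.Kid -IdentityType SystemAssigned
$des = New-AzDiskEncryptionSet -ResourceGroupName "ContosoRG" -Name "contoso-aks-des" `
    -InputObject $desConfig
Set-AzKeyVaultAccessPolicy -VaultName "contoso-kv" -ObjectId $des.Identity.PrincipalId `
    -PermissionsToKeys get,wrapkey,unwrapkey
```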
### [Container Instances](https://azure.microsoft.com/services/container-instances/)
-Azure Container Instances supports Impact Level 5 workloads in Azure Government with this configuration:
- - Azure Container Instances automatically encrypts data related to your containers when it's persisted in the cloud. Data in Container Instances is encrypted and decrypted with 256-bit AES encryption and enabled for all Container Instances deployments. You can rely on Microsoft-managed keys for the encryption of your container data, or you can manage the encryption by using your own keys. For more information, see [Encrypt deployment data](../container-instances/container-instances-encrypt-data.md).
### [Container Registry](https://azure.microsoft.com/services/container-registry/)
-Azure Container Registry supports Impact Level 5 workloads in Azure Government with this configuration:
- - When you store images and other artifacts in a Container Registry, Azure automatically encrypts the registry content at rest by using service-managed keys. You can supplement the default encryption with an additional encryption layer by [using a key that you create and manage in Azure Key Vault](../container-registry/container-registry-customer-managed-keys.md).
## Databases
-For Databases services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-sql,sql-server-stretch-database,redis-cache,database-migration,postgresql,mariadb,mysql,sql-database,cosmos-db&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
+For Databases services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-sql,sql-server-stretch-database,redis-cache,database-migration,postgresql,mariadb,mysql,sql-database,cosmos-db&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
### [Azure API for FHIR](https://azure.microsoft.com/services/azure-api-for-fhir/)
-Azure API for FHIR supports Impact Level 5 workloads in Azure Government with this configuration:
- - Configure encryption at rest of content in Azure API for FHIR [using customer-managed keys in Azure Key Vault](../healthcare-apis/fhir/customer-managed-key.md).
### [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/)
-Azure Cosmos DB supports Impact Level 5 workloads in Azure Government with this configuration:
- - Data stored in your Azure Cosmos account is automatically and seamlessly encrypted with keys managed by Microsoft (service-managed keys). Optionally, you can choose to add a second layer of encryption with keys you manage (customer-managed keys). For more information, see [Configure customer-managed keys for your Azure Cosmos account with Azure Key Vault](../cosmos-db/how-to-setup-cmk.md).
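For illustration, a minimal sketch of enabling the second encryption layer (placeholder names; the vault must have purge protection enabled, and the Azure Cosmos DB principal needs get, wrapKey, and unwrapKey permissions):

```azurepowershell
# The customer-managed key is supplied at account creation by passing the key URI.
New-AzCosmosDBAccount -ResourceGroupName "ContosoRG" -Name "contoso-cosmos" `
    -Location "usgovvirginia" -ApiKind "Sql" `
    -KeyVaultKeyUri "https://contoso-kv.vault.azure.net/keys/cosmos-key"
```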
### [Azure Database for MySQL](https://azure.microsoft.com/services/mysql/)
-Azure Database for MySQL supports Impact Level 5 workloads in Azure Government with this configuration:
- - Data encryption with customer-managed keys for Azure Database for MySQL enables you to bring your own key (BYOK) for data protection at rest. This encryption is set at the server level. For a given server, a customer-managed key, called the key encryption key (KEK), is used to encrypt the data encryption key (DEK) used by the service. For more information, see [Azure Database for MySQL data encryption with a customer-managed key](../mysql/concepts-data-encryption-mysql.md).
### [Azure Database for PostgreSQL](https://azure.microsoft.com/services/postgresql/)
-Azure Database for PostgreSQL supports Impact Level 5 workloads in Azure Government with this configuration:
- - Data encryption with customer-managed keys for Azure Database for PostgreSQL Single Server is set at the server level. For a given server, a customer-managed key, called the key encryption key (KEK), is used to encrypt the data encryption key (DEK) used by the service. For more information, see [Azure Database for PostgreSQL Single Server data encryption with a customer-managed key](../postgresql/concepts-data-encryption-postgresql.md).
### [Azure SQL Database](https://azure.microsoft.com/services/sql-database/)
-Azure SQL Database supports Impact Level 5 workloads in Azure Government with this configuration:
- - Add transparent data encryption with customer-managed keys via Azure Key Vault. For more information, see the [Azure SQL documentation](../azure-sql/database/transparent-data-encryption-byok-overview.md).
### [SQL Server Stretch Database](https://azure.microsoft.com/services/sql-server-stretch-database/)
-SQL Server Stretch Database supports Impact Level 5 workloads in Azure Government with this configuration:
- - Add transparent data encryption with customer-managed keys via Azure Key Vault. For more information, see [Azure SQL transparent data encryption](../azure-sql/database/transparent-data-encryption-byok-overview.md).
SQL Server Stretch Database supports Impact Level 5 workloads in Azure Governmen
### [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/)
-Azure Stack Edge supports Impact Level 5 workloads in Azure Government with this configuration:
- - You can protect data at rest via storage accounts because your device is associated with a storage account that's used as a destination for your data in Azure. You can configure your storage account to use data encryption with customer-managed keys stored in Azure Key Vault. For more information, see [Protect data in storage accounts](../databox-online/azure-stack-edge-pro-r-security.md#protect-data-in-storage-accounts).
## Integration
-For Integration services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=event-grid,api-management,service-bus,logic-apps&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
+For Integration services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=event-grid,api-management,service-bus,logic-apps&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
### [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/)
-Azure Logic Apps supports Impact Level 5 workloads in Azure Government. To meet these requirements, Logic Apps supports the capability for you to create and run workflows in an environment with dedicated resources so that you can avoid sharing computing resources with other tenants. For more information, see [Secure access and data in Azure Logic Apps: Isolation guidance](../logic-apps/logic-apps-securing-a-logic-app.md#isolation-logic-apps).
+- Azure Logic Apps supports Impact Level 5 workloads in Azure Government. To meet these requirements, Logic Apps supports the capability for you to create and run workflows in an environment with dedicated resources so that you can avoid sharing computing resources with other tenants. For more information, see [Secure access and data in Azure Logic Apps: Isolation guidance](../logic-apps/logic-apps-securing-a-logic-app.md#isolation-logic-apps).
### [Service Bus](https://azure.microsoft.com/services/service-bus/)
-Azure Service Bus supports Impact Level 5 workloads in Azure Government with this configuration:
+- Configure encryption of data at rest in Azure Service Bus by [using customer-managed keys in Azure Key Vault](../service-bus-messaging/configure-customer-managed-key.md).
-- Use client-side encryption to encrypt data before using Azure Service Bus in Azure Government regions US Gov Arizona, US Gov Texas, and US Gov Virginia.
## Internet of Things
-For Internet of Things services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=notification-hubs,azure-rtos,azure-maps,iot-central,iot-hub&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
+For Internet of Things services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=notification-hubs,azure-rtos,azure-maps,iot-central,iot-hub&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
### [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/)
-Azure IoT Hub supports Impact Level 5 workloads in Azure Government with this configuration:
- - IoT Hub supports encryption of data at rest with customer-managed keys, also known as "bring your own key" (BYOK). Azure IoT Hub provides encryption of data at rest and in transit. By default, Azure IoT Hub uses Microsoft-managed keys to encrypt the data. Customer-managed key support enables customers to encrypt data at rest by using an [encryption key that they manage via Azure Key Vault](../iot-hub/iot-hub-customer-managed-keys.md).
## Management and governance
-For Management and governance services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/en-us/global-infrastructure/services/?products=azure-automanage,resource-mover,azure-portal,azure-lighthouse,cloud-shell,managed-applications,azure-policy,monitor,automation,scheduler,site-recovery,cost-management,backup,blueprints,advisor&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
+For Management and governance services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/en-us/global-infrastructure/services/?products=azure-automanage,resource-mover,azure-portal,azure-lighthouse,cloud-shell,managed-applications,azure-policy,monitor,automation,scheduler,site-recovery,cost-management,backup,blueprints,advisor&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope).
### [Automation](https://azure.microsoft.com/services/automation/)
-Automation supports Impact Level 5 workloads in Azure Government with this configuration:
- - By default, your Azure Automation account uses Microsoft-managed keys. You can manage the encryption of secure assets for your Automation account by using your own keys. When you specify a customer-managed key at the level of the Automation account, that key is used to protect and control access to the account encryption key for the Automation account. For more information, see [Encryption of secure assets in Azure Automation](../automation/automation-secure-asset-encryption.md).
### [Azure Managed Applications](https://azure.microsoft.com/services/managed-applications/)
-Azure Managed Applications supports Impact Level 5 workloads in Azure Government with this configuration:
- - You can store your managed application definition in a storage account that you provide when you create the application. Doing so allows you to manage its location and access for your regulatory needs. For more information, see [Bring your own storage](../azure-resource-manager/managed-applications/publish-service-catalog-app.md#bring-your-own-storage-for-the-managed-application-definition).
-### [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/)
+### [Azure Monitor](https://azure.microsoft.com/services/monitor/)
-Azure Site Recovery supports Impact Level 5 workloads in Azure Government with this configuration:
+- By default, all data and saved queries are encrypted at rest using Microsoft-managed keys. Configure encryption at rest of your data in Azure Monitor [using customer-managed keys in Azure Key Vault](../azure-monitor/logs/customer-managed-keys.md).
-- You can replicate Azure VMs with managed disks enabled for customer-managed keys from one Azure region to another. For more information, see [Replicate machines with customer-managed key disks](../site-recovery/azure-to-azure-how-to-enable-replication-cmk-disks.md).
+> [!IMPORTANT]
+> See the additional guidance below for **Log Analytics**, which is a feature of Azure Monitor.
-### [Log Analytics](../azure-monitor/logs/data-platform-logs.md)
+#### [Log Analytics](../azure-monitor/logs/data-platform-logs.md)
Log Analytics is intended to be used for monitoring the health and status of services and infrastructure. The monitoring data and logs primarily store [logs and metrics](../azure-monitor/logs/data-security.md#data-retention) that are service generated. When used in this primary capacity, Log Analytics supports Impact Level 5 workloads in Azure Government with no extra configuration required. Log Analytics may also be used to ingest additional customer-provided logs. These logs may include data ingested as part of operating Azure Security Center or Azure Sentinel. If the ingested logs or the queries written against these logs are categorized as IL5 data, then you should configure customer-managed keys (CMK) for your Log Analytics workspaces and Application Insights components. Once configured, any data sent to your workspaces or components is encrypted with your Azure Key Vault key. For more information, see [Azure Monitor customer-managed keys](../azure-monitor/logs/customer-managed-keys.md).
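As a condensed, non-authoritative sketch of the flow the linked article walks through (placeholder names; it assumes current Az.OperationalInsights cmdlets), CMK requires a dedicated cluster, and the key is attached after the cluster is provisioned:

```azurepowershell
# Create the dedicated cluster, then point it at the Key Vault key once provisioned.
New-AzOperationalInsightsCluster -ResourceGroupName "ContosoRG" -ClusterName "contoso-cluster" `
    -Location "usgovvirginia" -SkuCapacity 1000
Update-AzOperationalInsightsCluster -ResourceGroupName "ContosoRG" -ClusterName "contoso-cluster" `
    -KeyVaultUri "https://contoso-kv.vault.azure.net/" -KeyName "la-cmk" -KeyVersion "<key-version>"
```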
+### [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/)
+
+- You can replicate Azure VMs with managed disks enabled for customer-managed keys from one Azure region to another. For more information, see [Replicate machines with customer-managed key disks](../site-recovery/azure-to-azure-how-to-enable-replication-cmk-disks.md).
+ ### [Microsoft Intune](/intune/what-is-intune)
-Intune supports Impact Level 5 workloads in Azure Government with no extra configuration required. Line-of-business apps should be evaluated for IL5 restrictions prior to [uploading to Intune storage](/mem/intune/apps/apps-add). While Intune does encrypt applications that are uploaded to the service for distribution, it does not support customer-managed keys.
+- Intune supports Impact Level 5 workloads in Azure Government with no extra configuration required. Line-of-business apps should be evaluated for IL5 restrictions prior to [uploading to Intune storage](/mem/intune/apps/apps-add). While Intune does encrypt applications that are uploaded to the service for distribution, it does not support customer-managed keys.
## Migration
-For Migration services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=database-migration,azure-migrate&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
+For Migration services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=database-migration,azure-migrate&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
### [Azure Data Box](https://azure.microsoft.com/services/databox/)
-Azure Data Box supports Impact Level 5 workloads in Azure Government with this configuration:
- - Configure encryption at rest of content in Azure Data Box [using customer-managed keys in Azure Key Vault](../databox/data-box-customer-managed-encryption-key-portal.md).
### [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/)
-Azure Migrate supports Impact Level 5 workloads in Azure Government with this configuration:
- - Configure encryption at rest of content in Azure Migrate by [using customer-managed keys in Azure Key Vault](../migrate/how-to-migrate-vmware-vms-with-cmk-disks.md).
## Security
-For Security services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-sentinel,azure-dedicated-hsm,security-center,key-vault&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
+For Security services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-sentinel,azure-dedicated-hsm,security-center,key-vault&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
### [Azure Information Protection](https://azure.microsoft.com/services/information-protection/)
-Azure Information Protection supports Impact Level 5 workloads in Azure Government with this configuration:
- - Configure encryption at rest of content in Azure Information Protection [using customer-managed keys in Azure Key Vault](/azure/information-protection/byok-price-restrictions).
### [Azure Sentinel](https://azure.microsoft.com/services/azure-sentinel/)
-Azure Sentinel supports Impact Level 5 workloads in Azure Government with this configuration:
- - Configure encryption at rest of content in Azure Sentinel by [using customer-managed keys in Azure Key Vault](../sentinel/customer-managed-keys.md).
### [Microsoft Cloud App Security](/cloud-app-security/what-is-cloud-app-security)
-Microsoft Cloud App Security supports Impact Level 5 workloads in Azure Government with this configuration:
- - Configure encryption at rest of content in Microsoft Cloud App Security [using customer-managed keys in Azure Key Vault](/cloud-app-security/cas-compliance-trust#security).
## Storage
-For Storage services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=hpc-cache,managed-disks,storsimple,storage&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
+For Storage services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=hpc-cache,managed-disks,storsimple,storage&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
### [Azure Archive Storage](https://azure.microsoft.com/services/storage/archive/)
-Azure Archive Storage can be used in Azure Government to support Impact Level 5 data. Azure Archive Storage is a tier of Azure Storage. It automatically helps secure data at rest by using 256-bit AES encryption. Just like hot and cool tiers, Archive Storage can be set at the blob level. To enable access to the content, you need to rehydrate the archived blob or copy it to an online tier, at which point customers can enforce customer-managed keys that are in place for their online storage tiers. When you create a target storage account for Impact Level 5 data in Archive Storage, add storage encryption via customer-managed keys. For more information, see the [storage services section](#storage-encryption-with-key-vault-managed-keys).
-
-The target storage account for Archive Storage can be located in any Azure Government region.
+- Azure Archive Storage is a tier of Azure Storage. It automatically helps secure data at rest by using 256-bit AES encryption. Just like hot and cool tiers, Archive Storage can be set at the blob level. To enable access to the content, you need to rehydrate the archived blob or copy it to an online tier, at which point you can enforce customer-managed keys that are in place for your online storage tiers. When you create a target storage account for IL5 data in Archive Storage, add storage encryption via customer-managed keys. For more information, see the [storage services section](#storage-encryption-with-key-vault-managed-keys).
+- The target storage account for Archive Storage can be located in any Azure Government region.
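To make the rehydration step concrete, here is a small sketch with placeholder names; copying the blob to an online tier brings it back under the account's customer-managed key:

```azurepowershell
# Rehydrate an archived blob by copying it to the hot tier.
$ctx = (Get-AzStorageAccount -ResourceGroupName "ContosoRG" -Name "contosoarchive").Context
Start-AzStorageBlobCopy -SrcContainer "backups" -SrcBlob "2021-07.bak" `
    -DestContainer "backups" -DestBlob "2021-07-hot.bak" `
    -StandardBlobTier Hot -RehydratePriority Standard -Context $ctx
```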
### [Azure File Sync](../storage/file-sync/file-sync-planning.md)
-Azure File Sync supports Impact Level 5 workloads in Azure Government with this configuration:
- - Configure encryption at rest of content in Azure File Sync by [using customer-managed keys in Azure Key Vault](../storage/file-sync/file-sync-planning.md#azure-file-share-encryption-at-rest).
### [Azure HPC Cache](https://azure.microsoft.com/services/hpc-cache/)
-Azure HPC Cache supports Impact Level 5 workloads in Azure Government with this configuration:
- - Configure encryption at rest of content in Azure HPC Cache [using customer-managed keys in Azure Key Vault](../hpc-cache/customer-keys.md).
### [Azure Import/Export service](../import-export/storage-import-export-service.md)
-Azure Import/Export service can be used in Azure Government to import and export Impact Level 5 data. By default, the Import/Export service will encrypt data that's written to the hard drive for transport. When you create a target storage account for import and export of Impact Level 5 data, add storage encryption via customer-managed keys. For more information, see the [storage services section](#storage-encryption-with-key-vault-managed-keys) of this document.
-
-The target storage account for import and source storage account for export can be located in any Azure Government region.
+- By default, the Import/Export service will encrypt data that's written to the hard drive for transport. When you create a target storage account for import and export of IL5 data, add storage encryption via customer-managed keys. For more information, see the [storage services section](#storage-encryption-with-key-vault-managed-keys) of this document.
+- The target storage account for import and source storage account for export can be located in any Azure Government region.
### [Azure NetApp Files](https://azure.microsoft.com/services/netapp/)
-Azure NetApp Files supports Impact Level 5 workloads in Azure Government with this configuration:
- - Configure encryption at rest of content in Azure NetApp Files [using customer-managed keys in Azure Key Vault](../azure-netapp-files/azure-netapp-files-faqs.md#security-faqs).
### [Azure Storage](https://azure.microsoft.com/services/storage/)
-Azure Storage consists of multiple data features: Blob Storage, File Storage, Table Storage, and Queue Storage. Blob Storage supports both standard and premium storage. Premium storage uses only SSDs, to provide the fastest performance possible. Storage also includes configurations that modify these storage types, like hot and cool to provide appropriate speed-of-availability for data scenarios.
+Azure Storage consists of multiple data features: Blob storage, File storage, Table storage, and Queue storage. Blob storage supports both standard and premium storage. Premium storage uses only SSDs, to provide the fastest performance possible. Storage also includes configurations that modify these storage types, like hot and cool to provide appropriate speed-of-availability for data scenarios.
-When you use an Azure Storage account, you must follow the steps for [storage encryption with Key Vault managed keys](#storage-encryption-with-key-vault-managed-keys) to ensure the data is protected with customer-managed keys. Azure Storage supports Impact Level 5 workloads in all Azure Government and Azure Government for DoD regions.
-
-> [!IMPORTANT]
-> When you use Tables and Queues outside the US DoD regions, you must encrypt the data before you insert it into the table or queue. For more information, see the instructions for using [client-side encryption](../storage/common/storage-client-side-encryption-java.md).
+Blob storage and File storage always use the account encryption key to encrypt data. Queue storage and Table storage can be [optionally configured](../storage/common/account-encryption-key-create.md) to encrypt data with the account encryption key when the storage account is created. You can opt to use customer-managed keys to encrypt data at rest in all Azure Storage features, including Blob, File, Table, and Queue storage. When you use an Azure Storage account, you must follow the steps below to ensure the data is protected with customer-managed keys.
#### Storage encryption with Key Vault managed keys
To implement Impact Level 5 compliant controls on an Azure Storage account that
For more information about how to enable this Azure Storage encryption feature, see the documentation for [Azure Storage](../storage/common/customer-managed-keys-configure-key-vault.md).
> [!NOTE]
-> When you use this encryption method, you need to enable it before you add content to the storage account. Any content that's added earlier won't be encrypted with the selected key. It will be encrypted only via the standard encryption at rest provided by Azure Storage.
+> When you use this encryption method, you need to enable it before you add content to the storage account. Any content that's added earlier won't be encrypted with the selected key. It will be encrypted only via the standard encryption at rest provided by Azure Storage that uses Microsoft-managed keys.
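A minimal end-to-end sketch of that sequence with Az PowerShell (placeholder names; the vault is assumed to have soft delete and purge protection enabled):

```azurepowershell
# 1. Give the account a managed identity. 2. Let it use the key. 3. Turn on CMK.
$account = Set-AzStorageAccount -ResourceGroupName "ContosoRG" -Name "contosoil5data" -AssignIdentity
Set-AzKeyVaultAccessPolicy -VaultName "contoso-kv" -ObjectId $account.Identity.PrincipalId `
    -PermissionsToKeys get,wrapkey,unwrapkey
$vault = Get-AzKeyVault -VaultName "contoso-kv" -ResourceGroupName "ContosoRG"
$key = Get-AzKeyVaultKey -VaultName "contoso-kv" -Name "storage-cmk"
Set-AzStorageAccount -ResourceGroupName "ContosoRG" -Name "contosoil5data" `
    -KeyvaultEncryption -KeyName $key.Name -KeyVersion $key.Version -KeyVaultUri $vault.VaultUri
```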
### [StorSimple](https://azure.microsoft.com/services/storsimple/)
-StorSimple supports Impact Level 5 workloads in Azure Government with this configuration:
- - To help ensure the security and integrity of data moved to the cloud, StorSimple allows you to [define cloud storage encryption keys](../storsimple/storsimple-8000-security.md#storsimple-data-protection). You specify the cloud storage encryption key when you create a volume container.
azure-monitor Azure Vm Vmss Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/azure-vm-vmss-apps.md
This article walks you through enabling Application Insights monitoring using th
> **Java** based applications running on Azure VMs and VMSS are monitored with **[Application Insights Java 3.0 agent](./java-in-process-agent.md)**, which is generally available.
> [!IMPORTANT]
-> Azure Application Insights Agent for ASP.NET applications running on **Azure VMs and VMSS** is currently in public preview. For monitoring your ASP.Net applications running **on-premises**, use the [Azure Application Insights Agent for on-premises servers](./status-monitor-v2-overview.md), which is generally available and fully supported.
+> Azure Application Insights Agent for ASP.NET and ASP.NET Core applications running on **Azure VMs and VMSS** is currently in public preview. For monitoring your ASP.NET applications running **on-premises**, use the [Azure Application Insights Agent for on-premises servers](./status-monitor-v2-overview.md), which is generally available and fully supported.
> The preview version for Azure VMs and VMSS is provided without a service-level agreement, and we don't recommend it for production workloads. Some features might not be supported, and some might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
There are two ways to enable application monitoring for Azure virtual machines a
* For Azure virtual machines and Azure virtual machine scale sets we recommend at a minimum enabling this level of monitoring. After that, based on your specific scenario, you can evaluate whether manual instrumentation is needed.
> [!NOTE]
-> Auto-instrumentation is currently only available for .NET IIS-hosted applications and Java. Use an SDK to instrument ASP.NET Core, Node.js, and Python applications hosted on an Azure virtual machines and virtual machine scale sets.
+> Auto-instrumentation is currently available only for ASP.NET and ASP.NET Core IIS-hosted applications, and for Java. Use an SDK to instrument Node.js and Python applications hosted on Azure virtual machines and virtual machine scale sets.
-#### .NET
+#### ASP.NET / ASP.NET Core
* The Application Insights Agent auto-collects the same dependency signals out-of-the-box as the .NET SDK. See [Dependency auto-collection](./auto-collect-dependencies.md#net) to learn more.
There are two ways to enable application monitoring for Azure virtual machines a
### Code-based via SDK
-#### .NET
+#### ASP.NET / ASP.NET Core
* For .NET apps, this approach is much more customizable, but it requires [adding a dependency on the Application Insights SDK NuGet packages](./asp-net.md). This method also means you have to manage the updates to the latest version of the packages yourself.
* If you need to make custom API calls to track events/dependencies not captured by default with agent-based monitoring, you would need to use this method. Check out the [API for custom events and metrics article](./api-custom-events-metrics.md) to learn more.
There are two ways to enable application monitoring for Azure virtual machines a
> [!NOTE]
> For .NET apps only - if both agent-based monitoring and manual SDK-based instrumentation are detected, only the manual instrumentation settings will be honored. This is to prevent duplicate data from being sent. To learn more about this, check out the [troubleshooting section](#troubleshooting) below.
-#### .NET Core
-To monitor .NET Core applications, use the [SDK](./asp-net-core.md)
#### Java
If you need additional custom telemetry for Java applications, see what [is available](./java-in-process-agent.md#send-custom-telemetry-from-your-application), add [custom dimensions](./java-standalone-config.md#custom-dimensions), or use [telemetry processors](./java-standalone-telemetry-processors.md).
azure-monitor Status Monitor V2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/status-monitor-v2-overview.md
It replaces Status Monitor.
Telemetry is sent to the Azure portal, where you can [monitor](./app-insights-overview.md) your app.
> [!NOTE]
-> The module only currently supports codeless instrumentation of .NET web apps hosted with IIS. Use an SDK to instrument ASP.NET Core, Java, and Node.js applications.
+> The module currently supports codeless instrumentation of .NET and .NET Core web apps hosted with IIS. Use an SDK to instrument Java and Node.js applications.
## PowerShell Gallery
Each of these options is described in the [detailed instructions](status-monitor
- Does Status Monitor v2 support ASP.NET Core applications?
- *No*. For instructions to enable monitoring of ASP.NET Core applications, see [Application Insights for ASP.NET Core applications](./asp-net-core.md). There's no need to install StatusMonitor for an ASP.NET Core application. This is true even if ASP.NET Core application is hosted in IIS.
+ *Yes*. Starting from [Application Insights Agent 2.0.0-beta1](https://www.powershellgallery.com/packages/Az.ApplicationMonitor/2.0.0-beta1), ASP.NET Core applications hosted in IIS are supported.
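As a rough sketch of what that looks like on the web server (the instrumentation key is a placeholder):

```powershell
# Install the prerelease agent module, then enable codeless monitoring for IIS-hosted apps.
Install-Module -Name Az.ApplicationMonitor -AllowPrerelease -RequiredVersion 2.0.0-beta1 -Force
Enable-ApplicationInsightsMonitoring -InstrumentationKey "<your-instrumentation-key>"
```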
- How do I verify that the enablement succeeded?
Add more telemetry:
* [Create web tests](monitor-web-app-availability.md) to make sure your site stays live.
* [Add web client telemetry](./javascript.md) to see exceptions from web page code and to enable trace calls.
-* [Add the Application Insights SDK to your code](./asp-net.md) so you can insert trace and log calls.
+* [Add the Application Insights SDK to your code](./asp-net.md) so you can insert trace and log calls.
azure-monitor Status Monitor V2 Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/status-monitor-v2-troubleshoot.md
Review the [API reference](status-monitor-v2-api-reference.md) for a detailed de
### Troubleshooting running processes
-You can inspect the processes on the instrumented computer to determine if all DLLs are loaded.
+You can inspect the processes on the instrumented computer to determine if all DLLs are loaded and environment variables are set.
If monitoring is working, at least 12 DLLs should be loaded.
-Use the `Get-ApplicationInsightsMonitoringStatus -InspectProcess` command to check the DLLs.
+* Use the `Get-ApplicationInsightsMonitoringStatus -InspectProcess` command to check the DLLs.
+* Use the `(Get-Process -id {PID}).StartInfo.EnvironmentVariables` command to check the environment variables. The following environment variables are set in the worker process or .NET Core process:
+
+```
+COR_ENABLE_PROFILING=1
+COR_PROFILER={324F817A-7420-4E6D-B3C1-143FBED6D855}
+COR_PROFILER_PATH_32=Path to MicrosoftInstrumentationEngine_x86.dll
+COR_PROFILER_PATH_64=Path to MicrosoftInstrumentationEngine_x64.dll
+MicrosoftInstrumentationEngine_Host={CA487940-57D2-10BF-11B2-A3AD5A13CBC0}
+MicrosoftInstrumentationEngine_HostPath_32=Path to Microsoft.ApplicationInsights.ExtensionsHost_x86.dll
+MicrosoftInstrumentationEngine_HostPath_64=Path to Microsoft.ApplicationInsights.ExtensionsHost_x64.dll
+MicrosoftInstrumentationEngine_ConfigPath32_Private=Path to Microsoft.InstrumentationEngine.Extensions.config
+MicrosoftInstrumentationEngine_ConfigPath64_Private=Path to Microsoft.InstrumentationEngine.Extensions.config
+MicrosoftAppInsights_ManagedHttpModulePath=Path to Microsoft.ApplicationInsights.RedfieldIISModule.dll
+MicrosoftAppInsights_ManagedHttpModuleType=Microsoft.ApplicationInsights.RedfieldIISModule.RedfieldIISModule
+ASPNETCORE_HOSTINGSTARTUPASSEMBLIES=Microsoft.ApplicationInsights.StartupBootstrapper
+DOTNET_STARTUP_HOOKS=Path to Microsoft.ApplicationInsights.StartupHook.dll
+```
Review the [API reference](status-monitor-v2-api-reference.md) for a detailed description of how to use this cmdlet.
Review the [API reference](status-monitor-v2-api-reference.md) for a detailed de
- **Zip** - **Merge** - **.NET Symbol Collection**
-5. Set these **Additional Providers**: `61f6ca3b-4b5f-5602-fa60-759a2a2d1fbd,323adc25-e39b-5c87-8658-2c1af1a92dc5,925fa42b-9ef6-5fa7-10b8-56449d7a2040,f7d60e07-e910-5aca-bdd2-9de45b46c560,7c739bb9-7861-412e-ba50-bf30d95eae36,61f6ca3b-4b5f-5602-fa60-759a2a2d1fbd,323adc25-e39b-5c87-8658-2c1af1a92dc5,252e28f4-43f9-5771-197a-e8c7e750a984`
+5. Set these **Additional Providers**: `61f6ca3b-4b5f-5602-fa60-759a2a2d1fbd,323adc25-e39b-5c87-8658-2c1af1a92dc5,925fa42b-9ef6-5fa7-10b8-56449d7a2040,f7d60e07-e910-5aca-bdd2-9de45b46c560,7c739bb9-7861-412e-ba50-bf30d95eae36,252e28f4-43f9-5771-197a-e8c7e750a984,f9c04365-1d1f-5177-1cdc-a0b0554b6903`
#### Collecting logs
Review the [API reference](status-monitor-v2-api-reference.md) for a detailed de
## Next steps
-- Review the [API reference](status-monitor-v2-overview.md#powershell-api-reference) to learn about parameters you might have missed.
+- Review the [API reference](status-monitor-v2-overview.md#powershell-api-reference) to learn about parameters you might have missed.
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/logs-data-export.md
Supported tables are currently limited to those specified below. All data from t
| AADManagedIdentitySignInLogs | | | AADNonInteractiveUserSignInLogs | | | AADProvisioningLogs | |
+| AADRiskyUsers | |
| AADServicePrincipalSignInLogs | |
+| AADUserRiskEvents | |
| ABSBotRequests | |
+| ACSAuthIncomingOperations | |
| ACSBillingUsage | | | ACSChatIncomingOperations | | | ACSSMSIncomingOperations | |
Supported tables are currently limited to those specified below. All data from t
| ADFSSignInLogs | | | ADFTriggerRun | | | ADPAudit | |
+| ADPDiagnostics | |
| ADPRequests | | | ADReplicationResult | | | ADSecurityAssessmentRecommendation | |
Supported tables are currently limited to those specified below. All data from t
| ADXQuery | | | AegDeliveryFailureLogs | | | AegPublishFailureLogs | |
+| AEWAuditLogs | |
| Alert | |
+| AmlOnlineEndpointConsoleLog | |
| ApiManagementGatewayLogs | | | AppCenterError | | | AppPlatformSystemLogs | |
Supported tables are currently limited to those specified below. All data from t
| AutoscaleEvaluationsLog | | | AutoscaleScaleActionsLog | | | AWSCloudTrail | |
+| AWSGuardDuty | |
+| AWSVPCFlow | |
| AzureAssessmentRecommendation | | | AzureDevOpsAuditing | | | BehaviorAnalytics | | | BlockchainApplicationLog | | | BlockchainProxyLog | |
+| CDBCassandraRequests | |
| CDBControlPlaneRequests | | | CDBDataPlaneRequests | |
+| CDBGremlinRequests | |
| CDBMongoRequests | | | CDBPartitionKeyRUConsumption | | | CDBPartitionKeyStatistics | |
Supported tables are currently limited to those specified below. All data from t
| Dynamics365Activity | | | EmailAttachmentInfo | | | EmailEvents | |
+| EmailPostDeliveryEvents | |
| EmailUrlInfo | |
| Event | Partial support – data arriving from Log Analytics agent (MMA) or Azure Monitor Agent (AMA) is fully supported in export. Data arriving via Diagnostics Extension agent is collected through storage, and this path isn't supported in export.2 |
| ExchangeAssessmentRecommendation | |
Supported tables are currently limited to those specified below. All data from t
| HDInsightAmbariSystemMetrics | | | HDInsightHadoopAndYarnLogs | | | HDInsightHadoopAndYarnMetrics | |
+| HDInsightHBaseLogs | |
+| HDInsightHBaseMetrics | |
| HDInsightHiveAndLLAPLogs | | | HDInsightHiveAndLLAPMetrics | | | HDInsightHiveTezAppStats | |
+| HDInsightJupyterNotebookEvents | |
+| HDInsightKafkaLogs | |
+| HDInsightKafkaMetrics | |
| HDInsightOozieLogs | | | HDInsightSecurityLogs | | | HDInsightSparkApplicationEvents | |
Supported tables are currently limited to those specified below. All data from t
| KubeServices | | | LAQueryLogs | | | McasShadowItReporting | |
+| MCCEventLogs | |
| MicrosoftAzureBastionAuditLogs | | | MicrosoftDataShareReceivedSnapshotLog | | | MicrosoftDataShareSentSnapshotLog | |
Supported tables are currently limited to those specified below. All data from t
| MicrosoftHealthcareApisAuditLogs | | | NWConnectionMonitorPathResult | | | NWConnectionMonitorTestResult | |
-| OfficeActivity | Partial support (relevant to government clouds only) – some of the data is ingested via webhooks from O365 into LA. This portion is missing in export currently. |
+| OfficeActivity | Partial support in government clouds – some of the data is ingested via webhooks from O365 into LA. This portion is missing in export currently. |
| Operation | Partial support – some of the data is ingested through internal services that aren't supported for export. This portion is missing in export currently. |
| Perf | Partial support – only Windows perf data is currently supported. The Linux perf data is missing in export currently. |
| PowerBIDatasetsWorkspace | |
Supported tables are currently limited to those specified below. All data from t
| SigninLogs | | | SPAssessmentRecommendation | | | SQLAssessmentRecommendation | |
+| SQLSecurityAuditEvents | |
| SucceededIngestion | | | SynapseBigDataPoolApplicationsEnded | | | SynapseBuiltinSqlPoolRequestsEnded | |
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/manage-cost-storage.md
na Previously updated : 06/21/2021 Last updated : 07/27/2021
Also, some solutions, such as [Azure Defender (Security Center)](https://azure.m
[Log Analytics Dedicated Clusters](logs-dedicated-clusters.md) are collections of workspaces in a single managed Azure Data Explorer cluster to support advanced scenarios, like [Customer-Managed Keys](customer-managed-keys.md). Log Analytics Dedicated Clusters use a commitment tier pricing model that must be configured to at least 1000 GB/day. The cluster commitment tier has a 31-day commitment period after the commitment level is increased. During the commitment period, the commitment tier level can't be reduced, but it can be increased at any time. When workspaces are associated with a cluster, the data ingestion billing for those workspaces is done at the cluster level using the configured commitment tier level. Learn more about [creating a Log Analytics cluster](customer-managed-keys.md#create-cluster) and [associating workspaces to it](customer-managed-keys.md#link-workspace-to-cluster). For information about commitment tier pricing, see the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
-The cluster commitment tier level is programmatically configured with Azure Resource Manager using the `Capacity` parameter under `Sku`. The `Capacity` is specified in units of GB and can have values of 1000 GB/day or more in increments of 100 GB/day. For more information, see [Azure Monitor customer-managed key](customer-managed-keys.md).
+The cluster commitment tier level is programmatically configured with Azure Resource Manager using the `Capacity` parameter under `Sku`. The `Capacity` is specified in units of GB and can have values of 1000, 2000, or 5000 GB/day. Any usage above the commitment level (overage) is billed at the same per-GB price as the current commitment tier. For more information, see [Azure Monitor customer-managed key](customer-managed-keys.md).
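For instance, a hedged sketch of raising the level on an existing dedicated cluster (placeholder names; the ARM `Sku.Capacity` property surfaces as `-SkuCapacity` in the Az module):

```azurepowershell
# Increase the cluster commitment tier from 1000 to 2000 GB/day.
Update-AzOperationalInsightsCluster -ResourceGroupName "ContosoRG" `
    -ClusterName "contoso-cluster" -SkuCapacity 2000
```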
There are two modes of billing for usage on a cluster. These can be specified by the `billingType` parameter when [creating a cluster](logs-dedicated-clusters.md#creating-a-cluster) or set after creation. The two modes are:
azure-netapp-files Azure Netapp Files Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-faqs.md
na ms.devlang: na Previously updated : 06/08/2021 Last updated : 07/27/2021 # FAQs About Azure NetApp Files
Using Azure NetApp Files NFS or SMB volumes with AVS is supported in the followi
Yes. Azure NetApp Files is a first-party service. It fully adheres to Azure Resource Provider standards. As such, Azure NetApp Files can be integrated into Azure Policy via *custom policy definitions*. For information about how to implement custom policies for Azure NetApp Files, see [Azure Policy now available for Azure NetApp Files](https://techcommunity.microsoft.com/t5/azure/azure-policy-now-available-for-azure-netapp-files/m-p/2282258) on Microsoft Tech Community.
+### Which Unicode Character Encoding is supported by Azure NetApp Files for the creation and display of file and directory names?
+
+Azure NetApp Files only supports file and directory names that are encoded with the UTF-8 Unicode Character Encoding format for both NFS and SMB volumes.
+
+If you try to create files or directories with names that use supplementary characters or surrogate pairs such as non-regular characters and emoji that are not supported by UTF-8, the operation will fail. In this case, an error from a Windows client might read "The file name you specified is not valid or too long. Specify a different file name."
+ ## Next steps - [Microsoft Azure ExpressRoute FAQs](../expressroute/expressroute-faqs.md)
azure-sql Performance Guidance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/performance-guidance.md
Previously updated : 03/10/2020 Last updated : 07/26/2021 # Tune applications and databases for performance in Azure SQL Database and Azure SQL Managed Instance [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
Some applications are write-intensive. Sometimes you can reduce the total IO loa
Some database applications have read-heavy workloads. Caching layers might reduce the load on the database and might potentially reduce the compute size required to support a database by using Azure SQL Database and Azure SQL Managed Instance. With [Azure Cache for Redis](https://azure.microsoft.com/services/cache/), if you have a read-heavy workload, you can read the data once (or perhaps once per application-tier machine, depending on how it is configured), and then store that data outside of your database. This is a way to reduce database load (CPU and read IO), but there is an effect on transactional consistency because the data being read from the cache might be out of sync with the data in the database. Although in many applications some level of inconsistency is acceptable, that's not true for all workloads. You should fully understand any application requirements before you implement an application-tier caching strategy.
+## Get configuration and design tips
+
+If you use Azure SQL Database, you can execute an open-source T-SQL [script](https://aka.ms/sqldbtips) to analyze your database on demand and provide tips to improve database performance and health. Some tips suggest configuration and operational changes based on best practices, while other tips recommend design changes suitable for your workload, such as enabling advanced database engine features.
+
+To learn more about the script and get started, visit the [wiki](https://aka.ms/sqldbtipswiki) page.
## Next steps
- For more information about DTU-based service tiers, see [DTU-based purchasing model](service-tiers-dtu.md).
backup Archive Tier Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/archive-tier-support.md
Title: Archive Tier support (Preview)
+ Title: Archive Tier support
description: Learn about Archive Tier Support for Azure Backup Previously updated : 06/03/2021 Last updated : 07/27/2021
-# Archive Tier support (Preview)
+# Archive Tier support
Customers rely on Azure Backup for storing backup data including their Long-Term Retention (LTR) backup data with retention needs being defined by the organization's compliance rules. In most cases, the older backup data is rarely accessed and is only stored for compliance needs. Azure Backup supports backup of long-term retention points in the archive tier, in addition to snapshots and the Standard tier.
-## Scope for preview
+## Scope
Supported workloads:
Supported clients:
- The capability is provided using PowerShell
->[!NOTE]
->Archive Tier Support for Azure VMs and SQL Server in Azure VMs is in limited public preview with limited signups. To sign up for archive support use this [link](https://aka.ms/ArchivePreviewInterestForm).
>[!NOTE]
>Archive Tier support for SQL Server in Azure VMs is now generally available in North Europe, Central India, and Australia East. For the detailed list of supported regions, see the [support matrix](#support-matrix). <br><br> For the remaining regions for SQL Server in Azure VMs, Archive Tier support is in limited public preview. Archive Tier support for Azure Virtual Machines is also in limited public preview. To sign up for limited public preview, use this [link](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR463S33c54tEiJLEM6Enqb9UNU5CVTlLVFlGUkNXWVlMNlRPM1lJWUxLRy4u).
## Get started with PowerShell
Supported clients:
1. Run the following command in PowerShell: ```azurepowershell
- install-module -name Az.RecoveryServices -Repository PSGallery -RequiredVersion 4.0.0-preview -AllowPrerelease -force
+   Install-Module -Name Az.RecoveryServices -Repository PSGallery -RequiredVersion 4.4.0 -AllowPrerelease -Force
``` 1. Connect to Azure using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet.
Recovery points that haven't stayed in archive for a minimum of six months will
The stop protection and delete data operation deletes all recovery points. Recovery points that have spent fewer than 180 days in the archive tier incur an early deletion cost when deleted.
+## Support matrix
+
+| Workloads | Preview | Generally available |
+| | | |
+| SQL Server in Azure VM | East US, East US 2, Central US, South Central US, West US, West US 2, West Central US, North Central US, Brazil South, Canada East, Canada Central, West Europe, UK South, UK West, East Asia, Japan East, South India, Southeast Asia | Australia East, Central India, North Europe |
+| Azure Virtual Machines | East US, East US 2, Central US, South Central US, West US, West US 2, West Central US, North Central US, Brazil South, Canada East, Canada Central, West Europe, UK South, UK West, East Asia, Japan East, South India, Southeast Asia, Australia East, Central India, North Europe | None |
+ ## Error codes and troubleshooting steps There are several error codes that come up when a recovery point can't be moved to archive.
There are several error codes that come up when a recovery point can't be moved
**Description** – This error code is shown when the selected recovery point type isn't eligible to be moved to archive.
-**Recommended action** – Check eligibility of the recovery point [here](#scope-for-preview)
+**Recommended action** – Check eligibility of the recovery point [here](#scope)
### RecoveryPointHaveActiveDependencies
There are several error codes that come up when a recovery point can't be moved
**Description** – The selected recovery point has active dependencies, so it can't be moved to archive.
-**Recommended action** – Check eligibility of the recovery point [here](#scope-for-preview)
+**Recommended action** – Check eligibility of the recovery point [here](#scope)
### MinLifeSpanInStandardRequiredForArchive
There are several error codes that come up when a recovery point can't be moved
**Description** – The recovery point has to stay in the Standard tier for a minimum of three months for Azure virtual machines, and 45 days for SQL Server in Azure virtual machines.
-**Recommended action** – Check eligibility of the recovery point [here](#scope-for-preview)
+**Recommended action** – Check eligibility of the recovery point [here](#scope)
### MinRemainingLifeSpanInArchiveRequired
There are several error codes that come up when a recovery point can't be moved
**Description** – The minimum lifespan required for a recovery point for archive move eligibility is six months.
-**Recommended action** – Check eligibility of the recovery point [here](#scope-for-preview)
+**Recommended action** – Check eligibility of the recovery point [here](#scope)
### UserErrorRecoveryPointAlreadyInArchiveTier
backup Back Up Azure Stack Hyperconverged Infrastructure Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/back-up-azure-stack-hyperconverged-infrastructure-virtual-machines.md
+
+ Title: Back up Azure Stack HCI virtual machines with MABS
+description: This article contains the procedures to back up and recover virtual machines using Microsoft Azure Backup Server (MABS).
+ Last updated : 07/27/2021
+# Back up Azure Stack HCI virtual machines with Azure Backup Server
+
+This article explains how to back up virtual machines on Azure Stack HCI using Microsoft Azure Backup Server (MABS).
+
+## Supported scenarios
+
+MABS can back up Azure Stack HCI virtual machines in the following scenarios:
+
+- **Azure Stack HCI Host**: Back up and recover System State/BMR of the Azure Stack HCI host. The MABS protection agent must be installed on the host.
+
+- **Virtual machines in cluster with local or direct storage**: Back up guest virtual machines in a cluster that has local or directly attached storage. For example, a hard drive, a storage area network (SAN) device, or a network attached storage (NAS) device.
+
+- **Virtual machines in a cluster with CSV storage**: Back up guest virtual machines hosted on an Azure Stack HCI cluster with Cluster Shared Volume (CSV) storage. The MABS protection agent is installed on each cluster node.
+
+- **VM Move within a cluster**: When VMs are moved within a stretched/normal cluster, MABS continues to protect the virtual machines as long as the MABS protection agent is installed on the Azure Stack HCI host. The way in which MABS protects the virtual machines depends on the type of live migration involved. With a VM Move within a cluster, MABS detects the migration and backs up the virtual machine from the new cluster node without requiring user intervention. Because the storage location hasn't changed, MABS continues with express full backups.
+
+- **VM Move to a different stretched/normal cluster**: Moving a VM to a different stretched/normal cluster isn't supported.
+
+## Host versus guest backup
+
+MABS can do a host- or guest-level backup of VMs on Azure Stack HCI. At the host level, the MABS protection agent is installed on the Azure Stack HCI host server or cluster and protects all the VMs and data files running on that host. At the guest level, the agent is installed on each virtual machine and protects the workload present on that machine.
+
+Both methods have pros and cons:
+
+- Host-level backups are flexible because they work regardless of the type of OS running on the guest machines and don't require the installation of the MABS protection agent on each VM. If you deploy host-level backup, you can recover an entire virtual machine, or files and folders (item-level recovery).
+
+- Guest-level backup is useful if you want to protect specific workloads running on a virtual machine. A host-level backup can recover an entire VM or specific files, but it won't provide recovery in the context of a specific application. For example, to recover specific SharePoint items from a backed-up VM, do a guest-level backup of that VM. Also use guest-level backup if you want to protect data stored on passthrough disks. Passthrough allows the virtual machine to directly access the storage device and doesn't store virtual volume data in a VHD file.
+
+## Backup prerequisites
+
+These are the prerequisites for backing up virtual machines with MABS:
+
+| Prerequisite | Details |
+| | - |
+| MABS prerequisites | <ul> <li>If you want to perform item-level recovery for virtual machines (recover files, folders, volumes), then you'll need to install the Hyper-V role on the MABS server. If you only want to recover the virtual machine and don't need item-level recovery, then the role isn't required.</li> <li>You can protect up to 800 virtual machines of 100 GB each on one MABS server; you can use multiple MABS servers to support larger clusters.</li> <li>MABS excludes the page file from incremental backups to improve virtual machine backup performance.</li> <li>MABS can back up a server or cluster in the same domain as the MABS server, or in a child or trusted domain. If you want to back up VMs in a workgroup or an untrusted domain, you'll need to set up authentication. For a single server, you can use NTLM or certificate authentication. For a cluster, you can use certificate authentication only.</li> <li>Using host-level backup to back up virtual machine data on passthrough disks isn't supported. In this scenario, we recommend you use host-level backup to back up VHD files and guest-level backup to back up the other data that isn't visible on the host.</li> <li>You can back up VMs stored on deduplicated volumes.</li> </ul> |
+| VM | <ul> <li> The version of Integration Components that's running on the virtual machine should be the same as the version of the Azure Stack HCI host. </li> <li> For each virtual machine backup you'll need free space on the volume hosting the virtual hard disk files to allow enough room for differencing disks (AVHDs) during backup. The space must be at least equal to the calculation *Initial disk size × Churn rate × Backup window time* (see the worked example after this table). If you're running multiple backups on a cluster, you'll need enough storage capacity to accommodate the AVHDs for each of the virtual machines using this calculation. </li> </ul> |
+| Linux prerequisites | <ul><li> You can back up Linux virtual machines using MABS. Only file-consistent snapshots are supported.</li></ul> |
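+
+As an illustrative worked example (the numbers and units are assumptions, with churn expressed per hour): a VM with a 100-GB virtual hard disk, 1 percent churn per hour, and a 5-hour backup window needs about 100 GB × 0.01/hour × 5 hours = 5 GB of free space on the hosting volume for its differencing disk. Ten such VMs backed up in parallel on a cluster would need roughly 50 GB of free space.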
+
+## Back up virtual machines
+
+1. Set up your [MABS server](backup-azure-microsoft-azure-backup.md) and [your storage](backup-mabs-add-storage.md). When setting up your storage, use these storage capacity guidelines.
+
+ - Average virtual machine size - 100 GB
+ - Number of virtual machines per MABS server - 800
+ - Total size of 800 VMs - 80 TB
+ - Required space for backup storage - 80 TB
+
+2. Set up the MABS protection agent on the server or each cluster node.
+
+3. In the MABS Administrator console, select **Protection** > **Create protection group** to open the **Create New Protection Group** wizard.
+
+4. On the **Select Group Members** page, select the VMs you want to protect from the host servers on which they're located. We recommend you put all VMs that will have the same protection policy into one protection group. To make efficient use of space, enable colocation. Colocation allows you to locate data from different protection groups on the same disk or tape storage, so that multiple data sources have a single replica and recovery point volume.
+
+5. On the **Select Data Protection Method** page, specify a protection group name. Select **I want short-term protection using Disk** and select **I want online protection** if you want to back up data to Azure using the Azure Backup service.
+
+6. In **Specify Short-Term Goals** > **Retention range**, specify how long you want to retain disk data. In **Synchronization frequency**, specify how often incremental backups of the data should run. Alternatively, instead of selecting an interval for incremental backups you can enable **Just before a recovery point**. With this setting enabled, MABS will run an express full backup just before each scheduled recovery point.
+
+ > [!NOTE]
+ >If you're protecting application workloads, recovery points are created in accordance with Synchronization frequency, provided the application supports incremental backups. If it doesn't, then MABS runs an express full backup, instead of an incremental backup, and creates recovery points in accordance with the express backup schedule.<br></br>The backup process doesn't back up the checkpoints associated with VMs.
+
+7. On the **Review disk allocation** page, review the storage pool disk space allocated for the protection group.
+
+ **Total Data size** is the size of the data you want to back up, and **Disk space to be provisioned on MABS** is the space that MABS recommends for the protection group. MABS chooses the ideal backup volume, based on the settings. However, you can edit the backup volume choices in the **Disk allocation details**. For the workloads, select the preferred storage in the dropdown menu. Your edits change the values for **Total Storage** and **Free Storage** in the **Available Disk Storage** pane. Underprovisioned space is the amount of storage MABS suggests you add to the volume, to continue with backups smoothly in the future.
+
+8. On the **Choose Replica Creation Method** page, specify how the initial replication of data in the protection group will be performed. If you select **Automatically replicate over the network**, we recommend you choose an off-peak time. For large amounts of data or less than optimal network conditions, consider selecting **Manually**, which requires replicating the data offline using removable media.
+
+9. On the **Consistency Check Options** page, select how you want to automate consistency checks. You can enable a check to run only when replica data becomes inconsistent, or according to a schedule. If you don't want to configure automatic consistency checking, you can run a manual check at any time by right-clicking the protection group and selecting **Perform Consistency Check**.
+
+   After you create the protection group, initial replication of the data occurs in accordance with the method you selected. After initial replication, each backup takes place in line with the protection group settings. If you need to recover backed-up data, see [Recover backed up virtual machines](#recover-backed-up-virtual-machines).
+
+## Back up replica virtual machines
+
+If MABS is running on Windows Server 2012 R2 or later, then you can back up replica virtual machines. This is useful for several reasons:
+
+**Reduces the impact of backups on the running workload** - Taking a backup of a virtual machine incurs some overhead as a snapshot is created. By offloading the backup process to a secondary remote site, the running workload is no longer affected by the backup operation. This is applicable only to deployments where the backup copy is stored on a remote site. For example, you might take daily backups and store data locally to ensure quick restore times, but take monthly or quarterly backups from replica virtual machines stored remotely for long-term retention.
+
+**Saves bandwidth** - In a typical remote branch office/headquarters deployment you need an appropriate amount of provisioned bandwidth to transfer backup data between sites. If you create a replication and failover strategy, in addition to your data backup strategy, you can reduce the amount of redundant data sent over the network. By backing up the replica virtual machine data rather than the primary, you save the overhead of sending the backed-up data over the network.
+
+**Enables hoster backup** - You can use a hosted datacenter as a replica site, with no need for a secondary datacenter. In this case, the hoster SLA requires consistent backup of replica virtual machines.
+
+A replica virtual machine is turned off until a failover is initiated, and VSS can't guarantee an application-consistent backup for a replica virtual machine, so the backup of a replica virtual machine will be crash-consistent only. If crash-consistency can't be guaranteed, the backup fails. This can occur under a number of conditions:
+
+- The replica virtual machine isn't healthy and is in a critical state.
+
+- The replica virtual machine is resynchronizing (in the Resynchronization in Progress or Resynchronization Required state).
+
+- Initial replication between the primary and secondary site is in progress or pending for the virtual machine.
+
+- .hrl logs are being applied to the replica virtual machine, or a previous action to apply the .hrl logs on the virtual disk failed, or was canceled or interrupted.
+
+- Migration or failover of the replica virtual machine is in progress.
+
+## Recover backed up virtual machines
+
+You use the Recovery Wizard to select a backed-up virtual machine and the specific recovery point to restore. To open the Recovery Wizard and recover a virtual machine:
+
+1. In the MABS Administrator console, type the name of the VM, or expand the list of protected items, navigate to **All Protected HyperV Data**, and select the VM you want to recover.
+
+2. In the **Recovery points for** pane, on the calendar, select any date to see the recovery points available. Then in the **Path** pane, select the recovery point you want to use in the Recovery wizard.
+
+3. From the **Actions** menu, select **Recover** to open the Recovery Wizard.
+
+ The VM and recovery point you selected appear in the **Review Recovery Selection** screen. Select **Next**.
+
+4. In the **Select Recovery Type** screen, select where you want to restore the data and then select **Next**.
+
+ - **Recover to original instance**: When you recover to the original instance, the original VHD and all associated checkpoints are deleted. MABS recovers the VHD and other configuration files to the original location using Hyper-V VSS writer. At the end of the recovery process, virtual machines are still highly available.
+ The resource group must be present for recovery. If it isn't available, recover to an alternate location and then make the virtual machine highly available.
+
+ - **Recover as virtual machine to any host**: MABS supports alternate location recovery (ALR), which provides a seamless recovery of a protected Azure Stack HCI virtual machine to a different host within the same cluster, independent of processor architecture. Azure Stack HCI virtual machines that are recovered to a cluster node won't be highly available. If you choose this option, the Recovery Wizard presents you with an additional screen for identifying the destination and destination path.
+
+ >[!NOTE]
+ >If you select the original host, the behavior is the same as **Recover to original instance**. The original VHD and all associated checkpoints will be deleted.
+
+ - **Copy to a network folder**: MABS supports item-level recovery (ILR), which allows you to do item-level recovery of files, folders, volumes, and virtual hard disks (VHDs) from a host-level backup of Azure Stack HCI virtual machines to a network share or a volume on a MABS protected server. The MABS protection agent doesn't have to be installed inside the guest to perform item-level recovery. If you choose this option, the Recovery Wizard presents you with an additional screen for identifying the destination and destination path.
+
+5. In **Specify Recovery Options**, configure the recovery options and select **Next**:
+
+ - If you are recovering a VM over low bandwidth, select **Modify** to enable **Network bandwidth usage throttling**. After turning on the throttling option, you can specify the amount of bandwidth you want to make available and the time when that bandwidth is available.
+ - Select **Enable SAN based recovery using hardware snapshots** if you've configured your network.
+ - Select **Send an e-mail when the recovery completes** and then provide the email addresses, if you want email notifications sent once the recovery process completes.
+
+6. In the Summary screen, make sure all details are correct. If the details aren't correct, or you want to make a change, select **Back**. If you're satisfied with the settings, select **Recover** to start the recovery process.
+
+7. The **Recovery Status** screen provides information about the recovery job.
+
+## Next steps
+
+[Recover data from Azure Backup Server](./backup-azure-alternate-dpm-server.md)
backup Backup Azure Backup Server Vmware https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-backup-server-vmware.md
Title: Back up VMware VMs with Azure Backup Server description: In this article, learn how to use Azure Backup Server to back up VMware VMs running on a VMware vCenter/ESXi server. Previously updated : 05/24/2020 Last updated : 07/27/2021 # Back up VMware VMs with Azure Backup Server This article explains how to back up VMware VMs running on VMware ESXi hosts/vCenter Server to Azure using Azure Backup Server (MABS).
+>[!NOTE]
+>With the MABS v3 Update Rollup 2 release, you can now back up VMware 7.0 VMs as well.
+ This article explains how to: - Set up a secure channel so that Azure Backup Server can communicate with VMware servers over HTTPS.
MABS provides the following features when backing up VMware virtual machines:
- MABS protects VMs migrated for load balancing: As VMs are migrated for load balancing, MABS automatically detects and continues VM protection. - MABS can recover files/folders from a Windows VM without recovering the entire VM, which helps recover necessary files faster.
+## Support matrix
+
+| MABS versions | Supported VMware VM versions for backup |
+| | |
+| MABS v3 UR2 | VMware server 7.0, 6.7, 6.5, or 6.0 (Licensed Version) |
+| MABS v3 UR1 | VMware server 6.7, 6.5, 6.0, or 5.5 (Licensed Version) |
+ ## Prerequisites and limitations Before you start backing up a VMware virtual machine, review the following list of limitations and prerequisites.
The Azure Backup Server needs a user account with permissions to access vCenter
The following table captures the privileges that you need to assign to the user account that you create:
-| Privileges for vCenter 6.5 user account | Privileges for vCenter 6.7 user account |
+| Privileges for vCenter 6.5 user account | Privileges for vCenter 6.7 (and later) user account |
|-|-| | Datastore cluster.Configure a datastore cluster | Datastore cluster.Configure a datastore cluster | | Datastore.AllocateSpace | Datastore.AllocateSpace |
Add VMware VMs for backup. Protection groups gather multiple VMs and apply the s
## VMware parallel backups >[!NOTE]
-> This feature is applicable for MABS V3 UR1.
+> This feature is applicable for MABS V3 UR1 (and later).
-With earlier versions of MABS, parallel backups were performed only across protection groups. With MABS V3 UR1, all your VMware VMs backups within a single protection group are parallel, leading to faster VM backups. All VMware delta replication jobs run in parallel. By default, the number of jobs to run in parallel is set to 8.
+With earlier versions of MABS, parallel backups were performed only across protection groups. With MABS V3 UR1 (and later), all your VMware VM backups within a single protection group run in parallel, leading to faster VM backups. All VMware delta replication jobs run in parallel. By default, the number of jobs to run in parallel is set to 8.
You can modify the number of jobs by using the registry key as shown below (not present by default, you need to add it):
You can modify the number of jobs by using the registry key as shown below (not
> [!NOTE] > You can modify the number of jobs to a higher value. If you set the jobs number to 1, replication jobs run serially. To increase the number to a higher value, you must consider the VMware performance. Consider the number of resources in use and additional usage required on VMware vSphere Server, and determine the number of delta replication jobs to run in parallel. Also, this change will affect only newly created protection groups. For existing protection groups, you must temporarily add another VM to the protection group. This should update the protection group configuration accordingly. You can remove this VM from the protection group after the procedure is completed.
-## VMware vSphere 6.7
+## VMware vSphere 6.7 and 7.0
-To back up vSphere 6.7, do the following:
+To back up vSphere 6.7 and 7.0, do the following:
- Enable TLS 1.2 on the MABS Server
Windows Registry Editor Version 5.00
## Exclude disk from VMware VM backup > [!NOTE]
-> This feature is applicable for MABS V3 UR1.
+> This feature is applicable for MABS V3 UR1 (and later).
-With MABS V3 UR1, you can exclude the specific disk from VMware VM backup. The configuration script **ExcludeDisk.ps1** is located in the `C:\Program Files\Microsoft Azure Backup Server\DPM\DPM\bin folder`.
+With MABS V3 UR1 (and later), you can exclude specific disks from VMware VM backup. The configuration script **ExcludeDisk.ps1** is located in the `C:\Program Files\Microsoft Azure Backup Server\DPM\DPM\bin` folder.
To configure the disk exclusion, follow the steps below:
backup Backup Azure Manage Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-manage-vms.md
Title: Manage and monitor Azure VM backups description: Learn how to manage and monitor Azure VM backups by using the Azure Backup service. Previously updated : 08/02/2020 Last updated : 07/27/2021 # Manage Azure VM backups with Azure Backup service
A notification lets you know that the backup jobs have been stopped.
To stop protection and delete data of a VM:
+>[!NOTE]
+>Recovery points that have spent fewer than 180 days in the Archive tier incur an early deletion cost when deleted. [Learn more](/azure/storage/blobs/storage-blob-storage-tiers#cool-and-archive-early-deletion).
++ 1. On the [vault item's dashboard](#view-vms-on-the-dashboard), select **Stop backup**. 2. Choose **Delete Backup Data**, and confirm your selection as needed. Enter the name of the backup item and add a comment if you want.
backup Backup Mabs Protection Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-mabs-protection-matrix.md
Title: MABS (Azure Backup Server) V3 UR1 protection matrix description: This article provides a support matrix listing all workloads, data types, and installations that Azure Backup Server protects. Previously updated : 03/19/2020 Last updated : 07/27/2021
-# MABS (Azure Backup Server) V3 UR1 protection matrix
+# MABS (Azure Backup Server) V3 UR1 (and later) protection matrix
This article lists the various servers and workloads that you can protect with Azure Backup Server. The following matrix lists what can be protected with Azure Backup Server.
-Use the following matrix for MABS v3 UR1:
+Use the following matrix for MABS v3 UR1 (and later):
* Workloads – The type of workload or technology.
Use the following matrix for MABS v3 UR1:
* Protection and recovery – Lists detailed information about the workloads, such as supported storage containers or supported deployments. >[!NOTE]
->Support for the 32-bit protection agent is deprecated with MABS v3 UR1. See [32-Bit protection agent deprecation](backup-mabs-whats-new-mabs.md#32-bit-protection-agent-deprecation).
+>Support for the 32-bit protection agent is deprecated with MABS v3 UR1 (and later). See [32-Bit protection agent deprecation](backup-mabs-whats-new-mabs.md#32-bit-protection-agent-deprecation).
## Protection support matrix
The following sections details the protection support matrix for MABS:
| **Workload** | **Version** | **Azure Backup Server installation** | **Supported Azure Backup Server** | **Protection and recovery** | | -- | | | | |
-| Client computers (64-bit) | Windows 10 | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine | V3 UR1 | Volume, share, folder, files, deduped volumes <br><br> Protected volumes must be NTFS. FAT and FAT32 aren't supported. <br><br> Volumes must be at least 1 GB. Azure Backup Server uses Volume Shadow Copy Service (VSS) to take the data snapshot and the snapshot only works if the volume is at least 1 GB. |
-| Servers (64-bit) | Windows Server 2019, 2016, 2012 R2, 2012 | Azure virtual machine (when workload is running as Azure virtual machine) <br><br> Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure Stack | V3 UR1 | Volume, share, folder, file <br><br> Deduped volumes (NTFS only) <br><br>When protecting a WS 2016 NTFS deduped volume with MABS v3 running on Windows Server 2019, the recoveries may be affected. We have a fix for doing recoveries in a non-deduped way that will be part of later versions of MABS. Contact MABS support if you need this fix on MABS v3 UR1.<br><br> When protecting a WS 2019 NTFS deduped volume with MABS v3 on Windows Server 2016, the backups and restores will be non-deduped. This means that the backups will consume more space on the MABS server than the original NTFS deduped volume. <br><br> System state and bare metal (Not supported when workload is running as Azure virtual machine) |
-| Servers (64-bit) | Windows Server 2008 R2 SP1, Windows Server 2008 SP2 (You need to install [Windows Management Framework](https://www.microsoft.com/download/details.aspx?id=54616)) | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure Stack | V3 UR1 | Volume, share, folder, file, system state/bare metal |
-| SQL Server | SQL Server 2019, 2017, 2016 and [supported SPs](https://support.microsoft.com/lifecycle/search?alpha=SQL%20Server%202016), 2014 and supported [SPs](https://support.microsoft.com/lifecycle/search?alpha=SQL%20Server%202014) | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure virtual machine (when workload is running as Azure virtual machine) <br><br> Azure Stack | V3 UR1 | All deployment scenarios: database <br><br> MABS v3 UR1 supports the backup of SQL databases over ReFS volumes <br><br> MABS doesn't support SQL Server databases hosted on Windows Server 2012 Scale-Out File Servers (SOFS). <br><br> MABS can't protect SQL server Distributed Availability Group (DAG) or Availability Group (AG), where the role name on the failover cluster is different than the named AG on SQL. |
-| Exchange | Exchange 2019, 2016 | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure Stack <br><br> Azure virtual machine (when workload is running as Azure virtual machine) | V3 UR1 | Protect (all deployment scenarios): Standalone Exchange server, database under a database availability group (DAG) <br><br> Recover (all deployment scenarios): Mailbox, mailbox databases under a DAG <br><br> Backup of Exchange over ReFS is supported with MABS v3 UR1 |
-| SharePoint | SharePoint 2019, 2016 with latest SPs | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure virtual machine (when workload is running as Azure virtual machine) <br><br> Azure Stack | V3 UR1 | Protect (all deployment scenarios): Farm, frontend web server content <br><br> Recover (all deployment scenarios): Farm, database, web application, file, or list item, SharePoint search, frontend web server <br><br> Protecting a SharePoint farm that's using the SQL Server 2012 AlwaysOn feature for the content databases isn't supported. |
+| Client computers (64-bit) | Windows 10 | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine | V3 UR1 and V3 UR2 | Volume, share, folder, files, deduped volumes <br><br> Protected volumes must be NTFS. FAT and FAT32 aren't supported. <br><br> Volumes must be at least 1 GB. Azure Backup Server uses Volume Shadow Copy Service (VSS) to take the data snapshot and the snapshot only works if the volume is at least 1 GB. |
+| Servers (64-bit) | Windows Server 2019, 2016, 2012 R2, 2012 | Azure virtual machine (when workload is running as Azure virtual machine) <br><br> Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure Stack | V3 UR1 and V3 UR2 | Volume, share, folder, file <br><br> Deduped volumes (NTFS only) <br><br>When protecting a WS 2016 NTFS deduped volume with MABS v3 running on Windows Server 2019, the recoveries may be affected. We have a fix for doing recoveries in a non-deduped way that will be part of later versions of MABS. Contact MABS support if you need this fix on MABS v3 UR1.<br><br> When protecting a WS 2019 NTFS deduped volume with MABS v3 on Windows Server 2016, the backups and restores will be non-deduped. This means that the backups will consume more space on the MABS server than the original NTFS deduped volume. <br><br> System state and bare metal (Not supported when workload is running as Azure virtual machine) |
+| Servers (64-bit) | Windows Server 2008 R2 SP1, Windows Server 2008 SP2 (You need to install [Windows Management Framework](https://www.microsoft.com/download/details.aspx?id=54616)) | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure Stack | V3 UR1 and V3 UR2 | Volume, share, folder, file, system state/bare metal |
+| SQL Server | SQL Server 2019, 2017, 2016 and [supported SPs](https://support.microsoft.com/lifecycle/search?alpha=SQL%20Server%202016), 2014 and supported [SPs](https://support.microsoft.com/lifecycle/search?alpha=SQL%20Server%202014) | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure virtual machine (when workload is running as Azure virtual machine) <br><br> Azure Stack | V3 UR1 and V3 UR2 | All deployment scenarios: database <br><br> MABS v3 UR2 and later supports the backup of SQL database, stored on the Cluster Shared Volume. <br><br> MABS v3 UR1 supports the backup of SQL databases over ReFS volumes <br><br> MABS doesn't support SQL Server databases hosted on Windows Server 2012 Scale-Out File Servers (SOFS). <br><br> MABS can't protect SQL server Distributed Availability Group (DAG) or Availability Group (AG), where the role name on the failover cluster is different than the named AG on SQL. |
+| Exchange | Exchange 2019, 2016 | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure Stack <br><br> Azure virtual machine (when workload is running as Azure virtual machine) | V3 UR1 and V3 UR2 | Protect (all deployment scenarios): Standalone Exchange server, database under a database availability group (DAG) <br><br> Recover (all deployment scenarios): Mailbox, mailbox databases under a DAG <br><br> Backup of Exchange over ReFS is supported with MABS v3 UR1 |
+| SharePoint | SharePoint 2019, 2016 with latest SPs | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure virtual machine (when workload is running as Azure virtual machine) <br><br> Azure Stack | V3 UR1 and V3 UR2 | Protect (all deployment scenarios): Farm, frontend web server content <br><br> Recover (all deployment scenarios): Farm, database, web application, file, or list item, SharePoint search, frontend web server <br><br> Protecting a SharePoint farm that's using the SQL Server 2012 AlwaysOn feature for the content databases isn't supported. |
## VM Backup | **Workload** | **Version** | **Azure Backup Server installation** | **Supported Azure Backup Server** | **Protection and recovery** | | | - | | - | |
-| Hyper-V host - MABS protection agent on Hyper-V host server, cluster, or VM | Windows Server 2019, 2016, 2012 R2, 2012 | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine | V3 UR1 | Protect: Hyper-V computers, cluster shared volumes (CSVs) <br><br> Recover: Virtual machine, Item-level recovery of files and folders available only for Windows, volumes, virtual hard drives |
+| Hyper-V host - MABS protection agent on Hyper-V host server, cluster, or VM | Windows Server 2019, 2016, 2012 R2, 2012 | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine | V3 UR1 and V3 UR2 | Protect: Virtual machines, cluster shared volumes (CSVs) <br><br> Recover: Virtual machine, Item-level recovery of files and folders available only for Windows, volumes, virtual hard drives |
+| Azure Stack HCI | V1 and 20H2 | Physical server <br><br> Hyper-V / Azure Stack HCI virtual machine <br><br> VMware virtual machine | V3 UR2 and later | Protect: Virtual machines, cluster shared volumes (CSVs) <br><br> Recover: Virtual machine, Item-level recovery of files and folders available only for Windows, volumes, virtual hard drives |
| VMware VMs | VMware server 5.5, 6.0, or 6.5, 6.7 (Licensed Version) | Hyper-V virtual machine <br><br> VMware virtual machine | V3 UR1 | Protect: VMware VMs on cluster-shared volumes (CSVs), NFS, and SAN storage <br><br> Recover: Virtual machine, Item-level recovery of files and folders available only for Windows, volumes, virtual hard drives <br><br> VMware vApps aren't supported. |
+| VMware VMs | VMware server 7.0, 6.7, 6.5 or 6.0 (Licensed Version) | Hyper-V virtual machine <br><br> VMware virtual machine | V3 UR2 and later | Protect: VMware VMs on cluster-shared volumes (CSVs), NFS, and SAN storage <br><br> Recover: Virtual machine, Item-level recovery of files and folders available only for Windows, volumes, virtual hard drives <br><br> VMware vApps aren't supported. |
>[!NOTE] > MABS doesn't support backup of virtual machines with pass-through disks or those that use a remote VHD. We recommend that in these scenarios you use guest-level backup using MABS, and install an agent on the virtual machine to back up the data.
The following sections details the protection support matrix for MABS:
| **Workload** | **Version** | **Azure Backup Server installation** | **Supported Azure Backup Server** | **Protection and recovery** | | | -- | | - | |
-| Linux | Linux running as [Hyper-V](back-up-hyper-v-virtual-machines-mabs.md) or [VMware](backup-azure-backup-server-vmware.md) guest | Physical server, On-premises Hyper-V VM, Windows VM in VMware | V3 UR1 | Hyper-V must be running on Windows Server 2012 R2, Windows Server 2016, or Windows Server 2019. Protect: Entire virtual machine <br><br> Recover: Entire virtual machine <br><br> Only file-consistent snapshots are supported. <br><br> For a complete list of supported Linux distributions and versions, see the article, [Linux on distributions endorsed by Azure](../virtual-machines/linux/endorsed-distros.md). |
+| Linux | Linux running as [Hyper-V](back-up-hyper-v-virtual-machines-mabs.md) or [VMware](backup-azure-backup-server-vmware.md) guest | Physical server, On-premises Hyper-V VM, Windows VM in VMware | V3 UR1 and V3 UR2 | Hyper-V must be running on Windows Server 2012 R2, Windows Server 2016, or Windows Server 2019. Protect: Entire virtual machine <br><br> Recover: Entire virtual machine <br><br> Only file-consistent snapshots are supported. <br><br> For a complete list of supported Linux distributions and versions, see the article, [Linux on distributions endorsed by Azure](../virtual-machines/linux/endorsed-distros.md). |
## Azure ExpressRoute support
backup Backup Mabs Release Notes V3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-mabs-release-notes-v3.md
Title: Release notes for Microsoft Azure Backup Server v3 description: This article provides the information about the known issues and workarounds for Microsoft Azure Backup Server (MABS) v3. Previously updated : 06/03/2020 Last updated : 07/27/2021 ms.asset: 0c4127f2-d936-48ef-b430-a9198e425d81
This article provides the known issues and workarounds for Microsoft Azure Backu
**Description**: With UR1, the MABS report formatting issue is fixed with updated RDL files. The new RDL files aren't automatically replaced with existing files.
+>[!NOTE]
+>This issue has been fixed in MABS v3 UR2.
+ **Workaround**: To replace the RDL files, follow the steps below: 1. On the MABS machine, open SQL Reporting Services Web Portal URL.
backup Backup Mabs Whats New Mabs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-mabs-whats-new-mabs.md
Title: What's new in Microsoft Azure Backup Server description: Microsoft Azure Backup Server gives you enhanced backup capabilities for protecting VMs, files and folders, workloads, and more. Previously updated : 05/24/2020 Last updated : 07/27/2021 # What's new in Microsoft Azure Backup Server (MABS)
+## What's new in MABS v3 UR2
+
+Microsoft Azure Backup Server (MABS) version 3 UR2 supports the following new features/feature updates.
+
+For information about the issues fixed in UR2 and the installation instructions, see the [KB article](https://support.microsoft.com/topic/update-rollup-2-for-microsoft-azure-backup-server-v3-350de164-0ae4-459a-8acf-7777dbb7fd73).
+
+### Support for Azure Stack HCI
+
+With MABS v3 UR2, you can back up virtual machines on Azure Stack HCI. [Learn more](/azure-stack/hci).
+
+### Support for VMware 7.0
+
+With MABS v3 UR2, you can back up VMware 7.0 VMs. [Learn more](/azure/backup/backup-support-matrix-mabs-dpm).
+
+### Support for SQL Server Failover Cluster Instance (FCI) using Cluster Shared Volume (CSV)
+
+MABS v3 UR2 supports SQL Server Failover Cluster Instance (FCI) using Cluster Shared Volume (CSV). CSV simplifies management of your SQL Server instance: you can manage the underlying storage from any node, because storage access is abstracted from the node that owns the disk. [Learn more](/azure/backup/backup-azure-sql-mabs).
+
+### Optimized Volume Migration
+
+MABS v3 UR2 supports optimized volume migration. The optimized volume migration allows you to move data sources to the new volume much faster. The enhanced migration process migrates only the active backup copy (Active Replica) to the new volume. All new recovery points are created on the new volume, while existing recovery points are maintained on the existing volume and are purged based on the retention policy. [Learn more](https://support.microsoft.com/topic/microsoft-azure-backup-server-v3-feb4523f-8da7-da61-2f47-eaa9fca9a3de).
+
+### Offline Backup using Azure Data Box
+
+MABS v3 UR2 supports Offline backup using Azure Data Box. With Microsoft Azure Data Box integration, you can overcome the challenge of moving terabytes of backup data from on-premises to Azure storage. Azure Data Box saves the effort required to procure your own Azure-compatible disks and connectors or to provision temporary storage as a staging location. Microsoft also handles the end-to-end transfer logistics, which you can track through the Azure portal. [Learn more](/azure/backup/offline-backup-azure-data-box-dpm-mabs).
+ ## What's new in MABS V3 UR1 Microsoft Azure Backup Server (MABS) version 3 UR1 is the latest update, and includes critical bug fixes and other features and enhancements. To view the list of bugs fixed and the installation instructions for MABS V3 UR1, see KB article [4534062](https://support.microsoft.com/help/4534062).
With MABS V3 UR1, an additional layer of authentication is added for critical
MABS v3 UR1 improves the experience of offline backup with Azure Import/Export Service. For more information, see the updated steps [here](./backup-azure-backup-server-import-export.md). >[!NOTE]
->The update also brings the preview for Offline Backup using Azure Data Box in MABS. Contact [SystemCenterFeedback@microsoft.com](mailto:SystemCenterFeedback@microsoft.com) to learn more.
+>From MABS v3 UR2, MABS can perform offline backup using Azure Data Box. [Learn more](/azure/backup/offline-backup-azure-data-box-dpm-mabs).
### New cmdlet parameter
backup Manage Monitor Sql Database Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/manage-monitor-sql-database-backup.md
Title: Manage and monitor SQL Server DBs on an Azure VM description: This article describes how to manage and monitor SQL Server databases that are running on an Azure VM. Previously updated : 09/11/2019 Last updated : 07/27/2021 # Manage and monitor backed up SQL Server databases
In the vault dashboard, go to **Manage** > **Backup Policies** and choose the po
Policy modification will impact all the associated Backup Items and trigger corresponding **configure protection** jobs.
+>[!NOTE]
+>Policy modification also affects existing recovery points. <br><br> Recovery points that have spent fewer than 180 days in the Archive tier incur an early deletion cost when deleted. [Learn more](/azure/storage/blobs/storage-blob-storage-tiers#cool-and-archive-early-deletion).
+ ### Inconsistent policy Sometimes, a modify policy operation can lead to an **inconsistent** policy version for some backup items. This happens when the corresponding **configure protection** job fails for the backup item after a modify policy operation is triggered. It appears as follows in the backup item view:
backup Manage Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/manage-telemetry.md
+
+ Title: Manage telemetry settings in Microsoft Azure Backup Server (MABS)
+description: This article provides information about how to manage the telemetry settings in MABS.
Last updated : 07/27/2021
+# Manage telemetry settings
+
+>[!NOTE]
+>This feature is applicable for MABS V3 UR2 and later.
+
+This article provides information about how to manage the telemetry (Diagnostics and utility data) settings in Microsoft Azure Backup Server (MABS).
+
+By default, MABS sends diagnostic and connectivity data to Microsoft. Microsoft uses this data to provide and improve the quality, security, and integrity of Microsoft products and services.
+
+Administrators can turn off this feature at any time. For more information on the data collected, see the [following section](#telemetry-data-collected).
+
+## Turn on/off telemetry from console
+
+1. In the Microsoft Azure Backup Server console, go to **Management** and click **Options** in the top pane.
+1. In the **Options** dialog box, select **Diagnostic and Usage Data Settings**.
+
+ ![console telemetry options](./media/telemetry/telemetry-options.png)
+
+1. Select the diagnostic and usage data sharing preference from the options displayed and then click **OK**.
+
+ >[!NOTE]
+   >We recommend that you read the [Privacy Statement](https://privacy.microsoft.com/privacystatement) before you select an option.
+ >- To turn on telemetry, select **Yes, I am willing to send data to Microsoft**.
+ >- To turn off telemetry, select **No, I prefer not to send data to Microsoft**.
+
+## Telemetry data collected
+
+| Data related to | Data collected* |
+| | |
+| **Setup** | Version of MABS installed. <br/><br/>Version of the MABS update rollup installed. <br/><br/> Unique machine identifier. <br/><br/> Operating system on which MABS is installed. <br/><br/> Unique cloud subscription identifier.<br/><br/> MARS agent version.<br/><br/> Whether tiered storage is enabled. <br/><br/> Size of the storage used. |
+| **Workload Protected** | Workload unique identifier. <br/><br/>Size of the workload being backed up. <br/><br/>Workload type and its version number. <br/><br/>Whether the workload is currently being protected by MABS. <br/><br/>Unique identifier of the Protection Group under which the workload is protected.<br/><br/> Location where the workload is backed up: disk, tape, or cloud.|
+| **Jobs** | Status of the backup/restore job. <br/><br/> Size of the data backed up/restored. <br/><br/>Failure message, if the backup/restore job fails.<br/><br/> Time taken for the restore job.<br/><br/>Details of the workload for which the backup/restore job was run. |
+| **Telemetry change status** | The status change details for the telemetry settings, if enabled or disabled, and when. |
+| **MABS Console Crash Error** | The details of the error when a MABS console crashes.|
+
+## Next steps
+
+[Protect workloads](/azure/backup/back-up-hyper-v-virtual-machines-mabs)
backup Microsoft Azure Backup Server Protection V3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/microsoft-azure-backup-server-protection-v3.md
Title: What Azure Backup Server V3 RTM can back up description: This article provides a protection matrix listing all workloads, data types, and installations that Azure Backup Serve V3 RTM protects. Previously updated : 11/13/2018 Last updated : 07/27/2021
The following matrix lists what can be protected with Azure Backup Server V3 RTM
|Hyper-V host - MABS protection agent on Hyper-V host server, cluster, or VM|Windows Server 2008 R2 SP1 - Enterprise and Standard|Physical server<br /><br />On-premises Hyper-V virtual machine|V3, V2|Protect: Hyper-V computers, cluster shared volumes (CSVs)<br /><br />Recover: Virtual machine, Item-level recovery of files and folder, volumes, virtual hard drives| |Hyper-V host - MABS protection agent on Hyper-V host server, cluster, or VM|Windows Server 2008 SP2|Physical server<br /><br />On-premises Hyper-V virtual machine|Not supported|Protect: Hyper-V computers, cluster shared volumes (CSVs)<br /><br />Recover: Virtual machine, Item-level recovery of files and folder, volumes, virtual hard drives| |VMware VMs|VMware vCenter/vSphere ESX/ESXi Licensed Version 5.5/6.0/6.5 |Physical server, <br/>On-premises Hyper-V VM, <br/> Windows VM in VMware|V3, V2|VMware VMs on cluster-shared volumes (CSVs), NFS, and SAN storage<br /> Item-level recovery of files and folders is available only for Windows VMs, VMware vApps are not supported.|
-|VMware VMs|[VMware vSphere Licensed Version 6.7](backup-azure-backup-server-vmware.md#vmware-vsphere-67) |Physical server, <br/>On-premises Hyper-V VM, <br/> Windows VM in VMware|V3|VMware VMs on cluster-shared volumes (CSVs), NFS, and SAN storage<br /> Item-level recovery of files and folders is available only for Windows VMs, VMware vApps are not supported.|
+|VMware VMs|[VMware vSphere Licensed version 6.7 and 7.0](backup-azure-backup-server-vmware.md#vmware-vsphere-67-and-70) |Physical server, <br/>On-premises Hyper-V VM, <br/> Windows VM in VMware|V3|VMware VMs on cluster-shared volumes (CSVs), NFS, and SAN storage<br /> Item-level recovery of files and folders is available only for Windows VMs, VMware vApps are not supported.|
|Linux|Linux running as [Hyper-V](back-up-hyper-v-virtual-machines-mabs.md) or [VMware](backup-azure-backup-server-vmware.md) guest|Physical server, <br/>On-premises Hyper-V VM, <br/> Windows VM in VMware|V3, V2|Hyper-V must be running on Windows Server 2012 R2 or Windows Server 2016. Protect: Entire virtual machine<br /><br />Recover: Entire virtual machine <br/><br/> Only file-consistent snapshots are supported. <br/><br/> For a complete list of supported Linux distributions and versions, see the article, [Linux on distributions endorsed by Azure](../virtual-machines/linux/endorsed-distros.md).| ## Azure ExpressRoute support
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/releasenotes.md
**Note**: Get started with the Speech SDK [here](speech-sdk.md#get-the-speech-sdk). **Highlights summary**-- Ubuntu 16.04 reached end of life in April of 2021. In conjunction with Azure DevOps and Github we will drop support for 16.04 in September 2021. Please migrate ubuntu-16.04 workflows to ubuntu-18.04 or newer before then.
+- Ubuntu 16.04 reached end of life in April of 2021. In conjunction with Azure DevOps and GitHub, we will drop support for 16.04 in September 2021. Please migrate ubuntu-16.04 workflows to ubuntu-18.04 or newer before then.
#### New features
- **Important**: The Speaker Recognition feature is in Preview. All voice profiles created in Preview will be discontinued 90 days after the Speaker Recognition feature is moved out of Preview into General Availability. At that point the Preview voice profiles will stop functioning. - **Python**: Added [support for continuous Language Identification (LID)](/azure/cognitive-services/speech-service/how-to-automatic-language-detection?pivots=programming-language-python) on the existing `SpeechRecognizer` and `TranslationRecognizer` objects. - **Python**: Added a [new Python object](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.sourcelanguagerecognizer?view=azure-python) named `SourceLanguageRecognizer` to do one-time or continuous LID (without recognition or translation). -- Support AAD authentication and User assigned Managed Identity-- **JavaScript** `getActivationPhrasesAsync` API added to `VoiceProfileClient` class for receiving a list of valid activation phrases in speaker recognition enrollment phase for independent recognition scenarios. -- **JavaScript** `VoiceProfileClient`'s `enrollProfileAsync` API is now async awaitable. See this independent identification code for example usage.
+- **JavaScript**: `getActivationPhrasesAsync` API added to `VoiceProfileClient` class for receiving a list of valid activation phrases in speaker recognition enrollment phase for independent recognition scenarios.
+- **JavaScript**: `VoiceProfileClient`'s `enrollProfileAsync` API is now async awaitable. See [this independent identification code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/quickstart/javascript/node/speaker-recognition/identification/independent-identification.js) for example usage.
#### Improvements -- **AutoCloseable** support added to many Java objects. Now the try-with-resources model is supported to release resources. See [this sample that uses try-with-resources](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/quickstart/java/jre/intent-recognition/src/speechsdk/quickstart/Main.java#L28). Also see the Oracle Java documentation tutorial for [The try-with-resources Statement](https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html) to learn about this pattern.-- SDK size reductions.
+- **Java**: **AutoCloseable** support added to many Java objects. Now the try-with-resources model is supported to release resources. See [this sample that uses try-with-resources](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/quickstart/java/jre/intent-recognition/src/speechsdk/quickstart/Main.java#L28). Also see the Oracle Java documentation tutorial for [The try-with-resources Statement](https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html) to learn about this pattern.
+- **Disk footprint** has been significantly reduced for many platforms and architectures. Examples for the `Microsoft.CognitiveServices.Speech.core` binary: x64 Linux is 475KB smaller (8.0% reduction); ARM64 Windows UWP is 464KB smaller (11.5% reduction); x86 Windows is 343KB smaller (17.5% reduction); and x64 Windows is 451KB smaller (19.4% reduction).
#### Bug fixes - **Java**: Fixed synthesis error when the synthesis text contains surrogate characters. Details [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/1118). - **JavaScript**: Browser microphone audio processing now uses `AudioWorkletNode` instead of deprecated `ScriptProcessorNode`. Details [here](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/391). - **JavaScript**: Correctly keep conversations alive during long running conversation translation scenarios. Details [here](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/389).-- **JavaScript**: Fixed issue with recognizer reconnecting to a mediastream in continuous recognition. Details [here]- (https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/385).
+- **JavaScript**: Fixed issue with recognizer reconnecting to a mediastream in continuous recognition. Details [here](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/385).
- **JavaScript**: Fixed issue with recognizer reconnecting to a pushStream in continuous recognition. Details [here](https://github.com/microsoft/cognitive-services-speech-sdk-js/pull/399). - **JavaScript**: Corrected word level offset calculation in detailed recognition results. Details [here](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/394). #### Samples--Java quickstart samples updated [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/java).--JavaScript speaker recognition samples updated to show new usage of `enrollProfileAsync()`. See samples [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/js/node).+
+- Java quickstart samples updated [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/java).
+- JavaScript speaker recognition samples updated to show new usage of `enrollProfileAsync()`. See samples [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/js/node).
## Text-to-speech 2021-June release
cosmos-db Managed Identity Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/managed-identity-based-authentication.md
The List Keys API returns the `DatabaseAccountListKeysResult` object. This type
```csharp namespace Monitor {
- public class DatabaseAccountListKeysResult
- {
- public string primaryMasterKey {get;set;}
- public string primaryReadonlyMasterKey {get; set;}
- public string secondaryMasterKey {get; set;}
- public string secondaryReadonlyMasterKey {get;set;}
- }
+ public class DatabaseAccountListKeysResult
+ {
+ public string primaryMasterKey { get; set; }
+ public string primaryReadonlyMasterKey { get; set; }
+ public string secondaryMasterKey { get; set; }
+ public string secondaryReadonlyMasterKey { get; set; }
+ }
} ```
namespace Monitor
public string id { get; set; } = Guid.NewGuid().ToString(); public DateTime RecordTime { get; set; } public int Temperature { get; set; }- } } ```
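
As an illustrative sketch (an assumption about one possible approach, not the article's own sample), the List Keys call can be made with a managed identity by acquiring an Azure Resource Manager token through the Azure.Identity library and deserializing the response into the `DatabaseAccountListKeysResult` type shown above. The subscription, resource group, and account names are placeholders, and the `api-version` value is an assumed valid version:

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text.Json;
using System.Threading.Tasks;
using Azure.Core;
using Azure.Identity;

namespace Monitor
{
    public static class KeyClient
    {
        // DefaultAzureCredential resolves to the managed identity when running in Azure.
        private static readonly DefaultAzureCredential Credential = new DefaultAzureCredential();

        public static async Task<DatabaseAccountListKeysResult> ListKeysAsync()
        {
            AccessToken token = await Credential.GetTokenAsync(
                new TokenRequestContext(new[] { "https://management.azure.com/.default" }));

            var endpoint = "https://management.azure.com/subscriptions/<subscription-id>" +
                "/resourceGroups/<resource-group>/providers/Microsoft.DocumentDB" +
                "/databaseAccounts/<account-name>/listKeys?api-version=2021-04-15";

            using var client = new HttpClient();
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", token.Token);

            // List Keys is a POST with an empty body.
            HttpResponseMessage response = await client.PostAsync(endpoint, content: null);
            response.EnsureSuccessStatusCode();

            string json = await response.Content.ReadAsStringAsync();

            // Property names in the response match the lowercase names on the type above.
            return JsonSerializer.Deserialize<DatabaseAccountListKeysResult>(json);
        }
    }
}
```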
data-factory Connector Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-blob-storage.md
Previously updated : 06/17/2021 Last updated : 07/19/2021 # Copy and transform data in Azure Blob storage by using Azure Data Factory
This Blob storage connector supports the following authentication types. See the
- [Account key authentication](#account-key-authentication)
- [Shared access signature authentication](#shared-access-signature-authentication)
- [Service principal authentication](#service-principal-authentication)
-- [Managed identities for Azure resource authentication](#managed-identity)
+- [System-assigned managed identity authentication](#managed-identity)
+- [User-assigned managed identity authentication](#user-assigned-managed-identity-authentication)
>[!NOTE]
>- If you want to use the public Azure integration runtime to connect to your Blob storage by leveraging the **Allow trusted Microsoft services to access this storage account** option enabled on Azure Storage firewall, you must use [managed identity authentication](#managed-identity).
These properties are supported for an Azure Blob storage linked service:
} ```
-### <a name="managed-identity"></a> Managed identities for Azure resource authentication
+### <a name="managed-identity"></a> System-assigned managed identity authentication
-A data factory can be associated with a [managed identity for Azure resources](data-factory-service-identity.md), which represents this specific data factory. You can directly use this managed identity for Blob storage authentication, which is similar to using your own service principal. It allows this designated factory to access and copy data from or to Blob storage.
+A data factory can be associated with a [system-assigned managed identity for Azure resources](data-factory-service-identity.md#system-assigned-managed-identity), which represents this specific data factory. You can directly use this system-assigned managed identity for Blob storage authentication, which is similar to using your own service principal. It allows this designated factory to access and copy data from or to Blob storage. To learn more about managed identities for Azure resources, see [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
For general information about Azure Storage authentication, see [Authenticate access to Azure Storage using Azure Active Directory](../storage/common/storage-auth-aad.md). To use managed identities for Azure resource authentication, follow these steps:
-1. [Retrieve Data Factory managed identity information](data-factory-service-identity.md#retrieve-managed-identity) by copying the value of the managed identity object ID generated along with your factory.
+1. [Retrieve Data Factory system-assigned managed identity information](data-factory-service-identity.md#retrieve-managed-identity) by copying the value of the system-assigned managed identity object ID generated along with your factory.
2. Grant the managed identity permission in Azure Blob storage. For more information on the roles, see [Use the Azure portal to assign an Azure role for access to blob and queue data](../storage/blobs/assign-azure-role-data-access.md). - **As source**, in **Access control (IAM)**, grant at least the **Storage Blob Data Reader** role. - **As sink**, in **Access control (IAM)**, grant at least the **Storage Blob Data Contributor** role.
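As a hedged sketch (the article performs this step in the portal), the equivalent role assignment with Az PowerShell might look like the following; the object ID and scope values are placeholders:

```powershell
# Grant the factory's system-assigned managed identity the
# Storage Blob Data Reader role at storage-account scope.
New-AzRoleAssignment `
    -ObjectId "<managed identity object ID>" `
    -RoleDefinitionName "Storage Blob Data Reader" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<account-name>"
```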
->[!IMPORTANT]
->If you use PolyBase or COPY statement to load data from Blob storage (as a source or as staging) into Azure Synapse Analytics, when you use managed identity authentication for Blob storage, make sure you also follow steps 1 to 3 in [this guidance](../azure-sql/database/vnet-service-endpoint-rule-overview.md#impact-of-using-virtual-network-service-endpoints-with-azure-storage). Those steps will register your server with Azure AD and assign the Storage Blob Data Contributor role to your server. Data Factory handles the rest. If you configure Blob storage with an Azure Virtual Network endpoint, you also need to have **Allow trusted Microsoft services to access this storage account** turned on under Azure Storage account **Firewalls and Virtual networks** settings menu as required by Synapse.
- These properties are supported for an Azure Blob storage linked service: | Property | Description | Required |
These properties are supported for an Azure Blob storage linked service:
| accountKind | Specify the kind of your storage account. Allowed values are: **Storage** (general purpose v1), **StorageV2** (general purpose v2), **BlobStorage**, or **BlockBlobStorage**. <br/><br/>When using the Azure Blob linked service in data flow, managed identity or service principal authentication is not supported when the account kind is empty or "Storage". Specify the proper account kind, choose a different authentication, or upgrade your storage account to general purpose v2. | No | | connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or the self-hosted integration runtime (if your data store is in a private network). If this property isn't specified, the service uses the default Azure integration runtime. | No |
-> [!NOTE]
->
-> - If your blob account enables [soft delete](../storage/blobs/soft-delete-blob-overview.md), managed identity authentication is not supported in Data Flow.
-> - If you access the blob storage through private endpoint using Data Flow, note when managed identity authentication is used Data Flow connects to the ADLS Gen2 endpoint instead of Blob endpoint . Make sure you create the corresponding private endpoint in ADF to enable access.
+**Example:**
-> [!NOTE]
-> Managed identities for Azure resource authentication are supported only by the "AzureBlobStorage" type linked service, not the previous "AzureStorage" type linked service.
+```json
+{
+ "name": "AzureBlobStorageLinkedService",
+ "properties": {
+ "type": "AzureBlobStorage",
+ "typeProperties": {
+ "serviceEndpoint": "https://<accountName>.blob.core.windows.net/",
+ "accountKind": "StorageV2"
+ },
+ "connectVia": {
+ "referenceName": "<name of Integration Runtime>",
+ "type": "IntegrationRuntimeReference"
+ }
+ }
+}
+```
+
+### User-assigned managed identity authentication
+A data factory can be assigned with one or multiple [user-assigned managed identities](data-factory-service-identity.md#user-assigned-managed-identity). You can use this user-assigned managed identity for Blob storage authentication, which allows it to access and copy data from or to Blob storage. To learn more about managed identities for Azure resources, see [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
+
+For general information about Azure storage authentication, see [Authenticate access to Azure Storage using Azure Active Directory](../storage/common/storage-auth-aad.md). To use user-assigned managed identity authentication, follow these steps:
+
+1. [Create one or multiple user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md) and grant permission in Azure Blob storage. For more information on the roles, see [Use the Azure portal to assign an Azure role for access to blob and queue data](../storage/common/storage-auth-aad-rbac-portal.md).
+
+ - **As source**, in **Access control (IAM)**, grant at least the **Storage Blob Data Reader** role.
+ - **As sink**, in **Access control (IAM)**, grant at least the **Storage Blob Data Contributor** role.
+
+2. Assign one or multiple user-assigned managed identities to your data factory and [create credentials](data-factory-service-identity.md#credentials) for each user-assigned managed identity.
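For reference, a credential definition is a small JSON artifact in the factory; a sketch might look like the following, where the `ManagedIdentity` type and `resourceId` shape follow the credentials article and all names are placeholders:

```json
{
    "name": "credential1",
    "properties": {
        "type": "ManagedIdentity",
        "typeProperties": {
            "resourceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>"
        }
    }
}
```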
+
+These properties are supported for an Azure Blob storage linked service:
+
+| Property | Description | Required |
+|: |: |: |
+| type | The **type** property must be set to **AzureBlobStorage**. | Yes |
+| serviceEndpoint | Specify the Azure Blob storage service endpoint with the pattern of `https://<accountName>.blob.core.windows.net/`. | Yes |
+| accountKind | Specify the kind of your storage account. Allowed values are: **Storage** (general purpose v1), **StorageV2** (general purpose v2), **BlobStorage**, or **BlockBlobStorage**. <br/><br/>When using the Azure Blob linked service in data flow, managed identity or service principal authentication is not supported when the account kind is empty or "Storage". Specify the proper account kind, choose a different authentication, or upgrade your storage account to general purpose v2. | No |
+| credentials | Specify the user-assigned managed identity as the credential object. | Yes |
+| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or the self-hosted integration runtime (if your data store is in a private network). If this property isn't specified, the service uses the default Azure integration runtime. | No |
**Example:**
These properties are supported for an Azure Blob storage linked service:
"type": "AzureBlobStorage", "typeProperties": { "serviceEndpoint": "https://<accountName>.blob.core.windows.net/",
- "accountKind": "StorageV2"
+ "accountKind": "StorageV2",
+ "credential": {
+ "referenceName": "credential1",
+ "type": "CredentialReference"
+ }
}, "connectVia": { "referenceName": "<name of Integration Runtime>",
These properties are supported for an Azure Blob storage linked service:
} ```
+>[!IMPORTANT]
+>If you use PolyBase or the COPY statement to load data from Blob storage (as a source or as staging) into Azure Synapse Analytics, and you use managed identity authentication for Blob storage, make sure you also follow steps 1 to 3 in [this guidance](../azure-sql/database/vnet-service-endpoint-rule-overview.md#impact-of-using-virtual-network-service-endpoints-with-azure-storage). Those steps will register your server with Azure AD and assign the Storage Blob Data Contributor role to your server. Data Factory handles the rest. If you configure Blob storage with an Azure Virtual Network endpoint, you also need to have **Allow trusted Microsoft services to access this storage account** turned on under the Azure Storage account **Firewalls and Virtual networks** settings menu, as required by Synapse.
+
+> [!NOTE]
+>
+> - If your blob account enables [soft delete](../storage/blobs/soft-delete-blob-overview.md), system-assigned/user-assigned managed identity authentication is not supported in Data Flow.
+> - If you access the blob storage through a private endpoint using Data Flow, note that when system-assigned/user-assigned managed identity authentication is used, Data Flow connects to the ADLS Gen2 endpoint instead of the Blob endpoint. Make sure you create the corresponding private endpoint in ADF to enable access.
+
+> [!NOTE]
+> System-assigned/user-assigned managed identity authentication is supported only by the "AzureBlobStorage" type linked service, not the previous "AzureStorage" type linked service.
+ ## Dataset properties For a full list of sections and properties available for defining datasets, see the [Datasets](concepts-datasets-linked-services.md) article.
data-factory Connector Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-data-explorer.md
Previously updated : 03/24/2020 Last updated : 07/19/2020 # Copy data to or from Azure Data Explorer by using Azure Data Factory
The following sections provide details about properties that are used to define
The Azure Data Explorer connector supports the following authentication types. See the corresponding sections for details: - [Service principal authentication](#service-principal-authentication)-- [Managed identities for Azure resources authentication](#managed-identity)
+- [System-assigned managed identity authentication](#managed-identity)
+- [User-assigned managed identity authentication](#user-assigned-managed-identity-authentication)
### Service principal authentication
The following properties are supported for the Azure Data Explorer linked servic
} ```
-### <a name="managed-identity"></a> Managed identities for Azure resources authentication
+### <a name="managed-identity"></a> System-assigned managed identity authentication
-To use managed identities for Azure resource authentication, follow these steps to grant permissions:
+To learn more about managed identities for Azure resources, see [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
+
+To use system-assigned managed identity authentication, follow these steps to grant permissions:
1. [Retrieve the Data Factory managed identity information](data-factory-service-identity.md#retrieve-managed-identity) by copying the value of the **managed identity object ID** generated along with your factory.
The following properties are supported for the Azure Data Explorer linked servic
| database | Name of database. | Yes | | connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is in a private network. If not specified, the default Azure integration runtime is used. |No |
-**Example: using managed identity authentication**
+**Example: using system-assigned managed identity authentication**
+
+```json
+{
+ "name": "AzureDataExplorerLinkedService",
+ "properties": {
+ "type": "AzureDataExplorer",
+ "typeProperties": {
+ "endpoint": "https://<clusterName>.<regionName>.kusto.windows.net ",
+ "database": "<database name>",
+ }
+ }
+}
+```
+
+### User-assigned managed identity authentication
+To learn more about managed identities for Azure resources, see [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
+To use user-assigned managed identity authentication, follow these steps:
+
+1. [Create one or multiple user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md) and grant permission in Azure Data Explorer. See [Manage Azure Data Explorer database permissions](/azure/data-explorer/manage-database-permissions) for detailed information about roles, permissions, and permission management. In general, you must:
+
+ - **As source**, grant at least the **Database viewer** role to your database
+ - **As sink**, grant at least the **Database ingestor** role to your database
+
+2. Assign one or multiple user-assigned managed identities to your data factory and [create credentials](data-factory-service-identity.md#credentials) for each user-assigned managed identity.
+
+The following properties are supported for the Azure Data Explorer linked service:
+
+| Property | Description | Required |
+|: |: |: |
+| type | The **type** property must be set to **AzureDataExplorer**. | Yes |
+| endpoint | Endpoint URL of the Azure Data Explorer cluster, with the format as `https://<clusterName>.<regionName>.kusto.windows.net`. | Yes |
+| database | Name of database. | Yes |
+| credentials | Specify the user-assigned managed identity as the credential object. | Yes |
+| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is in a private network. If not specified, the default Azure integration runtime is used. |No |
+
+**Example: using user-assigned managed identity authentication**
```json { "name": "AzureDataExplorerLinkedService",
The following properties are supported for the Azure Data Explorer linked servic
"typeProperties": { "endpoint": "https://<clusterName>.<regionName>.kusto.windows.net ", "database": "<database name>",
+ "credential": {
+ "referenceName": "credential1",
+ "type": "CredentialReference"
+ }
} } }
data-factory Connector Azure Data Lake Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-data-lake-storage.md
Previously updated : 06/17/2021 Last updated : 07/19/2021 # Copy and transform data in Azure Data Lake Storage Gen2 using Azure Data Factory
The Azure Data Lake Storage Gen2 connector supports the following authentication
- [Account key authentication](#account-key-authentication)
- [Service principal authentication](#service-principal-authentication)
-- [Managed identities for Azure resources authentication](#managed-identity)
-
+- [System-assigned managed identity authentication](#managed-identity)
+- [User-assigned managed identity authentication](#user-assigned-managed-identity-authentication)
>[!NOTE]
>- If you want to use the public Azure integration runtime to connect to the Data Lake Storage Gen2 by leveraging the **Allow trusted Microsoft services to access this storage account** option enabled on Azure Storage firewall, you must use [managed identity authentication](#managed-identity).
>- When you use PolyBase or COPY statement to load data into Azure Synapse Analytics, if your source or staging Data Lake Storage Gen2 is configured with an Azure Virtual Network endpoint, you must use managed identity authentication as required by Synapse. See the [managed identity authentication](#managed-identity) section with more configuration prerequisites.
You can also store service principal key in Azure Key Vault.
} ```
-### <a name="managed-identity"></a> Managed identities for Azure resources authentication
+### <a name="managed-identity"></a> System-assigned managed identity authentication
-A data factory can be associated with a [managed identity for Azure resources](data-factory-service-identity.md), which represents this specific data factory. You can directly use this managed identity for Data Lake Storage Gen2 authentication, similar to using your own service principal. It allows this designated factory to access and copy data to or from your Data Lake Storage Gen2.
+A data factory can be associated with a [system-assigned managed identity](data-factory-service-identity.md), which represents this specific data factory. You can directly use this system-assigned managed identity for Data Lake Storage Gen2 authentication, similar to using your own service principal. It allows this designated factory to access and copy data to or from your Data Lake Storage Gen2.
-To use managed identities for Azure resource authentication, follow these steps.
+To use system-assigned managed identity authentication, follow these steps.
-1. [Retrieve the Data Factory managed identity information](data-factory-service-identity.md#retrieve-managed-identity) by copying the value of the **managed identity object ID** generated along with your factory.
+1. [Retrieve the Data Factory system-assigned managed identity information](data-factory-service-identity.md#retrieve-managed-identity) by copying the value of the **managed identity object ID** generated along with your factory.
-2. Grant the managed identity proper permission. See examples on how permission works in Data Lake Storage Gen2 from [Access control lists on files and directories](../storage/blobs/data-lake-storage-access-control.md#access-control-lists-on-files-and-directories).
+2. Grant the system-assigned managed identity proper permission. See examples on how permission works in Data Lake Storage Gen2 from [Access control lists on files and directories](../storage/blobs/data-lake-storage-access-control.md#access-control-lists-on-files-and-directories).
- **As source**: In Storage Explorer, grant at least **Execute** permission for ALL upstream folders and the file system, along with **Read** permission for the files to copy. Alternatively, in Access control (IAM), grant at least the **Storage Blob Data Reader** role. - **As sink**: In Storage Explorer, grant at least **Execute** permission for ALL upstream folders and the file system, along with **Write** permission for the sink folder. Alternatively, in Access control (IAM), grant at least the **Storage Blob Data Contributor** role.
->[!NOTE]
->If you use Data Factory UI to author and the managed identity is not set with "Storage Blob Data Reader/Contributor" role in IAM, when doing test connection or browsing/navigating folders, choose "Test connection to file path" or "Browse from specified path", and specify a path with **Read + Execute** permission to continue.
+These properties are supported for the linked service:
->[!IMPORTANT]
->If you use PolyBase or COPY statement to load data from Data Lake Storage Gen2 into Azure Synapse Analytics, when you use managed identity authentication for Data Lake Storage Gen2, make sure you also follow steps 1 to 3 in [this guidance](../azure-sql/database/vnet-service-endpoint-rule-overview.md#impact-of-using-virtual-network-service-endpoints-with-azure-storage). Those steps will register your server with Azure AD and assign the Storage Blob Data Contributor role to your server. Data Factory handles the rest. If you configure Blob storage with an Azure Virtual Network endpoint, you also need to have **Allow trusted Microsoft services to access this storage account** turned on under Azure Storage account **Firewalls and Virtual networks** settings menu as required by Synapse.
+| Property | Description | Required |
+|: |: |: |
+| type | The type property must be set to **AzureBlobFS**. |Yes |
+| url | Endpoint for Data Lake Storage Gen2 with the pattern of `https://<accountname>.dfs.core.windows.net`. | Yes |
+| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is in a private network. If not specified, the default Azure integration runtime is used. |No |
+
+**Example:**
+
+```json
+{
+ "name": "AzureDataLakeStorageGen2LinkedService",
+ "properties": {
+ "type": "AzureBlobFS",
+ "typeProperties": {
+ "url": "https://<accountname>.dfs.core.windows.net",
+ },
+ "connectVia": {
+ "referenceName": "<name of Integration Runtime>",
+ "type": "IntegrationRuntimeReference"
+ }
+ }
+}
+```
+
+### User-assigned managed identity authentication
+
+A data factory can be assigned with one or multiple [user-assigned managed identities](data-factory-service-identity.md#user-assigned-managed-identity). You can use this user-assigned managed identity for Data Lake Storage Gen2 authentication, which allows it to access and copy data from or to Data Lake Storage Gen2. To learn more about managed identities for Azure resources, see [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
+
+To use user-assigned managed identity authentication, follow these steps:
+
+1. [Create one or multiple user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md) and grant access to Azure Data Lake Storage Gen2. See examples on how permission works in Data Lake Storage Gen2 from [Access control lists on files and directories](../storage/blobs/data-lake-storage-access-control.md#access-control-lists-on-files-and-directories).
+
+ - **As source**: In Storage Explorer, grant at least **Execute** permission for ALL upstream folders and the file system, along with **Read** permission for the files to copy. Alternatively, in Access control (IAM), grant at least the **Storage Blob Data Reader** role.
+ - **As sink**: In Storage Explorer, grant at least **Execute** permission for ALL upstream folders and the file system, along with **Write** permission for the sink folder. Alternatively, in Access control (IAM), grant at least the **Storage Blob Data Contributor** role.
+
+2. Assign one or multiple user-assigned managed identities to your data factory and [create credentials](data-factory-service-identity.md#credentials) for each user-assigned managed identity.
These properties are supported for the linked service:
These properties are supported for the linked service:
|: |: |: | | type | The type property must be set to **AzureBlobFS**. |Yes | | url | Endpoint for Data Lake Storage Gen2 with the pattern of `https://<accountname>.dfs.core.windows.net`. | Yes |
+| credentials | Specify the user-assigned managed identity as the credential object. | Yes |
| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is in a private network. If not specified, the default Azure integration runtime is used. |No | **Example:**
These properties are supported for the linked service:
"type": "AzureBlobFS", "typeProperties": { "url": "https://<accountname>.dfs.core.windows.net",
- },
+ "credential": {
+ "referenceName": "credential1",
+ "type": "CredentialReference"
+ }
+ },
"connectVia": { "referenceName": "<name of Integration Runtime>", "type": "IntegrationRuntimeReference"
These properties are supported for the linked service:
} ```
+>[!NOTE]
+>If you use the Data Factory UI to author, and the managed identity is not granted the "Storage Blob Data Reader/Contributor" role in IAM, then when you test the connection or browse/navigate folders, choose "Test connection to file path" or "Browse from specified path", and specify a path with **Read + Execute** permission to continue.
+
+>[!IMPORTANT]
+>If you use PolyBase or the COPY statement to load data from Data Lake Storage Gen2 into Azure Synapse Analytics, and you use managed identity authentication for Data Lake Storage Gen2, make sure you also follow steps 1 to 3 in [this guidance](../azure-sql/database/vnet-service-endpoint-rule-overview.md#impact-of-using-virtual-network-service-endpoints-with-azure-storage). Those steps will register your server with Azure AD and assign the Storage Blob Data Contributor role to your server. Data Factory handles the rest. If you configure Blob storage with an Azure Virtual Network endpoint, you also need to have **Allow trusted Microsoft services to access this storage account** turned on under the Azure Storage account **Firewalls and Virtual networks** settings menu, as required by Synapse.
+ ## Dataset properties For a full list of sections and properties available for defining datasets, see [Datasets](concepts-datasets-linked-services.md).
data-factory Connector Azure Data Lake Store https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-data-lake-store.md
Previously updated : 03/17/2021 Last updated : 07/19/2021 # Copy data to or from Azure Data Lake Storage Gen1 using Azure Data Factory
The following properties are supported:
} ```
-### <a name="managed-identity"></a> Use managed identities for Azure resources authentication
+### <a name="managed-identity"></a> Use system-assigned managed identity authentication
-A data factory can be associated with a [managed identity for Azure resources](data-factory-service-identity.md), which represents this specific data factory. You can directly use this managed identity for Data Lake Store authentication, similar to using your own service principal. It allows this designated factory to access and copy data to or from Data Lake Store.
+A data factory can be associated with a [system-assigned managed identity](data-factory-service-identity.md), which represents this specific data factory. You can directly use this system-assigned managed identity for Data Lake Store authentication, similar to using your own service principal. It allows this designated factory to access and copy data to or from Data Lake Store.
-To use managed identities for Azure resources authentication, follow these steps.
+To use system-assigned managed identity authentication, follow these steps.
-1. [Retrieve the data factory managed identity information](data-factory-service-identity.md#retrieve-managed-identity) by copying the value of the "Service Identity Application ID" generated along with your factory.
+1. [Retrieve the data factory system-assigned managed identity information](data-factory-service-identity.md#retrieve-managed-identity) by copying the value of the "Service Identity Application ID" generated along with your factory.
-2. Grant the managed identity access to Data Lake Store. See examples on how permission works in Data Lake Storage Gen1 from [Access control in Azure Data Lake Storage Gen1](../data-lake-store/data-lake-store-access-control.md#common-scenarios-related-to-permissions).
+2. Grant the system-assigned managed identity access to Data Lake Store. See examples on how permission works in Data Lake Storage Gen1 from [Access control in Azure Data Lake Storage Gen1](../data-lake-store/data-lake-store-access-control.md#common-scenarios-related-to-permissions).
- **As source**: In **Data explorer** > **Access**, grant at least **Execute** permission for ALL upstream folders including the root, along with **Read** permission for the files to copy. You can choose to add to **This folder and all children** for recursive, and add as **an access permission and a default permission entry**. There's no requirement on account-level access control (IAM). - **As sink**: In **Data explorer** > **Access**, grant at least **Execute** permission for ALL upstream folders including the root, along with **Write** permission for the sink folder. You can choose to add to **This folder and all children** for recursive, and add as **an access permission and a default permission entry**.
In Azure Data Factory, you don't need to specify any properties besides the gene
} ```
+### Use user-assigned managed identity authentication
+
+A data factory can be assigned with one or multiple [user-assigned managed identities](data-factory-service-identity.md). You can use this user-assigned managed identity for Data Lake Store authentication, which allows it to access and copy data from or to Data Lake Store. To learn more about managed identities for Azure resources, see [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
+
+To use user-assigned managed identity authentication, follow these steps:
+
+1. [Create one or multiple user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md) and grant access to Azure Data Lake. See examples on how permission works in Data Lake Storage Gen1 from [Access control in Azure Data Lake Storage Gen1](../data-lake-store/data-lake-store-access-control.md#common-scenarios-related-to-permissions).
+
+ - **As source**: In **Data explorer** > **Access**, grant at least **Execute** permission for ALL upstream folders including the root, along with **Read** permission for the files to copy. You can choose to add to **This folder and all children** for recursive, and add as **an access permission and a default permission entry**. There's no requirement on account-level access control (IAM).
+ - **As sink**: In **Data explorer** > **Access**, grant at least **Execute** permission for ALL upstream folders including the root, along with **Write** permission for the sink folder. You can choose to add to **This folder and all children** for recursive, and add as **an access permission and a default permission entry**.
+
+2. Assign one or multiple user-assigned managed identities to your data factory and [create credentials](data-factory-service-identity.md#credentials) for each user-assigned managed identity.
+
+The following property is supported:
+
+| Property | Description | Required |
+|: |: |: |
+| credentials | Specify the user-assigned managed identity as the credential object. | Yes |
+
+**Example:**
+
+```json
+{
+ "name": "AzureDataLakeStoreLinkedService",
+ "properties": {
+ "type": "AzureDataLakeStore",
+ "typeProperties": {
+ "dataLakeStoreUri": "https://<accountname>.azuredatalakestore.net/webhdfs/v1",
+ "subscriptionId": "<subscription of ADLS>",
+ "resourceGroupName": "<resource group of ADLS>",
+ "credential": {
+ "referenceName": "credential1",
+ "type": "CredentialReference"
+ },
+ "connectVia": {
+ "referenceName": "<name of Integration Runtime>",
+ "type": "IntegrationRuntimeReference"
+ }
+ }
+}
+```
+ ## Dataset properties For a full list of sections and properties available for defining datasets, see the [Datasets](concepts-datasets-linked-services.md) article.
data-factory Connector Rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-rest.md
description: Learn how to copy data from a cloud or on-premises REST source to s
Previously updated : 03/16/2021 Last updated : 07/19/2021 # Copy data from and to a REST endpoint by using Azure Data Factory
Set the **authenticationType** property to **AadServicePrincipal**. In addition
} ```
-### <a name="managed-identity"></a> Use managed identities for Azure resources authentication
+### <a name="managed-identity"></a> Use system-assigned managed identity authentication
Set the **authenticationType** property to **ManagedServiceIdentity**. In addition to the generic properties that are described in the preceding section, specify the following properties:
Set the **authenticationType** property to **ManagedServiceIdentity**. In additi
} ```
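For comparison, a minimal system-assigned variant (a sketch that mirrors the user-assigned example below, minus the credential object) would look like:

```json
{
    "name": "RESTLinkedService",
    "properties": {
        "type": "RestService",
        "typeProperties": {
            "url": "<REST endpoint e.g. https://www.example.com/>",
            "authenticationType": "ManagedServiceIdentity",
            "aadResourceId": "<AAD resource URL e.g. https://management.core.windows.net>"
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```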
+### Use user-assigned managed identity authentication
+Set the **authenticationType** property to **ManagedServiceIdentity**. In addition to the generic properties that are described in the preceding section, specify the following properties:
+
+| Property | Description | Required |
+|: |: |: |
+| aadResourceId | Specify the AAD resource for which you are requesting authorization, for example, `https://management.core.windows.net`.| Yes |
+| credentials | Specify the user-assigned managed identity as the credential object. | Yes |
+
+**Example**
+
+```json
+{
+ "name": "RESTLinkedService",
+ "properties": {
+ "type": "RestService",
+ "typeProperties": {
+ "url": "<REST endpoint e.g. https://www.example.com/>",
+ "authenticationType": "ManagedServiceIdentity",
+ "aadResourceId": "<AAD resource URL e.g. https://management.core.windows.net>",
+ "credential": {
+ "referenceName": "credential1",
+ "type": "CredentialReference"
+ }
+ },
+ "connectVia": {
+ "referenceName": "<name of Integration Runtime>",
+ "type": "IntegrationRuntimeReference"
+ }
+ }
+}
+```
### Using authentication headers

In addition, you can configure request headers for authentication along with the built-in authentication types.
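As a sketch of that option, the linked service below assumes an API that expects an `x-api-key` header on top of anonymous authentication; the `authHeaders` property name and shape follow the REST connector reference, and the header name and key are placeholders:

```json
{
    "name": "RESTLinkedService",
    "properties": {
        "type": "RestService",
        "typeProperties": {
            "url": "<REST endpoint e.g. https://www.example.com/>",
            "authenticationType": "Anonymous",
            "authHeaders": {
                "x-api-key": {
                    "type": "SecureString",
                    "value": "<API key>"
                }
            }
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```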
data-factory Control Flow Expression Language Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-expression-language-functions.md
The below example shows a complex example that references a deep sub-field of ac
`@activity('*activityName*').output.*subfield1*.*subfield2*[pipeline().parameters.*subfield3*].*subfield4*`
+Creating files dynamically and naming them is a common pattern. Let us explore a few dynamic file naming examples.
+
+ 1. Append Date to a filename: `@concat('Test_', formatDateTime(utcnow(), 'yyyy-dd-MM'))`
+
 2. Append DateTime in the customer's time zone: `@concat('Test_', convertFromUtc(utcnow(), 'Pacific Standard Time'))`
+
 3. Append Trigger Time: `@concat('Test_', pipeline().TriggerTime)`
+
 4. Output a custom filename in a Mapping Data Flow when outputting to a single file with date: `'Test_' + toString(currentDate()) + '.csv'`
+
+In the above cases, four dynamic filenames are created, each starting with `Test_`. For example, the first expression evaluates to `Test_2021-19-07` for a UTC run date of July 19, 2021 (note the `yyyy-dd-MM` ordering).
+ ### Dynamic content editor Dynamic content editor automatically escapes characters in your content when you finish editing. For example, the following content in content editor is a string interpolation with two expression functions.
data-factory Managed Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/managed-virtual-network-private-endpoint.md
Previously updated : 06/16/2021 Last updated : 07/20/2021 # Azure Data Factory Managed Virtual Network (preview)
To access on premises data sources from managed Virtual Network using Private En
- **Test connection** operation for Linked Service of Azure Key Vault only validates the URL format, but doesn't do any network operation. - The column **Using private endpoint** is always shown as blank even if you create Private Endpoint for Azure Key Vault.
+### Linked Service creation of Azure HDI
+- The column **Using private endpoint** is always shown as blank even if you create Private Endpoint for HDI using private link service and load balancer with port forwarding.
+ :::image type="content" source="./media/managed-vnet/akv-pe.png" alt-text="Private Endpoint for AKV"::: ## Next steps
databox-online Azure Stack Edge Gpu Deploy Compute Acceleration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-compute-acceleration.md
Previously updated : 02/22/2021 Last updated : 02/26/2021
[!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)]
-This article describes how to use compute acceleration on Azure Stack Edge devices when using Kubernetes deployments. The article applies to Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R devices.
+This article describes how to use compute acceleration on Azure Stack Edge devices when using Kubernetes deployments.
## About compute acceleration
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Custom Script Extension https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension.md
Previously updated : 02/22/2021 Last updated : 02/26/2021 #Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device using APIs so that I can efficiently manage my VMs.
[!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)]
-The Custom Script Extension downloads and runs scripts or commands on virtual machines running on your Azure Stack Edge Pro devices. This article details how to install and run the Custom Script Extension by using an Azure Resource Manager template.
-
-This article applies to Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R devices.
+The Custom Script Extension downloads and runs scripts or commands on virtual machines running on your Azure Stack Edge Pro devices. This article details how to install and run the Custom Script Extension by using an Azure Resource Manager template.
## About custom script extension
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-powershell.md
Previously updated : 06/25/2021 Last updated : 02/26/2021 #Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device. I want to use APIs so that I can efficiently manage my VMs.
[!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)]
-This article describes how to create and manage a virtual machine (VM) on your Azure Stack Edge device by using Azure PowerShell. The information applies to Azure Stack Edge Pro with GPU (graphical processing unit), Azure Stack Edge Pro R, and Azure Stack Edge Mini R devices.
+This article describes how to create and manage a virtual machine (VM) on your Azure Stack Edge device by using Azure PowerShell.
## VM deployment workflow
The high-level deployment workflow of the VM deployment is as follows:
1. Connect to the local Azure Resource Manager of your device. 1. Identify the built-in subscription on the device. 1. Bring your VM image.
-1. Create a resource group in the built-in subscription. The resource group will contain the VM and all the related resources.
+1. Create a resource group in the built-in subscription. The resource group will contain the VM and all the related resources.
1. Create a local storage account on the device to store the VHD that will be used to create a VM image. 1. Upload a Windows/Linux source image into the storage account to create a managed disk. 1. Use the managed disk to create a VM image.
databox-online Azure Stack Edge Gpu Manage Shares https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-manage-shares.md
Previously updated : 02/22/2021 Last updated : 02/26/2021 # Use Azure portal to manage shares on your Azure Stack Edge Pro [!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)]
-This article describes how to manage shares on your Azure Stack Edge Pro. You can manage the Azure Stack Edge Pro via the Azure portal or via the local web UI. Use the Azure portal to add, delete, refresh shares, or sync storage key for storage account associated with the shares. This article applies to Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R devices.
+This article describes how to manage shares on your Azure Stack Edge Pro. You can manage the Azure Stack Edge Pro via the Azure portal or via the local web UI. Use the Azure portal to add, delete, or refresh shares, or to sync the storage key for the storage account associated with the shares.
## About shares
databox-online Azure Stack Edge Gpu Proactive Log Collection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-proactive-log-collection.md
Previously updated : 02/23/2021 Last updated : 02/26/2021
[!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)]
-Proactive log collection gathers system health indicators on your Azure Stack Edge device to help you efficiently troubleshoot any device issues. Proactive log collection is enabled by default. This article describes what is logged, how Microsoft handles the data, and how to disable or enable proactive log collection.
-
-The information in this article applies to Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R devices.
+Proactive log collection gathers system health indicators on your Azure Stack Edge device to help you efficiently troubleshoot any device issues. Proactive log collection is enabled by default. This article describes what is logged, how Microsoft handles the data, and how to disable or enable proactive log collection.
## About proactive log collection
defender-for-iot Tutorial Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/tutorial-servicenow.md
description: In this tutorial, learn how to integrate ServiceNow with Azure Defe
Previously updated : 07/26/2021 Last updated : 07/27/2021
In this tutorial, you learn how to:
## Prerequisites
-### Software Requirements
+### Software requirements
Access to ServiceNow and Defender for IoT
To access the Defender for IoT application within ServiceNow, you will need to d
1. Search for `Defender for IoT` or `CyberX IoT/ICS Management`.
- :::image type="content" source="media/tutorial-servicenow/search-results.png" alt-text="Search for CyberX in the search bar.":::
+ :::image type="content" source="media/tutorial-servicenow/search-results.png" alt-text="Screenshot of the search screen in ServiceNow.":::
1. Select the application.
- :::image type="content" source="media/tutorial-servicenow/cyberx-app.png" alt-text="Select the application from the list.":::
+ :::image type="content" source="media/tutorial-servicenow/cyberx-app.png" alt-text="Screenshot of the search screen results.":::
1. Select **Request App**.
Configure Defender for IoT to push alert information to the ServiceNow tables. D
1. Select the :::image type="icon" source="media/tutorial-servicenow/plus-icon.png" border="false"::: button.
- :::image type="content" source="media/tutorial-servicenow/forwarding-rule.png" alt-text="Create Forwarding Rule":::
+ :::image type="content" source="media/tutorial-servicenow/forwarding-rule.png" alt-text="Screenshot of the Create Forwarding Rule window.":::
1. Add a rule name.
Configure Defender for IoT to push alert information to the ServiceNow tables. D
1. Enter the ServiceNow action parameters:
- :::image type="content" source="media/tutorial-servicenow/parameters.png" alt-text="Fill in the ServiceNow action parameters":::
+ :::image type="content" source="media/tutorial-servicenow/parameters.png" alt-text="Fill in the ServiceNow action parameters.":::
1. In the **Actions** pane, set the following parameters:
Configure Defender for IoT to push an extensive range of device attributes to th
1. Select **System Settings**, and then **ServiceNow** from the on-premises management console Integration section.
- :::image type="content" source="media/tutorial-servicenow/servicenow.png" alt-text="Select the ServiceNow button.":::
+ :::image type="content" source="media/tutorial-servicenow/servicenow.png" alt-text="Screenshot of the select the ServiceNow button.":::
1. Enter the following sync parameters in the ServiceNow Sync dialog box.
- :::image type="content" source="media/tutorial-servicenow/sync.png" alt-text="The ServiceNow sync dialog box.":::
+ :::image type="content" source="media/tutorial-servicenow/sync.png" alt-text="Screenshot of the ServiceNow sync dialog box.":::
| Parameter | Description |
|--|--|
Configure Defender for IoT to push an extensive range of device attributes to th
Verify that the on-premises management console is connected to the ServiceNow instance by reviewing the Last Sync date.
## Set up the integrations using an HTTPS proxy
This article describes the device attributes and alert information presented in
3. Navigate to **Inventory**, or **Alert**.
- [:::image type="content" source="media/tutorial-servicenow/alert-list.png" alt-text="Inventory or Alert":::](media/tutorial-servicenow/alert-list.png#lightbox)
+ [:::image type="content" source="media/tutorial-servicenow/alert-list.png" alt-text="Screenshot of the Inventory or Alert page.":::](media/tutorial-servicenow/alert-list.png#lightbox)
## View connected devices
To view connected devices:
1. Select a device, and then select the **Appliance** listed for that device.
- :::image type="content" source="media/tutorial-servicenow/appliance.png" alt-text="Select the desired appliance from the list.":::
+ :::image type="content" source="media/tutorial-servicenow/appliance.png" alt-text="Screenshot of the desired appliance from the list.":::
1. In the **Device Details** dialog box, select **Connected Devices**.
event-hubs Event Hubs Dotnet Standard Getstarted Send https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-dotnet-standard-getstarted-send.md
This section shows you how to create a .NET Core console application to send eve
// Create a batch of events using EventDataBatch eventBatch = await producerClient.CreateBatchAsync();
- for (int i = 1; i <= 3; i++)
+ for (int i = 1; i <= numOfEvents; i++)
{ if (! eventBatch.TryAdd(new EventData(Encoding.UTF8.GetBytes($"Event {i}")))) {
This section shows you how to create a .NET Core console application to send eve
{ // Use the producer client to send the batch of events to the event hub await producerClient.SendAsync(eventBatch);
- Console.WriteLine("A batch of 3 events has been published.");
+ Console.WriteLine($"A batch of {numOfEvents} events has been published.");
} finally {
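Pieced together, a minimal end-to-end sketch of the send path (assuming `numOfEvents` is defined, and with placeholder connection string and event hub name that are not part of this excerpt) looks like:

```csharp
// Minimal sketch: batch and send numOfEvents events, then dispose the client.
using System;
using System.Text;
using System.Threading.Tasks;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

class Program
{
    private const int numOfEvents = 3;

    static async Task Main()
    {
        var producerClient = new EventHubProducerClient("<connection string>", "<event hub name>");

        // Create a batch of events.
        using EventDataBatch eventBatch = await producerClient.CreateBatchAsync();

        for (int i = 1; i <= numOfEvents; i++)
        {
            if (!eventBatch.TryAdd(new EventData(Encoding.UTF8.GetBytes($"Event {i}"))))
            {
                // The batch is full; a real app would send it and start a new batch.
                throw new Exception($"Event {i} is too large for the batch and cannot be sent.");
            }
        }

        try
        {
            // Use the producer client to send the batch of events to the event hub.
            await producerClient.SendAsync(eventBatch);
            Console.WriteLine($"A batch of {numOfEvents} events has been published.");
        }
        finally
        {
            await producerClient.DisposeAsync();
        }
    }
}
```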
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations-providers.md
The following table shows connectivity locations and the service providers for e
| **Campinas** | [Ascenty](https://www.ascenty.com/en/data-centers-en/campinas/) | 3 | Brazil South | 10G, 100G | | | **Canberra** | [CDC](https://cdcdatacentres.com.au/about-us/) | 1 | Australia Central | 10G, 100G | CDC | | **Canberra2** | [CDC](https://cdcdatacentres.com.au/about-us/) | 1 | Australia Central 2| 10G, 100G | CDC, Equinix |
-| **Cape Town** | [Teraco CT1](https://www.teraco.co.za/data-centre-locations/cape-town/) | 3 | South Africa West | 10G | BCX, Internet Solutions - Cloud Connect, Liquid Telecom, Teraco |
+| **Cape Town** | [Teraco CT1](https://www.teraco.co.za/data-centre-locations/cape-town/) | 3 | South Africa West | 10G | BCX, Internet Solutions - Cloud Connect, Liquid Telecom, MTN Global Connect, Teraco |
| **Chennai** | Tata Communications | 2 | South India | 10G | BSNL, Global CloudXchange (GCX), SIFY, Tata Communications, VodafoneIdea | | **Chennai2** | Airtel | 2 | South India | 10G | Airtel | | **Chicago** | [Equinix CH1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/chicago-data-centers/ch1/) | 1 | North Central US | 10G, 100G | Aryaka Networks, AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Cologix, Colt, Comcast, Coresite, Equinix, InterCloud, Internet2, Level 3 Communications, Megaport, PacketFabric, PCCW Global Limited, Sprint, Telia Carrier, Verizon, Zayo |
The following table shows connectivity locations and the service providers for e
| **Hong Kong** | [Equinix HK1](https://www.equinix.com/data-centers/asia-pacific-colocation/hong-kong-colocation/hong-kong-data-centers/hk1) | 2 | East Asia | 10G | Aryaka Networks, British Telecom, CenturyLink Cloud Connect, Chief Telecom, China Telecom Global, China Unicom, Colt, Equinix, InterCloud, Megaport, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telia Carrier, Verizon | | **Hong Kong2** | [iAdvantage MEGA-i](https://www.iadvantage.net/index.php/locations/mega-i) | 2 | East Asia | 10G | China Mobile International, China Telecom Global, iAdvantage, Megaport, PCCW Global Limited, SingTel | | **Jakarta** | Telin, Telkom Indonesia | 4 | n/a | 10G | Telin |
-| **Johannesburg** | [Teraco JB1](https://www.teraco.co.za/data-centre-locations/johannesburg/#jb1) | 3 | South Africa North | 10G | BCX, British Telecom, Internet Solutions - Cloud Connect, Liquid Telecom, Orange, Teraco |
+| **Johannesburg** | [Teraco JB1](https://www.teraco.co.za/data-centre-locations/johannesburg/#jb1) | 3 | South Africa North | 10G | BCX, British Telecom, Internet Solutions - Cloud Connect, Liquid Telecom, MTN Global Connect, Orange, Teraco |
| **Kuala Lumpur** | [TIME dotCom Menara AIMS](https://www.time.com.my/enterprise/connectivity/direct-cloud) | 2 | n/a | n/a | TIME dotCom | | **Las Vegas** | [Switch LV](https://www.switch.com/las-vegas) | 1 | n/a | 10G, 100G | CenturyLink Cloud Connect, Megaport, PacketFabric | | **London** | [Equinix LD5](https://www.equinix.com/locations/europe-colocation/united-kingdom-colocation/london-data-centers/ld5/) | 1 | UK South | 10G, 100G | AT&T NetBond, British Telecom, CenturyLink, Colt, Equinix, euNetworks, InterCloud, Internet Solutions - Cloud Connect, Interxion, Jisc, Level 3 Communications, Megaport, MTN, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telehouse - KDDI, Telenor, Telia Carrier, Verizon, Vodafone, Zayo |
The following table shows connectivity locations and the service providers for e
| **Oslo** | [DigiPlex Ulven](https://www.digiplex.com/locations/oslo-datacentre) | 1 | Norway East | 10G, 100G | GlobalConnect, Megaport, Telenor, Telia Carrier | | **Paris** | [Interxion PAR5](https://www.interxion.com/Locations/paris/) | 1 | France Central | 10G, 100G | British Telecom, CenturyLink Cloud Connect, Colt, Equinix, Intercloud, Interxion, Jaguar Network, Megaport, Orange, Telia Carrier, Zayo | | **Perth** | [NextDC P1](https://www.nextdc.com/data-centres/p1-perth-data-centre) | 2 | n/a | 10G | Megaport, NextDC |
-| **Phoenix** | [EdgeConneX PHX01](https://www.edgeconnex.com/locations/north-america/phoenix-az/) | 1 | n/a | 10G, 100G | Megaport |
+| **Phoenix** | [EdgeConneX PHX01](https://www.edgeconnex.com/locations/north-america/phoenix-az/) | 1 | n/a | 10G, 100G | Megaport, Zayo |
| **Quebec City** | [Vantage](https://vantage-dc.com/data_centers/quebec-city-data-center-campus/) | 1 | Canada East | 10G, 100G | Bell Canada, Megaport, Telus | | **Queretaro (Mexico)** | [KIO Networks QR01](https://www.kionetworks.com/es-mx/) | 4 | n/a | 10G | Transtelco| | **Quincy** | [Sabey Datacenter - Building A](https://sabeydatacenters.com/data-center-locations/central-washington-data-centers/quincy-data-center) | 1 | West US 2 | 10G, 100G | |
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **[LGUplus](http://www.uplus.co.kr/)** |Supported |Supported |Seoul | | **[Megaport](https://www.megaport.com/services/microsoft-expressroute/)** |Supported |Supported |Amsterdam, Atlanta, Auckland, Chennai, Chicago, Dallas, Denver, Dubai2, Dublin, Frankfurt, Geneva, Hong Kong, Hong Kong2, Las Vegas, London, London2, Los Angeles, Madrid, Melbourne, Miami, Minneapolis, Montreal, Munich, New York, Osaka, Oslo, Paris, Perth, Phoenix, Quebec City, San Antonio, Seattle, Silicon Valley, Singapore, Singapore2, Stavanger, Stockholm, Sydney, Sydney2, Tokyo, Tokyo2, Toronto, Vancouver, Washington DC, Washington DC2, Zurich | | **[MTN](https://www.mtnbusiness.co.za/en/Cloud-Solutions/Pages/microsoft-express-route.aspx)** |Supported |Supported |London |
+| **MTN Global Connect** |Supported |Supported |Cape Town, Johannesburg|
| **[National Telecom](https://www.nc.ntplc.co.th/cat/category/264/855/CAT+Direct+Cloud+Connect+for+Microsoft+ExpressRoute?lang=en_EN)** |Supported |Supported |Bangkok | | **[Neutrona Networks](https://www.neutrona.com/index.php/azure-expressroute/)** |Supported |Supported |Dallas, Los Angeles, Miami, Sao Paulo, Washington DC | | **[Next Generation Data](https://vantage-dc-cardiff.co.uk/)** |Supported |Supported |Newport(Wales) |
The following table shows locations by service provider. If you want to view ava
| **[Vocus Group NZ](https://www.vocus.co.nz/business/cloud-data-centres)** | Supported | Supported | Auckland, Sydney | | **[Vodafone](https://www.vodafone.com/business/global-enterprise/global-connectivity/vodafone-ip-vpn-cloud-connect)** |Supported |Supported |Amsterdam2, London, Singapore | | **[Vodafone Idea](https://www.vodafone.in/business/enterprise-solutions/connectivity/vpn-extended-connect)** | Supported | Supported | Mumbai2 |
-| **[Zayo](https://www.zayo.com/solutions/industries/cloud-connectivity/microsoft-expressroute)** |Supported |Supported |Amsterdam, Chicago, Dallas, Denver, London, Los Angeles, Montreal, New York, Paris, Seattle, Silicon Valley, Toronto, Washington DC, Washington DC2 |
+| **[Zayo](https://www.zayo.com/solutions/industries/cloud-connectivity/microsoft-expressroute)** |Supported |Supported |Amsterdam, Chicago, Dallas, Denver, London, Los Angeles, Montreal, New York, Paris, Phoenix, Seattle, Silicon Valley, Toronto, Washington DC, Washington DC2 |
**+** denotes coming soon
firewall Premium Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/premium-migrate.md
param (
[string] $PolicyId,
- # #new filewallpolicy name, if not specified will be the previous name with the '_premium' suffix
+ #new firewall policy name; if not specified, it will be the previous name with the '_premium' suffix
[Parameter(Mandatory=$false)] [string] $NewPolicyName = ""
frontdoor Front Door Http Headers Protocol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-http-headers-protocol.md
Front Door includes headers for an incoming request unless they're removed becau
| X-Forwarded-For | *X-Forwarded-For: 127.0.0.1* </br> The X-Forwarded-For (XFF) HTTP header field often identifies the originating IP address of a client connecting to a web server through an HTTP proxy or load balancer. If there's an existing XFF header, then Front Door appends the client socket IP to it or adds the XFF header with the client socket IP. | | X-Forwarded-Host | *X-Forwarded-Host: contoso.azurefd.net* </br> The X-Forwarded-Host HTTP header field is a common method used to identify the original host requested by the client in the Host HTTP request header. This is because the host name from Front Door may differ for the backend server handling the request. Any previous value will be overridden by Front Door. | | X-Forwarded-Proto | *X-Forwarded-Proto: http* </br> The X-Forwarded-Proto HTTP header field is often used to identify the originating protocol of an HTTP request. Front Door based on configuration might communicate with the backend by using HTTPS. This is true even if the request to the reverse proxy is HTTP. Any previous value will be overridden by Front Door. |
-| X-FD-HealthProbe | X-FD-HealthProbe HTTP header field is used to identify the health probe from Front Door. If this header set to 1, the request is health probe. You can use when want to strict access from particular Front Door with X-Forwarded-Host header field. |
+| X-FD-HealthProbe | The X-FD-HealthProbe HTTP header field is used to identify the health probe from Front Door. If this header is set to 1, the request comes from the health probe. You can use it, together with a particular value of the X-Forwarded-Host header field, to restrict access to requests from a specific Front Door. |
| X-Azure-FDID | *X-Azure-FDID header: 437c82cd-360a-4a54-94c3-5ff707647783* </br> This field contains the Front Door ID that can be used to identify which Front Door the incoming request is from. This field is populated by the Front Door service. |

## Front Door to client
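To make the table above concrete, here is an illustrative sketch of a request as a backend might receive it from Front Door. The hostnames, path, and client IP are hypothetical; the X-Azure-FDID value reuses the example from the table:

```
GET /api/products HTTP/1.1
Host: backend.contoso.com
X-Forwarded-For: 203.0.113.17
X-Forwarded-Host: contoso.azurefd.net
X-Forwarded-Proto: https
X-Azure-FDID: 437c82cd-360a-4a54-94c3-5ff707647783
```

A backend can compare the received X-Azure-FDID value against its known Front Door ID and reject requests without the expected value, so that traffic can reach the backend only through Front Door.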
frontdoor Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/overview.md
With Azure Front Door Standard/Premium, you can transform your global consumer a
Azure Front Door Standard/Premium works at Layer 7 (HTTP/HTTPS layer) using anycast protocol with split TCP and Microsoft's global network to improve global connectivity. Based on your customized routing method using a rule set, you can ensure that Azure Front Door will route your client requests to the fastest and most available origin. An application origin is any Internet-facing service hosted inside or outside of Azure. Azure Front Door Standard/Premium provides a range of traffic-routing methods and origin health monitoring options to suit different application needs and automatic failover scenarios. Similar to Traffic Manager, Front Door is resilient to failures, including failures to an entire Azure region.
-Azure Front Door also protect your app at the edges with integrated Web Application Firewall protection, Bot Protection, and built-in layer 3/layer 4 distributed denial of service (DDoS) protection. It also secures your private back-ends with private link service. Azure Front Door gives you Microsoft’s best-in-practice security at global scale. 
+Azure Front Door also protects your app at the edges with integrated Web Application Firewall protection, Bot Protection, and built-in layer 3/layer 4 distributed denial of service (DDoS) protection. It also secures your private back-ends with private link service. Azure Front Door gives you Microsoft’s best-in-practice security at global scale. 
>[!NOTE]
> Azure provides a suite of fully managed load-balancing solutions for your scenarios.
iot-dps How To Send Additional Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/how-to-send-additional-data.md
Title: How to transfer a payload between device and Azure Device Provisioning Service description: This document describes how to transfer a payload between device and Device Provisioning Service (DPS) Last updated 02/11/2020
iot-dps Iot Dps Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/iot-dps-customer-managed-keys.md
Title: Azure Device Provisioning Service data encryption at rest via customer-managed keys| Microsoft Docs description: Encryption of data at rest with customer-managed keys for Device Provisioning Service Last updated 02/24/2020 # Encryption of data at rest with customer-managed keys for Device Provisioning Service
iot-hub Iot Hub C C Module Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-c-c-module-twin-getstarted.md
Title: Get started with Azure IoT Hub module identity & module twin (C) description: Learn how to create module identity and update module twin using IoT SDKs for C. ms.devlang: c Last updated 06/25/2018
iot-hub Iot Hub Csharp Csharp Module Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-csharp-csharp-module-twin-getstarted.md
Title: Get started w/ Azure IoT Hub module identity & module twin (.NET) description: Learn how to create module identity and update module twin using IoT SDKs for .NET. ms.devlang: csharp Last updated 08/07/2019
iot-hub Iot Hub Device Management Iot Extension Azure Cli 2 0 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-device-management-iot-extension-azure-cli-2-0.md
Title: Azure IoT device management with IoT extension for Azure CLI | Microsoft Docs description: Use the IoT extension for Azure CLI tool for Azure IoT Hub device management, featuring the Direct methods and the Twin's desired properties management options. keywords: azure iot device management, azure iot hub device management, device management iot, iot hub device management Last updated 01/16/2018 # Use the IoT extension for Azure CLI for Azure IoT Hub device management
iot-hub Iot Hub Python Python Module Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-python-python-module-twin-getstarted.md
Title: Azure IoT Hub module identity and module twin (Python) description: Learn how to create module identity and update module twin using IoT SDKs for Python. ms.devlang: python Last updated 04/03/2020
mariadb Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/overview.md
Title: Overview - Azure Database for MariaDB
-description: Learn about the Azure Database for MariaDB service, a relational database service in the Microsoft cloud based on the MySQL community edition.
+description: Learn about the Azure Database for MariaDB service, a relational database service in the Microsoft cloud based on the MariaDB community edition.
media-services Analyze Video Audio Files Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/analyze-video-audio-files-concept.md
Previously updated : 03/22/2021 Last updated : 07/26/2021
[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)]
-Azure Media Services v3 lets you extract insights from your video and audio files with Video Indexer. This article describes the Media Services v3 analyzer presets used to extract those insights. If you want more detailed insights, use Video Indexer directly. To understand when to use Video Indexer vs. Media Services analyzer presets, check out the [comparison document](../../azure-video-analyzer/video-analyzer-for-media-docs/compare-video-indexer-with-media-services-presets.md).
+Azure Media Services v3 lets you extract insights from your video and audio files with Azure Video Analyzer for Media (formerly Video Indexer). This article describes the Media Services v3 analyzer presets used to extract those insights. If you want more detailed insights, use Video Analyzer for Media directly. To understand when to use Video Analyzer for Media vs. Media Services analyzer presets, check out the [comparison document](../../azure-video-analyzer/video-analyzer-for-media-docs/compare-video-indexer-with-media-services-presets.md).
There are two modes for the Audio Analyzer preset, basic and standard. See the description of the differences in the table below.
To analyze your content using Media Services v3 presets, you create a **Transfor
## Compliance, Privacy and Security
-As an important reminder, you must comply with all applicable laws in your use of Video Indexer, and you may not use Video Indexer or any other Azure service in a manner that violates the rights of others or may be harmful to others. Before uploading any videos, including any biometric data, to the Video Indexer service for processing and storage, you must have all the proper rights, including all appropriate consents, from the individual(s) in the video. To learn about compliance, privacy and security in Video Indexer, see the Azure [Cognitive Services Terms](https://azure.microsoft.com/support/legal/cognitive-services-compliance-and-privacy/). For Microsoft's privacy obligations and handling of your data, please review Microsoft's [Privacy Statement](https://privacy.microsoft.com/PrivacyStatement), the [Online Services Terms](https://www.microsoft.com/licensing/product-licensing/products) ("OST") and [Data Processing Addendum](https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=67) ("DPA"). Additional privacy information, including on data retention, deletion/destruction, is available in the OST and [here](../../azure-video-analyzer/video-analyzer-for-media-docs/faq.md). By using Video Indexer, you agree to be bound by the Cognitive Services Terms, the OST, DPA and the Privacy Statement.
+As an important reminder, you must comply with all applicable laws in your use of Video Analyzer for Media, and you may not use Video Analyzer for Media or any other Azure service in a manner that violates the rights of others or may be harmful to others. Before uploading any videos, including any biometric data, to the Video Analyzer for Media service for processing and storage, you must have all the proper rights, including all appropriate consents, from the individual(s) in the video. To learn about compliance, privacy and security in Video Analyzer for Media, see the Azure [Cognitive Services Terms](https://azure.microsoft.com/support/legal/cognitive-services-compliance-and-privacy/). For Microsoft's privacy obligations and handling of your data, please review Microsoft's [Privacy Statement](https://privacy.microsoft.com/PrivacyStatement), the [Online Services Terms](https://www.microsoft.com/licensing/product-licensing/products) ("OST") and [Data Processing Addendum](https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=67) ("DPA"). Additional privacy information, including on data retention, deletion/destruction, is available in the OST and [here](../../azure-video-analyzer/video-analyzer-for-media-docs/faq.md). By using Video Analyzer for Media, you agree to be bound by the Cognitive Services Terms, the OST, DPA and the Privacy Statement.
## Built-in presets
Sentiments are aggregated by their sentimentType field (Positive/Neutral/Negativ
#### visualContentModeration
-The visualContentModeration block contains time ranges which Video Indexer found to potentially have adult content. If visualContentModeration is empty, there's no adult content that was identified.
+The visualContentModeration block contains time ranges that Video Analyzer for Media found to potentially have adult content. If visualContentModeration is empty, no adult content was identified.
Videos that are found to contain adult or racy content might be available for private view only. Users can submit a request for a human review of the content, in which case the `IsAdult` attribute will contain the result of the human review.
Videos that are found to contain adult or racy content might be available for pr
```

## Next steps
-[Tutorial: Analyze videos with Azure Media Services](analyze-videos-tutorial.md)
+[Tutorial: Analyze videos with Azure Media Services](analyze-videos-tutorial.md)
media-services Analyze Videos Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/analyze-videos-tutorial.md
Previously updated : 07/23/2021 Last updated : 07/26/2021
This tutorial shows you how to:
## Compliance, Privacy, and Security
-As an important reminder, you must comply with all applicable laws in your use of Video Indexer. You must not use Video Indexer or any other Azure service in a manner that violates the rights of others. Before uploading any videos, including any biometric data, to the Video Indexer service for processing and storage, you must have all the proper rights, including all appropriate consents, from the individuals in the video. To learn about compliance, privacy and security in Video Indexer, see the Azure [Cognitive Services Terms](https://azure.microsoft.com/support/legal/cognitive-services-compliance-and-privacy/). For Microsoft's privacy obligations and handling of your data, review Microsoft's [Privacy Statement](https://privacy.microsoft.com/PrivacyStatement), the [Online Services Terms](https://www.microsoft.com/licensing/product-licensing/products) (OST) and [Data Processing Addendum](https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=67) ("DPA"). More privacy information, including on data retention, deletion/destruction, is available in the OST and [here](../../azure-video-analyzer/video-analyzer-for-media-docs/faq.md). By using Video Indexer, you agree to be bound by the Cognitive Services Terms, the OST, DPA, and the Privacy Statement.
+As an important reminder, you must comply with all applicable laws in your use of Azure Video Analyzer for Media (formerly Video Indexer). You must not use Video Analyzer for Media or any other Azure service in a manner that violates the rights of others. Before uploading any videos, including any biometric data, to the Video Analyzer for Media service for processing and storage, you must have all the proper rights, including all appropriate consents, from the individuals in the video. To learn about compliance, privacy and security in Video Analyzer for Media, see the Azure [Cognitive Services Terms](https://azure.microsoft.com/support/legal/cognitive-services-compliance-and-privacy/). For Microsoft's privacy obligations and handling of your data, review Microsoft's [Privacy Statement](https://privacy.microsoft.com/PrivacyStatement), the [Online Services Terms](https://www.microsoft.com/licensing/product-licensing/products) (OST) and [Data Processing Addendum](https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=67) ("DPA"). More privacy information, including on data retention, deletion/destruction, is available in the OST and [here](../../azure-video-analyzer/video-analyzer-for-media-docs/faq.md). By using Video Analyzer for Media, you agree to be bound by the Cognitive Services Terms, the OST, DPA, and the Privacy Statement.
## Prerequisites
The output file of analyzing videos is called insights.json. This file contains
## Next steps

> [!div class="nextstepaction"]
-> [Tutorial: upload, encode, and stream files](stream-files-tutorial-with-api.md)
+> [Tutorial: upload, encode, and stream files](stream-files-tutorial-with-api.md)
media-services Concept Media Reserved Units https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/concept-media-reserved-units.md
The following table helps you make a decision when choosing between different en
## Considerations
-* For Audio Analysis and Video Analysis jobs that are triggered by Media Services v3 or Video Indexer, provisioning the account with ten S3 units is highly recommended. If you need more than 10 S3 MRUs, open a support ticket using the [Azure portal](https://portal.azure.com/).
+* For Audio Analysis and Video Analysis jobs that are triggered by Media Services v3 or Azure Video Analyzer for Media, provisioning the account with ten S3 units is highly recommended. If you need more than 10 S3 MRUs, open a support ticket using the [Azure portal](https://portal.azure.com/).
* For encoding tasks that don't have MRUs, there is no upper bound to the time your tasks can spend in queued state, and at most only one task will be running at a time.

## Billing
media-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/release-notes.md
Added support for the following new recommended partner encoders for RTMP live s
- Standard encoding now maintains a regular GOP cadence for variable frame rate (VFR) content during VOD encoding when using the time-based GOP setting. This means that customers submitting mixed frame rate content that varies between 15-30 fps, for example, should now see regular GOP distances calculated on output to adaptive bitrate streaming MP4 files. This will improve the ability to switch seamlessly between tracks when delivering over HLS or DASH.
- Improved AV sync for variable frame rate (VFR) source content
-### Video Indexer, Video analytics
+### Azure Video Analyzer for Media, Video analytics
- Keyframes extracted using the VideoAnalyzer preset are now in the original resolution of the video instead of being resized. High-resolution keyframe extraction gives you original quality images and allows you to make use of the image-based artificial intelligence models provided by the Microsoft Computer Vision and Custom Vision services to gain even more insights from your video.
Media Services v3 is announcing the preview of 24 hrs x 365 days of live linear
#### Deprecation of media processors
-We are announcing deprecation of *Azure Media Indexer* and *Azure Media Indexer 2 Preview*. For the retirement dates, see the [legacy components](../previous/legacy-components.md) article. Azure Media Services Video Indexer replaces these legacy media processors.
+We are announcing deprecation of *Azure Media Indexer* and *Azure Media Indexer 2 Preview*. For the retirement dates, see the [legacy components](../previous/legacy-components.md) article. Azure Video Analyzer for Media replaces these legacy media processors.
-For more information, see [Migrate from Azure Media Indexer and Azure Media Indexer 2 to Azure Media Services Video Indexer](../previous/migrate-indexer-v1-v2.md).
+For more information, see [Migrate from Azure Media Indexer and Azure Media Indexer 2 to Azure Video Analyzer for Media](../previous/migrate-indexer-v1-v2.md).
## August 2019
Check out the [Azure Media Services community](media-services-community.md) arti
## Next steps

- [Overview](media-services-overview.md)
-- [Media Services v2 release notes](../previous/media-services-release-notes.md)
+- [Media Services v2 release notes](../previous/media-services-release-notes.md)
media-services Legacy Components https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/previous/legacy-components.md
na ms.devlang: na Previously updated : 03/10/2021 Last updated : 07/26/2021 # Azure Media Services legacy components
The *Windows Azure Media Encoder* (WAME) and *Azure Media Encoder* (AME) media p
The following Media Analytics media processors are either deprecated or soon to be deprecated:
-
| **Media processor name** | **Retirement date** | **Additional notes** |
| --- | --- | --- |
-| Azure Media Indexer 2 | January 1st, 2020 | This media processor will be replaced by the [Media Services v3 AudioAnalyzerPreset Basic mode](../latest/analyze-video-audio-files-concept.md). For more information, see [Migrate from Azure Media Indexer 2 to Azure Media Services Video Indexer](migrate-indexer-v1-v2.md). |
-| Azure Media Indexer | March 1, 2023 | This media processor will be replaced by the [Media Services v3 AudioAnalyzerPreset Basic mode](../latest/analyze-video-audio-files-concept.md). For more information, see [Migrate from Azure Media Indexer 2 to Azure Media Services Video Indexer](migrate-indexer-v1-v2.md). |
+| Azure Media Indexer 2 | January 1st, 2020 | This media processor will be replaced by the [Media Services v3 AudioAnalyzerPreset Basic mode](../latest/analyze-video-audio-files-concept.md). For more information, see [Migrate from Azure Media Indexer 2 to Azure Video Analyzer for Media (formerly Video Indexer)](migrate-indexer-v1-v2.md). |
+| Azure Media Indexer | March 1, 2023 | This media processor will be replaced by the [Media Services v3 AudioAnalyzerPreset Basic mode](../latest/analyze-video-audio-files-concept.md). For more information, see [Migrate from Azure Media Indexer 2 to Azure Video Analyzer for Media](migrate-indexer-v1-v2.md). |
| Motion Detection | June 1st, 2020 | No replacement plans at this time. |
| Video Summarization | June 1st, 2020 | No replacement plans at this time. |
-| Video Optical Character Recognition | June 1st, 2020 |This media processor was replaced by Azure Media Services Video Indexer. Also, consider using [Azure Media Services v3 API](../latest/analyze-video-audio-files-concept.md). <br/>See Compare Azure Media Services v3 presets and Video Indexer. |
-| Face Detector | June 1st, 2020 | This media processor was replaced by Azure Media Services Video Indexer. Also, consider using [Azure Media Services v3 API](../latest/analyze-video-audio-files-concept.md). <br/>See Compare Azure Media Services v3 presets and Video Indexer. |
-| Content Moderator | June 1st, 2020 |This media processor was replaced by Azure Media Services Video Indexer. Also, consider using [Azure Media Services v3 API](../latest/analyze-video-audio-files-concept.md). <br/>See Compare Azure Media Services v3 presets and Video Indexer. |
+| Video Optical Character Recognition | June 1st, 2020 |This media processor was replaced by Azure Video Analyzer for Media. Also, consider using [Azure Media Services v3 API](../latest/analyze-video-audio-files-concept.md). <br/>See [Compare Azure Media Services v3 presets and Video Analyzer for Media](../../azure-video-analyzer/video-analyzer-for-media-docs/compare-video-indexer-with-media-services-presets.md). |
+| Face Detector | June 1st, 2020 | This media processor was replaced by Azure Video Analyzer for Media. Also, consider using [Azure Media Services v3 API](../latest/analyze-video-audio-files-concept.md). <br/>See [Compare Azure Media Services v3 presets and Video Analyzer for Media](../../azure-video-analyzer/video-analyzer-for-media-docs/compare-video-indexer-with-media-services-presets.md). |
+| Content Moderator | June 1st, 2020 |This media processor was replaced by Azure Video Analyzer for Media. Also, consider using [Azure Media Services v3 API](../latest/analyze-video-audio-files-concept.md). <br/>See [Compare Azure Media Services v3 presets and Video Analyzer for Media](../../azure-video-analyzer/video-analyzer-for-media-docs/compare-video-indexer-with-media-services-presets.md). |
## Next steps
media-services Migrate Indexer V1 V2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/previous/migrate-indexer-v1-v2.md
na ms.devlang: na Previously updated : 3/10/2021 Last updated : 07/26/2021
-# Migrate from Media Indexer and Media Indexer 2 to Video Indexer
+# Migrate from Media Indexer and Media Indexer 2 to Video Analyzer for Media
[!INCLUDE [media services api v2 logo](./includes/v2-hr.md)]

> [!IMPORTANT]
> It is recommended that customers migrate from Indexer v1 and Indexer v2 to the [Media Services v3 AudioAnalyzerPreset Basic mode](../latest/analyze-video-audio-files-concept.md). The [Azure Media Indexer](media-services-index-content.md) and [Azure Media Indexer 2 Preview](./legacy-components.md) media processors are being retired. For the retirement dates, see the [legacy components](legacy-components.md) topic.
-Azure Media Services Video Indexer is built on Azure Media Analytics, Azure Cognitive Search, Cognitive Services (such as the Face API, Microsoft Translator, the Computer Vision API, and Custom Speech Service). It enables you to extract the insights from your videos using Video Indexer video and audio models. To see what scenarios Video Indexer can be used in, what features it offers, and how to get started, see [Video Indexer video and audio models](../../azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-overview.md).
+Azure Video Analyzer for Media is built on Azure Media Analytics, Azure Cognitive Search, and Cognitive Services (such as the Face API, Microsoft Translator, the Computer Vision API, and Custom Speech Service). It enables you to extract insights from your videos using Video Analyzer for Media video and audio models. To see what scenarios Video Analyzer for Media can be used in, what features it offers, and how to get started, see [Video Analyzer for Media video and audio models](../../azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-overview.md).
-You can extract insights from your video and audio files by using the [Azure Media Services v3 analyzer presets](../latest/analyze-video-audio-files-concept.md) or directly by using the [Video Indexer APIs](https://api-portal.videoindexer.ai/). Currently, there is an overlap between features offered by the Video Indexer APIs and the Media Services v3 APIs.
+You can extract insights from your video and audio files by using the [Azure Media Services v3 analyzer presets](../latest/analyze-video-audio-files-concept.md) or directly by using the [Video Analyzer for Media APIs](https://api-portal.videoindexer.ai/). Currently, there is an overlap between features offered by the Video Analyzer for Media APIs and the Media Services v3 APIs.
> [!NOTE]
-> To understand the differences between the Video Indexer vs. Media Services analyzer presets, check out the [comparison document](../../azure-video-analyzer/video-analyzer-for-media-docs/compare-video-indexer-with-media-services-presets.md).
+> To understand the differences between Video Analyzer for Media and the Media Services analyzer presets, check out the [comparison document](../../azure-video-analyzer/video-analyzer-for-media-docs/compare-video-indexer-with-media-services-presets.md).
-This article discusses the steps for migrating from the Azure Media Indexer and Azure Media Indexer 2 to Azure Media Services Video Indexer.
+This article discusses the steps for migrating from the Azure Media Indexer and Azure Media Indexer 2 to Video Analyzer for Media.
## Migration options

|If you require |then |
|---|---|
-|a solution that provides a speech-to-text transcription for any media file format in a closed caption file formats: VTT, SRT, or TTML<br/>as well as additional audio insights such as: keywords, topic inferencing, acoustic events, speaker diarization, entities extraction and translation| update your applications to use the Azure Video Indexer capabilities through the Video Indexer v2 REST API or the Azure Media Services v3 Audio Analyzer preset.|
+|a solution that provides speech-to-text transcription for any media file format in closed caption file formats: VTT, SRT, or TTML<br/>as well as additional audio insights such as keywords, topic inferencing, acoustic events, speaker diarization, entity extraction, and translation| update your applications to use the Video Analyzer for Media capabilities through the Video Analyzer for Media v2 REST API or the Azure Media Services v3 Audio Analyzer preset.|
|speech-to-text capabilities| use the Cognitive Services Speech API directly.|
-## Getting started with Video Indexer
+## Getting started with Video Analyzer for Media
-The following section points you to relevant links: [How can I get started with Video Indexer?](../../azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-overview.md#how-can-i-get-started-with-video-analyzer-for-media)
+The following section points you to relevant links: [How can I get started with Video Analyzer for Media?](../../azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-overview.md#how-can-i-get-started-with-video-analyzer-for-media)
## Getting started with Media Services v3 APIs
For more information about the text-to-speech service and how to get started, se
## Known differences from deprecated services
-You will find that Video Indexer, Azure Media Services v3 AudioAnalyzerPreset, and Cognitive Services Speech Services services are more reliable and produces better quality output than the retired Azure Media Indexer 1 and Azure Media Indexer 2 processors.
+You will find that Video Analyzer for Media, the Azure Media Services v3 AudioAnalyzerPreset, and Cognitive Services Speech Services are more reliable and produce better-quality output than the retired Azure Media Indexer 1 and Azure Media Indexer 2 processors.
Some known differences include:
-* Cognitive Services Speech Services does not support keyword extraction. However, Video Indexer and Media Services v3 AudioAnalyzerPreset both offer a more robust set of keywords in JSON file format.
+* Cognitive Services Speech Services does not support keyword extraction. However, Video Analyzer for Media and Media Services v3 AudioAnalyzerPreset both offer a more robust set of keywords in JSON file format.
## Support
You can open a support ticket by navigating to [New support request](https://por
## Next steps

* [Legacy components](legacy-components.md)
-* [Pricing page](https://azure.microsoft.com/pricing/details/media-services/#encoding)
+* [Pricing page](https://azure.microsoft.com/pricing/details/media-services/#encoding)
migrate How To Use Azure Migrate With Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/how-to-use-azure-migrate-with-private-endpoints.md
We recommend the private endpoint connectivity method when there's an organizati
Review the following required permissions and the supported scenarios and tools.
+### Supported geographies
+
+The functionality is now in preview in all [public cloud regions](/azure/migrate/migrate-support-matrix#supported-geographies-public-cloud).
+
### Required permissions

You must have Contributor + User Access Administrator or Owner permissions on the subscription.
migrate Migrate Support Matrix Hyper V Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/migrate-support-matrix-hyper-v-migration.md
You can select up to 10 VMs at once for replication. If you want to migrate more
| :-- | :- |
| **Operating system** | All [Windows](https://support.microsoft.com/help/2721672/microsoft-server-software-support-for-microsoft-azure-virtual-machines) and [Linux](../virtual-machines/linux/endorsed-distros.md) operating systems that are supported by Azure. |
| **Windows Server 2003** | For VMs running Windows Server 2003, you need to [install Hyper-V Integration Services](prepare-windows-server-2003-migration.md) before migration. |
-**Linux VMs in Azure** | Some VMs might require changes so that they can run in Azure.<br/><br/> For Linux, Azure Migrate makes the changes automatically for these operating systems:<br/> - Red Hat Enterprise Linux 7.8, 7.7, 7.6, 7.5, 7.4, 7.0, 6.x<br/> - Cent OS 7.7, 7.6, 7.5, 7.4, 6.x</br> - SUSE Linux Enterprise Server 12 SP1+<br/> - SUSE Linux Enterprise Server 15 SP1 <br/>- Ubuntu 19.04, 19.10, 14.04LTS, 16.04LTS, 18.04LTS<br/> - Debian 7, 8 <br/> Oracle Linux 7.7, 7.7-CI<br/> For other operating systems you make the [required changes](prepare-for-migration.md#verify-required-changes-before-migrating) manually.
+**Linux VMs in Azure** | Some VMs might require changes so that they can run in Azure.<br/><br/> For Linux, Azure Migrate makes the changes automatically for these operating systems:<br/> - Red Hat Enterprise Linux 7.8, 7.7, 7.6, 7.5, 7.4, 7.0, 6.x<br/> - Cent OS 7.7, 7.6, 7.5, 7.4, 6.x</br> - SUSE Linux Enterprise Server 12 SP1+<br/> - SUSE Linux Enterprise Server 15 SP1 <br/>- Ubuntu 20.04, 19.04, 19.10, 14.04LTS, 16.04LTS, 18.04LTS<br/> - Debian 7, 8 <br/> Oracle Linux 7.7, 7.7-CI<br/> For other operating systems you make the [required changes](prepare-for-migration.md#verify-required-changes-before-migrating) manually.
| **Required changes for Azure** | Some VMs might require changes so that they can run in Azure. Make adjustments manually before migration. The relevant articles contain instructions about how to do this. |
| **Linux boot** | If /boot is on a dedicated partition, it should reside on the OS disk, and not be spread across multiple disks.<br/> If /boot is part of the root (/) partition, then the '/' partition should be on the OS disk, and not span other disks. |
| **UEFI boot** | Supported. UEFI-based VMs will be migrated to Azure generation 2 VMs. |
migrate Migrate Support Matrix Vmware Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/migrate-support-matrix-vmware-migration.md
The table summarizes agentless migration requirements for VMware VMs.
| **Supported operating systems** | You can migrate [Windows](https://support.microsoft.com/help/2721672/microsoft-server-software-support-for-microsoft-azure-virtual-machines) and [Linux](../virtual-machines/linux/endorsed-distros.md) operating systems that are supported by Azure.
**Windows VMs in Azure** | You might need to [make some changes](prepare-for-migration.md#verify-required-changes-before-migrating) on VMs before migration.
-**Linux VMs in Azure** | Some VMs might require changes so that they can run in Azure.<br/><br/> For Linux, Azure Migrate makes the changes automatically for these operating systems:<br/> - Red Hat Enterprise Linux 8, 7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0, 6.x <br/> - Cent OS 8, 7.7, 7.6, 7.5, 7.4, 6.x</br> - SUSE Linux Enterprise Server 11, 12, 15 SP0, 15 SP1 <br/>- Ubuntu 19.04, 19.10, 14.04LTS, 16.04LTS, 18.04LTS<br/> - Debian 7, 8, 9 <br/> Oracle Linux 6, 7.7, 7.7-CI<br/> For other operating systems you make the [required changes](prepare-for-migration.md#verify-required-changes-before-migrating) manually.
+**Linux VMs in Azure** | Some VMs might require changes so that they can run in Azure.<br/><br/> For Linux, Azure Migrate makes the changes automatically for these operating systems:<br/> - Red Hat Enterprise Linux 8, 7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0, 6.x <br/> - Cent OS 8, 7.7, 7.6, 7.5, 7.4, 6.x</br> - SUSE Linux Enterprise Server 11, 12, 15 SP0, 15 SP1 <br/>- Ubuntu 20.04, 19.04, 19.10, 14.04LTS, 16.04LTS, 18.04LTS<br/> - Debian 7, 8, 9 <br/> Oracle Linux 6, 7.7, 7.7-CI<br/> For other operating systems you make the [required changes](prepare-for-migration.md#verify-required-changes-before-migrating) manually.
**Boot requirements** | If /boot is on a dedicated partition, it should reside on the OS disk, and not be spread across multiple disks.<br/> If /boot is part of the root (/) partition, then the '/' partition should be on the OS disk, and not span other disks.
**UEFI boot** | Supported. UEFI-based VMs will be migrated to Azure generation 2 VMs.
**Disk size** | Up to 2 TB OS disk for gen 1 and gen 2 VMs; 32 TB for data disks.
The table summarizes agentless migration requirements for VMware VMs.
**IPv6** | Not supported.
**Target disk** | VMs can only be migrated to managed disks (standard HDD, standard SSD, premium SSD) in Azure.
**Simultaneous replication** | Up to 300 simultaneously replicating VMs per vCenter Server with 1 appliance. Up to 500 simultaneously replicating VMs per vCenter Server when an additional [scale-out appliance](./how-to-scale-out-for-migration.md) is deployed.
-**Automatic installation of Azure VM agent (Windows and Linux Agent)** | Supported for Windows Server 2008 R2 onwards. <br/> Supported for RHEL6, RHEL7, CentOS7, Ubuntu 14.04, Ubuntu 16.04, Ubuntu18.04.
+**Automatic installation of Azure VM agent (Windows and Linux Agent)** | Supported for Windows Server 2008 R2 onwards. <br/> Supported for RHEL6, RHEL7, CentOS7, Ubuntu 14.04, Ubuntu 16.04, Ubuntu18.04, Ubuntu 19.04, Ubuntu 19.10, Ubuntu 20.04.
> [!Note]
> In addition to the Internet connectivity, for Linux VMs, ensure that the following packages are installed for successful installation of Microsoft Azure Linux agent (waagent):
migrate Prepare For Agentless Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/prepare-for-agentless-migration.md
Azure Migrate automatically handles these configuration changes for the operatin
- Red Hat Enterprise Linux 8, 7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0, 6.x
- CentOS 8, 7.7, 7.6, 7.5, 7.4, 6.x
- SUSE Linux Enterprise Server 15 SP0, 15 SP1, 12, 11
-- Ubuntu 19.04, 19.10, 18.04LTS, 16.04LTS, 14.04LTS
+- Ubuntu 20.04, 19.04, 19.10, 18.04LTS, 16.04LTS, 14.04LTS
- Ubuntu 18.04LTS, 16.04LTS
- Debian 9, 8, 7
- Oracle Linux 6, 7.7, 7.7-CI
The preparation script executes the following changes based on the OS type of th
Azure Migrate will attempt to install the Microsoft Azure Linux Agent (waagent), a secure, lightweight process that manages Linux & FreeBSD provisioning, and VM interaction with the Azure Fabric Controller. [Learn more](../virtual-machines/extensions/agent-linux.md) about the functionality enabled for Linux and FreeBSD IaaS deployments via the Linux agent.
- Review the list of [required packages](../virtual-machines/extensions/agent-linux.md#requirements) to install Linux VM agent. Azure Migrate installs the Linux VM agent automatically for RHEL6, RHEL7, CentOS7 (6 should be supported like RHEL), Ubuntu 14.04, Ubuntu 16.04, Ubuntu18.04 when using the agentless method of VMware migration. Follow these instructions to [install the Linux Agent manually](../virtual-machines/extensions/agent-linux.md#installation) for other OS versions.
+ Review the list of [required packages](../virtual-machines/extensions/agent-linux.md#requirements) to install Linux VM agent. Azure Migrate installs the Linux VM agent automatically for RHEL6, RHEL7, CentOS7 (6 should be supported like RHEL), Ubuntu 14.04, Ubuntu 16.04, Ubuntu18.04, Ubuntu 19.04, Ubuntu 19.10, and Ubuntu 20.04 when using the agentless method of VMware migration. Follow these instructions to [install the Linux Agent manually](../virtual-machines/extensions/agent-linux.md#installation) for other OS versions.
You can verify the service status of the Azure Linux Agent to make sure it's running; the service name might be **walinuxagent** or **waagent**, as in the sketch below. Once the hydration changes are done, the script will unmount all the mounted partitions, deactivate volume groups, and then flush the devices.
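A minimal check, assuming a systemd-based distribution (the unit name varies by distribution, so try both):

```bash
# Verify that the Azure Linux Agent service is running after migration.
# Depending on the distribution, the unit is named walinuxagent or waagent.
sudo systemctl status walinuxagent || sudo systemctl status waagent
```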
migrate Prepare For Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/prepare-for-migration.md
Azure Migrate completes these actions automatically for these versions
- Red Hat Enterprise Linux 8, 7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0, 6.x (Azure Linux VM agent is also installed automatically during migration)
- Cent OS 8, 7.7, 7.6, 7.5, 7.4, 6.x (Azure Linux VM agent is also installed automatically during migration)
- SUSE Linux Enterprise Server 15 SP0, 15 SP1, 12, 11
-- Ubuntu 19.04, 19.10, 18.04LTS, 16.04LTS, 14.04LTS (Azure Linux VM agent is also installed automatically during migration)
+- Ubuntu 20.04, 19.04, 19.10, 18.04LTS, 16.04LTS, 14.04LTS (Azure Linux VM agent is also installed automatically during migration)
- Debian 9, 8, 7
- Oracle Linux 6, 7.7, 7.7-CI
The following table summarizes the steps performed automatically for the operati
Learn more about steps for [running a Linux VM on Azure](../virtual-machines/linux/create-upload-generic.md), and get instructions for some of the popular Linux distributions.
-Review the list of [required packages](../virtual-machines/extensions/agent-linux.md#requirements) to install Linux VM agent. Azure Migrate installs the Linux VM agent automatically for RHEL6, RHEL7, CentOS7 (6 should be supported similar to RHEL), Ubuntu 14.04, Ubuntu 16.04, Ubuntu18.04 when using the agentless method of VMware migration.
+Review the list of [required packages](../virtual-machines/extensions/agent-linux.md#requirements) to install Linux VM agent. Azure Migrate installs the Linux VM agent automatically for RHEL6, RHEL7, CentOS7 (6 should be supported similar to RHEL), Ubuntu 14.04, Ubuntu 16.04, Ubuntu18.04, Ubuntu 19.04, Ubuntu 19.10, and Ubuntu 20.04 when using the agentless method of VMware migration.
## Check Azure VM requirements
migrate Troubleshoot Appliance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/troubleshoot-appliance.md
You are getting an error in the connectivity check on the appliance.
**Remediation**
-1. Ensure that you can connect to the required [URLs](/azure/migrate/migrate-appliance#url-access) from the appliance
+1. Ensure that you can connect to the required [URLs](/azure/migrate/migrate-appliance#url-access) from the appliance.
1. Check if there is a proxy or firewall blocking access to these URLs. If you are required to create an allowlist, make sure that you include all of the URLs.
1. If there is a proxy server configured on-premises, make sure that you provide the proxy details correctly by selecting **Setup proxy** in the same step. Make sure that you provide the authorization credentials if the proxy needs them.
1. Ensure that the server has not been previously used to set up the [replication appliance](/azure/migrate/migrate-replication-appliance) or that you have the mobility service agent installed on the server.
An error about time synchronization indicates that the server clock might be out
## Getting project key related error during appliance registration
-**Error**
+**Error**
+ You are having issues when you try to register the appliance using the Azure Migrate project key copied from the project.

**Remediation**
You are having issues when you try to register the appliance using the Azure Mig
**Error**
-After a successful login with an Azure user account, the appliance registration step fails with the message, "Failed to connect to the Azure Migrate project. Check the error detail and follow the remediation steps by clicking Retry"**.
+After a successful login with an Azure user account, the appliance registration step fails with the message, **"Failed to connect to the Azure Migrate project. Check the error detail and follow the remediation steps by clicking Retry"**.
-This issue happens when the Azure user account that was used to log in from the appliance configuration manager is different from the user account that was used to generate the Azure Migrate project key on the portal.
+This issue happens when the Azure user account that was used to log in from the appliance configuration manager is different from the user account that was used to generate the Azure Migrate project key on the portal.
**Remediation**
-1. To complete the registration of the appliance, use the same Azure user account that generated the Azure Migrate project key on the portal OR
-2. Assign the required roles and [permissions](/azure/migrate/tutorial-prepare-vmware#prepare-azure) to the other Azure user account being used for appliance registration
+1. To complete the registration of the appliance, use the same Azure user account that generated the Azure Migrate project key on the portal
+ OR
+1. Assign the required roles and [permissions](/azure/migrate/tutorial-prepare-vmware#prepare-azure) to the other Azure user account being used for appliance registration
## "Azure Active Directory (AAD) operation failed with status Forbidden" during appliance registration
mysql Quickstart Create Connect Server Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/quickstart-create-connect-server-vnet.md
+
+ Title: 'Connect to Azure Database for MySQL flexible server with private access in the Azure portal'
+description: This article walks you through using the Azure portal to create and connect to an Azure Database for MySQL flexible server in private access.
+ Last updated : 04/18/2021
+# Connect Azure Database for MySQL Flexible Server with private access connectivity method
+
+Azure Database for MySQL Flexible Server is a managed service that you can use to run, manage, and scale highly available MySQL servers in the cloud. This quickstart shows you how to create a flexible server in a virtual network by using the Azure portal.
+
+> [!IMPORTANT]
+> Azure Database for MySQL Flexible Server is currently in public preview.
+
+If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
+
+## Sign in to the Azure portal
+Go to the [Azure portal](https://portal.azure.com/). Enter your credentials to sign in to the portal. The default view is your service dashboard.
+
+## Create an Azure Database for MySQL flexible server
+
+You create a flexible server with a defined set of [compute and storage resources](./concepts-compute-storage.md). You create the server within an [Azure resource group](../../azure-resource-manager/management/overview.md).
+
+Complete these steps to create a flexible server:
+
+1. Search for and select **Azure Database for MySQL servers** in the portal:
+
+ > :::image type="content" source="./media/quickstart-create-connect-server-vnet/search-flexible-server-in-portal.png" alt-text="Screenshot that shows a search for Azure Database for MySQL servers." lightbox="./media/quickstart-create-connect-server-vnet/search-flexible-server-in-portal.png":::
+
+2. Select **Add**.
+
+3. On the **Select Azure Database for MySQL deployment option** page, select **Flexible server** as the deployment option:
+
+ > :::image type="content" source="./media/quickstart-create-connect-server-vnet/deployment-option.png" alt-text="Screenshot that shows the Flexible server option." lightbox="./media/quickstart-create-connect-server-vnet/deployment-option.png":::
+
+4. On the **Basics** tab, enter the **subscription**, **resource group**, **region**, **administrator username**, and **administrator password**. With the default values, this will provision a MySQL server of version 5.7 with a Burstable SKU using 1 vCore, 2 GiB memory, and 32 GiB storage. The backup retention is 7 days. You can change the configuration.
+
+ > :::image type="content" source="./media/quickstart-create-connect-server-vnet/mysql-flexible-server-create-portal.png" alt-text="Screenshot that shows the Basics tab of the Flexible server page." lightbox="/media/quickstart-create-connect-server-vnet/mysql-flexible-server-create-portal.png":::
+
+ > [!TIP]
> For faster data loads during migration, it is recommended to increase the IOPS to the maximum supported by the compute size and scale it back later to save cost.
+
+5. Go to the **Networking** tab and select **Private access**. You can't change the connectivity method after you create the server. Select **Create virtual network** to create the new virtual network **vnetenvironment1**.
+
+ > :::image type="content" source="./media/quickstart-create-connect-server-vnet/create-new-vnet-for-mysql-server.png" alt-text="Screenshot that shows the Networking tab with new VNET." lightbox="./media/quickstart-create-connect-server-vnet/create-new-vnet-for-mysql-server.png":::
+
+6. Select **OK** once you have provided the virtual network name and subnet information.
+ > :::image type="content" source="./media/quickstart-create-connect-server-vnet/show-server-vnet-information.png" alt-text="review VNET information":::
+
+7. Select **Review + create** to review your flexible server configuration.
+
+8. Select **Create** to provision the server. Provisioning can take a few minutes.
+
+9. Wait until the deployment is complete and successful.
+
+ > :::image type="content" source="./media/quickstart-create-connect-server-vnet/deployment-success.png" alt-text="Screenshot that shows the Networking settings with new VNET." lightbox="./media/quickstart-create-connect-server-vnet/deployment-success.png":::
+
+9. Select **Go to resource** to open the server's **Overview** page.
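If you prefer scripting the deployment, here is a minimal Azure CLI sketch of the same step. It reuses the placeholder names from this quickstart (**mydemoserver**, **mydemouser**, **vnetenvironment1**) and assumes the preview `az mysql flexible-server` command group; options you omit fall back to the defaults described in the Basics step, and you're prompted for anything required.

```bash
# Create a MySQL flexible server with private access in vnetenvironment1.
# Resource and subnet names below are quickstart placeholders; substitute your own.
az mysql flexible-server create \
  --resource-group myResourceGroup \
  --name mydemoserver \
  --location westus2 \
  --admin-user mydemouser \
  --vnet vnetenvironment1 \
  --subnet mysqlsubnet
```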
+
+## Create Azure Linux virtual machine
+
+Since the server is in a virtual network, you can only connect to the server from other Azure services in the same virtual network as the server. To connect and manage the server, let's create a Linux virtual machine. The virtual machine must be created in the **same region** and the **same subscription**. The Linux virtual machine can be used as an SSH tunnel to manage your database server.
+
+1. Go to the resource group in which the server was created. Select **Add**.
+2. Select **Ubuntu Server 18.04 LTS**.
+3. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and then choose to **Create new** resource group. Type *myResourceGroup* for the name.
+
+ > :::image type="content" source="../../virtual-machines/linux/media/quick-create-portal/project-details.png" alt-text="Screenshot of the Project details section showing where you select the Azure subscription and the resource group for the virtual machine" lightbox="../../virtual-machines/linux/media/quick-create-portal/project-details.png":::
+
+2. Under **Instance details**, type *myVM* for the **Virtual machine name**, choose the same **Region** as your database server.
+
+ > :::image type="content" source="../../virtual-machines/linux/media/quick-create-portal/instance-details.png" alt-text="Screenshot of the Instance details section where you provide a name for the virtual machine and select its region, image and size]" lightbox="../../virtual-machines/linux/media/quick-create-portal/instance-details.png":::
+
+3. Under **Administrator account**, select **SSH public key**.
+
+4. In **Username** type *azureuser*.
+
+5. For **SSH public key source**, leave the default of **Generate new key pair**, and then type *myKey* for the **Key pair name**.
+
+ > :::image type="content" source="../../virtual-machines/linux/media/quick-create-portal/administrator-account.png" alt-text="Screenshot of the Administrator account section where you select an authentication type and provide the administrator credentials" lightbox="../../virtual-machines/linux/media/quick-create-portal/administrator-account.png":::
+
+6. Under **Inbound port rules** > **Public inbound ports**, choose **Allow selected ports** and then select **SSH (22)** and **HTTP (80)** from the drop-down.
+
+ > :::image type="content" source="../../virtual-machines/linux/media/quick-create-portal/inbound-port-rules.png" alt-text="Screenshot of the inbound port rules section where you select what ports inbound connections are allowed on" lightbox="../../virtual-machines/linux/media/quick-create-portal/inbound-port-rules.png":::
+
+7. Select the **Networking** page to configure the virtual network. For the virtual network, choose **vnetenvironment1**, created for the database server.
+
+ > :::image type="content" source="./media/quickstart-create-connect-server-vnet/vm-vnet-configuration.png" alt-text="Screenshot of select existing virtual network of the database server" lightbox="./media/quickstart-create-connect-server-vnet/vm-vnet-configuration.png":::
+
+8. Select **Manage subnet configuration** to create a new subnet for the virtual machine.
+
+ > :::image type="content" source="./media/quickstart-create-connect-server-vnet/vm-manage-subnet-integration.png" alt-text="Screenshot of manage subnet" lightbox="./media/quickstart-create-connect-server-vnet/vm-manage-subnet-integration.png":::
+
+9. Add new subnet for the virtual machine.
+
+ > :::image type="content" source="./media/quickstart-create-connect-server-vnet/vm-add-new-subnet.png" alt-text="Screenshot of adding a new subnet for virtual machine" lightbox="./media/quickstart-create-connect-server-vnet/vm-add-new-subnet.png":::
+
+10. After the subnet has been created successfully, close the page.
+ > :::image type="content" source="./media/quickstart-create-connect-server-vnet/subnetcreate-success.png" alt-text="Screenshot of success with adding a new subnet for virtual machine" lightbox="./media/quickstart-create-connect-server-vnet/subnetcreate-success.png":::
+
+11. Select **Review + Create**.
+12. Select **Create**. When the **Generate new key pair** window opens, select **Download private key and create resource**. Your key file will be downloaded as **myKey.pem**.
+
+ >[!IMPORTANT]
> Make sure you know where the `.pem` file was downloaded; you will need the path to it in the next step.
+
+13. When the deployment is finished, select **Go to resource**.
+ > :::image type="content" source="./media/quickstart-create-connect-server-vnet/vm-create-success.png" alt-text="Screenshot of deployment success" lightbox="./media/quickstart-create-connect-server-vnet/vm-create-success.png":::
+
+11. On the page for your new VM, select the public IP address and copy it to your clipboard.
+ > :::image type="content" source="../../virtual-machines/linux/media/quick-create-portal/ip-address.png" alt-text="Screenshot showing how to copy the IP address for the virtual machine" lightbox="../../virtual-machines/linux/media/quick-create-portal/ip-address.png":::
+
+## Install MySQL client tools
+
+Create an SSH connection with the VM using Bash or PowerShell. At your prompt, open an SSH connection to your virtual machine. Replace the IP address with the one from your VM, and replace the path to the `.pem` with the path to where the key file was downloaded.
+
+```console
+ssh -i .\Downloads\myKey.pem azureuser@10.111.12.123
+```
+
+> [!TIP]
> The SSH key you created can be used the next time you create a VM in Azure. Just select **Use a key stored in Azure** for **SSH public key source** the next time you create a VM. You already have the private key on your computer, so you won't need to download anything.
+
+You need to install the mysql-client tool to be able to connect to the server.
+
+```bash
+sudo apt-get update
+sudo apt-get install mysql-client
+```
+
+SSL is enforced on connections to the database, so you need to download the public SSL certificate.
+
+```bash
+wget --no-check-certificate https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem
+```
+
+## Connect to the server from Azure Linux virtual machine
+With the [mysql](https://dev.mysql.com/doc/refman/8.0/en/mysql.html) client tool installed, you can now connect to the server from the virtual machine.
+
+```bash
+mysql -h mydemoserver.mysql.database.azure.com -u mydemouser -p --ssl-mode=REQUIRED --ssl-ca=DigiCertGlobalRootCA.crt.pem
+```
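If you'd rather run the client tools on your local machine, the Linux VM can also serve as the SSH tunnel mentioned earlier. A minimal sketch, reusing the placeholder key file, VM IP address, and server name from this quickstart (adjust the key path for your local shell, and download the SSL certificate locally first):

```bash
# Forward local port 3306 through the VM to the flexible server.
ssh -i ~/Downloads/myKey.pem -L 3306:mydemoserver.mysql.database.azure.com:3306 azureuser@10.111.12.123

# In a second local terminal, connect through the tunnel.
mysql -h 127.0.0.1 -u mydemouser -p --ssl-mode=REQUIRED --ssl-ca=DigiCertGlobalRootCA.crt.pem
```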
+
+## Clean up resources
+You have now created an Azure Database for MySQL flexible server in a resource group. If you don't expect to need these resources in the future, you can delete them by deleting the resource group, or you can just delete the MySQL server. To delete the resource group, complete these steps:
+
+1. In the Azure portal, search for and select **Resource groups**.
+1. In the list of resource groups, select the name of your resource group.
+1. In the **Overview** page for your resource group, select **Delete resource group**.
+1. In the confirmation dialog box, type the name of your resource group, and then select **Delete**.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Build a PHP (Laravel) web app with MySQL](tutorial-php-database-app.md)
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/whats-new.md
Previously updated : 06/18/2021 Last updated : 07/27/2021 # What's new in Azure Database for MySQL - Flexible Server (Preview)?
Last updated 06/18/2021
This article summarizes new releases and features in Azure Database for MySQL - Flexible Server beginning in January 2021. Listings appear in reverse chronological order, with the most recent updates first.
+## July 2021
+
+This release of Azure Database for MySQL - Flexible Server includes the following updates.
+
+- **Online migration from Single Server to Flexible Server**
+
+ Customers can now migrate an instance of Azure Database for MySQL – Single Server to Flexible Server with minimum downtime to their applications by using Data-in Replication. For detailed, step-by-step instructions, see [Migrate Azure Database for MySQL – Single Server to Flexible Server with minimal downtime](/howto-migrate-single-flexible-minimum-downtime).
+
+- **High availability within a single zone using same-zone high availability**
+
+ Azure Database for MySQL – Flexible Server now provides customers with the flexibility to choose the preferred availability zone for their standby server when they enable high availability. With this feature, customers can place a standby server in the same zone as the primary server, which reduces the replication lag between primary and standby. This also provides for lower latencies between the application server and database server if placed within the same Azure zone. [Learn more](https://aka.ms/SameZone-HA).
+
+- **Private DNS zone integration**
+
+ Azure Database for MySQL – Flexible Server now provides integration with an Azure private DNS zone. Integration with an Azure private DNS zone allows seamless resolution of private DNS within the current VNet, or any in-region peered VNet to which the private DNS zone is linked. [Learn more](concepts-networking.md#connecting-from-peered-vnets-in-same-azure-region).
+
+- **Point-In-Time Restore for a server in a specified virtual network**
+
+ The Point-In-Time Restore experience for Azure Database for MySQL – Flexible Server now enables customers to configure networking settings, allowing users to switch between networking options when performing a restore operation. This feature gives customers the flexibility to inject a server being restored into a specified virtual network, securing their connection endpoints. [Learn more](how-to-restore-server-portal.md).
+
+- **Availability in West US and Germany West Central**
+
+ The public preview of Azure Database for MySQL - Flexible Server is now available in the West US and Germany West Central Azure regions.
+ ## June 2021 This release of Azure Database for MySQL - Flexible Server includes the following updates.
network-watcher Connection Monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/connection-monitor-overview.md
To view the trends in RTT and the percentage of failed checks for a test:
Use Log Analytics to create custom views of your monitoring data. All data that the UI displays is from Log Analytics. You can interactively analyze data in the repository. Correlate the data from Agent Health or other solutions that are based in Log Analytics. Export the data to Excel or Power BI, or create a shareable link.
+#### Network Topology in Connection Monitor
+
+Connection Monitor topology is usually built from the results of a traceroute command performed by the agent, which collects all the hops from source to destination.
+However, when either the source or the destination lies within the Azure boundary, the topology is built by merging the results of two distinct operations.
+The first is the result of the traceroute command. The second is the result of an internal command, similar to the Network Watcher next hop diagnostics tool, which identifies a logical route based on the customer's network configuration within the Azure boundary.
+Because the latter is logical and the former doesn't usually identify hops within the Azure boundary, some hops in the merged result (mostly those within the Azure boundary) won't have latency values.
+ #### Metrics in Azure Monitor In connection monitors that were created before the Connection Monitor experience, all four metrics are available: % Probes Failed, AverageRoundtripMs, ChecksFailedPercent, and RoundTripTimeMs. In connection monitors that were created in the Connection Monitor experience, data is available only for ChecksFailedPercent, RoundTripTimeMs and Test Result metrics.
+Metrics are emitted according to the monitoring frequency and describe aspects of a connection monitor at a particular point in time.
+Connection Monitor metrics also have multiple dimensions, such as SourceName, DestinationName, TestConfiguration, and TestGroup.
+You can use these dimensions to visualize a specific subset of data and to target that subset when defining alerts.
+Azure metrics currently allow a minimum granularity of one minute. If the monitoring frequency is less than one minute, aggregated results are displayed.
+ :::image type="content" source="./media/connection-monitor-2-preview/monitor-metrics.png" alt-text="Screenshot showing metrics in Connection Monitor" lightbox="./media/connection-monitor-2-preview/monitor-metrics.png"::: When you use metrics, set the resource type as Microsoft.Network/networkWatchers/connectionMonitors
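As a hedged illustration of using these dimensions programmatically, the following Azure PowerShell sketch retrieves ChecksFailedPercent filtered to a single test configuration; the connection monitor resource ID and the "MyTestConfig" dimension value are placeholders, not values from this article:

```PowerShell
# Sketch: pull ChecksFailedPercent for one test configuration dimension.
# The resource ID and "MyTestConfig" are hypothetical placeholders.
$resourceId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/networkWatchers/<nw>/connectionMonitors/<cm>"
$filter = New-AzMetricFilter -Dimension TestConfiguration -Operator eq -Value "MyTestConfig"
Get-AzMetric -ResourceId $resourceId -MetricName "ChecksFailedPercent" -TimeGrain 00:01:00 -MetricFilter $filter
```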
purview Manage Integration Runtimes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/manage-integration-runtimes.md
This article describes how to create and manage a self-hosted integration runtim
:::image type="content" source="media/manage-integration-runtimes/successfully-registered.png" alt-text="successfully registered.":::
+## Networking requirements
+
+Your self-hosted integration runtime machine will need to connect to several resources to work correctly:
+
+* The sources you want to scan using the self-hosted integration runtime.
+* Any Azure Key Vault used to store credentials for the Purview resource.
+* The managed Storage account and Event Hub resources created by Purview.
+
+The managed Storage and Event Hub resources can be found in your subscription under a resource group containing the name of your Purview resource. Azure Purview uses these resources to ingest the results of the scan, among many other things, so the self-hosted integration runtime will need to be able to connect directly with these resources.
+
+Here are the domains and ports that will need to be allowed through corporate and machine firewalls.
+
+> [!NOTE]
+> For domains listed with '\<managed Purview storage account>', you will add the name of the managed storage account associated with your Purview resource. You can find this resource in the Portal. Search your Resource Groups for a group named: managed-rg-\<your Purview Resource name>. For example: managed-rg-contosoPurview. You will use the name of the storage account in this resource group.
+>
+> For domains listed with '\<managed Event Hub resource>', you will add the name of the managed Event Hub associated with your Purview resource. You can find this in the same Resource Group as the managed storage account.
+
+| Domain names | Outbound ports | Description |
+| -- | -- | - |
+| `*.servicebus.windows.net` | 443 | Global infrastructure Purview uses to run its scans. Wildcard required as there is no dedicated resource. |
+| `<managed Event Hub resource>.servicebus.windows.net` | 443 | Purview uses this to connect with the associated service bus. It will be covered by allowing the above domain, but if you are using Private Endpoints, you will need to test access to this single domain.|
+| `*.frontend.clouddatahub.net` | 443 | Global infrastructure Purview uses to run its scans. Wildcard required as there is no dedicated resource. |
+| `<managed Purview storage account>.core.windows.net` | 443 | Used by the self-hosted integration runtime to connect to the managed Azure storage account.|
+| `<managed Purview storage account>.queue.core.windows.net` | 443 | Queues used by Purview to run the scan process. |
+| `<your Key Vault Name>.vault.azure.net` | 443 | Required if any credentials are stored in Azure Key Vault. |
+| `download.microsoft.com` | 443 | Optional for SHIR updates. |
+| Various domains | Varies | Domains for any other sources the SHIR will connect to. |
+
+
+> [!IMPORTANT]
+> In most environments, you will also need to confirm that your DNS is correctly configured. To confirm, you can use **nslookup** from your SHIR machine to check connectivity to each of the above domains. Each nslookup should return the IP of the resource. If you are using [Private Endpoints](catalog-private-link.md), the private IP should be returned and not the public IP. If no IP is returned, or if the public IP is returned when using Private Endpoints, you will need to address your DNS/VNET association, or your Private Endpoint/VNET peering.
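+
+To script that check, here's a minimal sketch you could run from the SHIR machine; the storage account and Key Vault names below are placeholders for your own resources:
+
+```PowerShell
+# Sketch: verify DNS resolution for the required domains from the SHIR machine.
+# The storage account and Key Vault names are hypothetical placeholders.
+$domains = @(
+    "<managed Purview storage account>.core.windows.net",
+    "<managed Purview storage account>.queue.core.windows.net",
+    "<your Key Vault Name>.vault.azure.net",
+    "download.microsoft.com"
+)
+foreach ($domain in $domains) {
+    try {
+        $records = Resolve-DnsName -Name $domain -ErrorAction Stop
+        $ip = ($records | Where-Object { $_.IPAddress } | Select-Object -First 1).IPAddress
+        Write-Output "$domain resolves to $ip"
+    } catch {
+        Write-Warning "$domain did not resolve; check your DNS/VNET association."
+    }
+}
+```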
+ ## Manage a self-hosted integration runtime You can edit a self-hosted integration runtime by navigating to **Integration runtimes** in the **Management center**, selecting the IR and then clicking on edit. You can now update the description, copy the key, or regenerate new keys.
search Cognitive Search Attach Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-attach-cognitive-services.md
Last updated 02/16/2021
# Attach a Cognitive Services resource to a skillset in Azure Cognitive Search
-When configuring an [AI enrichment pipeline](cognitive-search-concept-intro.md) in Azure Cognitive Search, you can enrich a limited number of documents free of charge. For larger and more frequent workloads, you should attach a billable "all-in-one" Cognitive Services resource. An "all-in-one" subscription references "Cognitive Services" as the offering, rather than individual services, with access granted through a single API key.
+When configuring an [AI enrichment pipeline](cognitive-search-concept-intro.md) in Azure Cognitive Search, you can enrich a limited number of documents free of charge. For larger and more frequent workloads, you should attach a billable "all-in-one" Cognitive Services resource. An "all-in-one" subscription references "Cognitive Services" as the offering, rather than individual services, with access granted through a single API key.
-An "all-in-one" Cognitive Services resource drives the [predefined skills](cognitive-search-predefined-skills.md) that you can include in a skillset:
+An "all-in-one" Cognitive Services resource drives the [built-in skills](cognitive-search-predefined-skills.md) that you can include in a skillset:
+ [Computer Vision](https://azure.microsoft.com/services/cognitive-services/computer-vision/) for image analysis and optical character recognition (OCR) + [Text Analytics](https://azure.microsoft.com/services/cognitive-services/text-analytics/) for language detection, entity recognition, sentiment analysis, and key phrase extraction
An "all-in-one" Cognitive Services key is optional in a skillset definition. Whe
Any "all-in-one" resource key is valid. Internally, a search service will use the resource that's co-located in the same physical region, even if the "all-in-one" key is for a resource in a different region. The [product availability](https://azure.microsoft.com/global-infrastructure/services/?products=search) page shows regional availability side by side. > [!NOTE]
-> If you omit predefined skills in a skillset, then Cognitive Services is not accessed, and you won't be charged, even if the skillset specifies a key.
+> If you omit built-in skills in a skillset, then Cognitive Services is not accessed, and you won't be charged, even if the skillset specifies a key.
## How billing works
search Search Features List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-features-list.md
Azure Cognitive Search provides a full-text search engine, persistent storage of
| Category&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Features | |-|-|
-| Data sources | Search indexes can accept text from any source, provided it is submitted as a JSON document. <br/><br/> [**Indexers**](search-indexer-overview.md) automate data ingestion from supported Azure data sources and handle JSON serialization. Connect to [Azure SQL Database](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md), [Azure Cosmos DB](search-howto-index-cosmosdb.md), or [Azure Blob storage](search-howto-indexing-azure-blob-storage.md) to extract searchable content in primary data stores. Azure Blob indexers can perform *document cracking* to [extract text from major file formats](search-howto-indexing-azure-blob-storage.md), including Microsoft Office, PDF, and HTML documents. |
+| Data sources | Search indexes can accept text from any source, provided it is submitted as a JSON document. <br/><br/> [**Indexers**](search-indexer-overview.md) are a feature that automates data import from supported data sources to extract searchable content in primary data stores. Indexers handle JSON serialization for you. You can connect to [various data sources](search-data-sources-gallery.md), including [Azure SQL Database](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md), [Azure Cosmos DB](search-howto-index-cosmosdb.md), or [Azure Blob storage](search-howto-indexing-azure-blob-storage.md). |
| Hierarchical and nested data structures | [**Complex types**](search-howto-complex-data-types.md) and collections allow you to model virtually any type of JSON structure within a search index. One-to-many and many-to-many cardinality can be expressed natively through collections, complex types, and collections of complex types.| | Linguistic analysis | Analyzers are components used for text processing during indexing and search operations. By default, you can use the general-purpose Standard Lucene analyzer, or override the default with a language analyzer, a custom analyzer that you configure, or another predefined analyzer that produces tokens in the format you require. <br/><br/>[**Language analyzers**](index-add-language-analyzers.md) from Lucene or Microsoft are used to intelligently handle language-specific linguistics including verb tenses, gender, irregular plural nouns (for example, 'mouse' vs. 'mice'), word de-compounding, word-breaking (for languages with no spaces), and more. <br/><br/>[**Custom lexical analyzers**](index-add-custom-analyzers.md) are used for complex query forms such as phonetic matching and regular expressions.<br/><br/> |
search Search Security Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-security-rbac.md
Azure resources have the concept of [control plane and data plane](../azure-reso
## Configure Search for data plane authentication
-If you are using any of the preview data plane roles (Search Index Data Contributor or Search Index Data Reader) and Azure AD authentication, your search service must be configured to recognize an **authorization** header on data requests that provides an OAuth2 access token.
+If you are using any of the preview data plane roles (Search Index Data Contributor or Search Index Data Reader) and Azure AD authentication, your search service must be configured to recognize an **authorization** header on data requests that provides an OAuth2 access token. This section provides instructions for configuring your search service.
-You can skip this step if you are using API keys only.
+Before you start, [sign up](https://aka.ms/azure-cognitive-search/rbac-preview) for the RBAC preview. Your subscription must be enrolled into the program before you can use this feature. It can take up to two business days for preview enrollment. You'll receive an email when your service is ready.
### [**Azure portal**](#tab/config-svc-portal)
-Set the feature flag on the portal URL to work with the preview roles: Search Service Contributor, Search Index Data Contributor, and Search Index Data Reader.
- 1. Open the portal with this syntax: [https://ms.portal.azure.com/?feature.enableRbac=true](https://ms.portal.azure.com/?feature.enableRbac=true). 1. Navigate to your search service. 1. Select **Keys** in the left navigation pane.
-1. Choose an **API access control** mechanism:
+1. Choose an **API access control** mechanism. If you don't see these options, make sure you are enrolled and that you used the URL from the first step.
| Option | Status | Description | |--|--|-|
sentinel Cef Name Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/cef-name-mapping.md
Previously updated : 04/12/2021 Last updated : 07/26/2021 # CEF and CommonSecurityLog field mapping
The following tables map Common Event Format (CEF) field names to the names they
For more information, see [Connect your external solution using Common Event Format](connect-common-event-format.md).
+> [!NOTE]
+> An Azure Sentinel workspace is required in order to [ingest CEF data](connect-common-event-format.md#prerequisites) into Log Analytics.
+>
+ ## A - C |CEF key name |CommonSecurityLog field name |Description |
sentinel Connect Cef Solution Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-cef-solution-config.md
Title: Configure your security solution to connect CEF data to Azure Sentinel Preview| Microsoft Docs
+ Title: Configure your security solution to connect CEF data to Azure Sentinel | Microsoft Docs
description: Learn how to configure your security solution to connect CEF data to Azure Sentinel. documentationcenter: na
sentinel Connect Common Event Format https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-common-event-format.md
Title: Connect CEF data to Azure Sentinel Preview| Microsoft Docs
+ Title: Connect CEF data to Azure Sentinel | Microsoft Docs
description: Connect an external solution that sends Common Event Format (CEF) messages to Azure Sentinel, using a Linux machine as a log forwarder. documentationcenter: na
ms.devlang: na
na Previously updated : 10/01/2020 Last updated : 07/26/2021
To use TLS communication between the Syslog source and the Syslog Forwarder, you
## Prerequisites
+An Azure Sentinel workspace is required in order to ingest CEF data into Log Analytics.
+ Make sure the Linux machine you use as a log forwarder is running one of the following operating systems: - 64-bit
sentinel False Positives https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/false-positives.md
This article describes two methods for avoiding false positives:
- **Automation rules** create exceptions without modifying analytics rules. - **Scheduled analytics rules modifications** permit more detailed and permanent exceptions.
-
+ The following table describes characteristics of each method:
-
+ |Method|Characteristic| |-|-| |**Automation rules**|<ul><li>Can apply to several analytics rules.</li><li>Keep an audit trail. Exceptions prevent incident creation, but alerts are still recorded for audit purposes.</li><li>Are often generated by analysts.</li><li>Allow applying exceptions for a limited time. For example, maintenance work might trigger false positives that outside the maintenance timeframe would be true incidents.</li></ul>|
To add an automation rule to handle a false positive:
1. In the **Create new automation rule** sidebar, optionally modify the new rule name to identify the exception, rather than just the alert rule name. 1. Under **Conditions**, optionally add more **Analytic rule name**s to apply the exception to. 1. The sidebar presents the specific entities in the current incident that might have caused the false positive. Keep the automatic suggestions, or modify them to fine-tune the exception. For example, you could change a condition on an IP address to apply to an entire subnet.
-
+ :::image type="content" source="media/false-positives/create-rule.png" alt-text="Screenshot showing how to create an automation rule for an incident in Azure Sentinel.":::
-
+ 1. After you define the trigger, you can continue to define what the rule does:
-
+ :::image type="content" source="media/false-positives/apply-rule.png" alt-text="Screenshot showing how to finish creating and applying an automation rule in Azure Sentinel.":::
-
+ - The rule is already configured to close an incident that meets the exception criteria. - You can add a comment to the automatically closed incident that explains the exception. For example, you could specify that the incident originated from known administrative activity. - By default, the rule is set to expire automatically after 24 hours. This expiration might be what you want, and reduces the chance of false negative errors. If you want a longer exception, set **Rule expiration** to a later time.
-
+ 1. Select **Apply** to activate the exception. > [!TIP]
let subnets = _GetWatchlist('subnetallowlist');
## Next steps For more information, see:
+- [Use UEBA data to analyze false positives](investigate-with-ueba.md#use-ueba-data-to-analyze-false-positives)
- [Automate incident handling in Azure Sentinel with automation rules](automate-incident-handling-with-automation-rules.md) - [Create custom analytics rules to detect threats](tutorial-detect-threats-custom.md) - [Use Azure Sentinel watchlists](watchlists.md)
sentinel Investigate With Ueba https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/investigate-with-ueba.md
ms.devlang: na
na Previously updated : 07/15/2021 Last updated : 07/27/2021
The user entity page is also linked from the [incident page](tutorial-investigat
> In the **Hunting** area, run the **Anomalous Geo Location Logon** query. For more information, see [Hunt for threats with Azure Sentinel](hunting.md). >
+### Embed IdentityInfo data in your analytics rules (Public Preview)
+
+As attackers often use the organization's own user and service accounts, data about those user accounts, including their identification and privileges, is crucial for analysts during an investigation.
+
+Embed data from the **IdentityInfo** table in your analytics rules to fine-tune them to fit your use cases, reduce false positives, and possibly speed up your investigation process.
+
+For example:
+
+- To correlate security events with the **IdentityInfo** table in an alert that's triggered if a server is accessed by someone outside the **IT** department:
+
+ ```kusto
+ SecurityEvent
+ | where EventID in ("4624","4672")
+ | where Computer == "My.High.Value.Asset"
+ | join kind=inner (
+ IdentityInfo
+ | summarize arg_max(TimeGenerated, *) by AccountObjectId) on $left.SubjectUserSid == $right.AccountSID
+ | where Department != "IT"
+ ```
+
+- To correlate Azure AD sign-in logs with the **IdentityInfo** table in an alert that's triggered if an application is accessed by someone who isn't a member of a specific security group:
+
+ ```kusto
+ SigninLogs
+    | where AppDisplayName == "GitHub.Com"
+ | join kind=inner (
+ IdentityInfo
+ | summarize arg_max(TimeGenerated, *) by AccountObjectId) on $left.UserId == $right.AccountObjectId
+ | where GroupMembership !contains "Developers"
+ ```
+
+The **IdentityInfo** table synchronizes with your Azure AD workspace to create a snapshot of your user profile data, such as user metadata, group information, and Azure AD roles assigned to each user. For more information, see [IdentityInfo table](ueba-enrichments.md#identityinfo-table-public-preview) in the UEBA enrichments reference.
+ ## Identify password spray and spear phishing attempts Without multi-factor authentication (MFA) enabled, user credentials are vulnerable to attackers looking to compromise attacks with [password spraying](https://www.microsoft.com/security/blog/2020/04/23/protecting-organization-password-spray-attacks/) or [spear phishing](https://www.microsoft.com/security/blog/2019/12/02/spear-phishing-campaigns-sharper-than-you-think/) attempts.
sentinel Sap Solution Deploy Alternate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/sap-solution-deploy-alternate.md
x509pkicert = <SET_YOUR_X509_PKI_CERTIFICATE>
appserver = <SET_YOUR_SAPCTRL_SERVER IP OR FQDN> instance = <SET_YOUR_SAP_INSTANCE NUMBER, example 10> abapseverity = <SET_ABAP_SEVERITY 0 = All logs ; 1 = Warning ; 2 = Error>
-abaptz = <SET_ABAP_TZ for example GMT-3>
+abaptz = <SET_ABAP_TZ --Use ONLY GMT FORMAT-- example - For OS Timezone = NZST use abaptz = GMT+12>
[File Extraction JAVA] javaosuser = <SET_YOUR_JAVAADM_LIKE_USER>
javax509pkicert = <SET_YOUR_X509_PKI_CERTIFICATE>
javaappserver = <SET_YOUR_JAVA_SAPCTRL_SERVER IP ADDRESS OR FQDN> javainstance = <SET_YOUR_JAVA_SAP_INSTANCE for example 10> javaseverity = <SET_JAVA_SEVERITY 0 = All logs ; 1 = Warning ; 2 = Error>
-javatz = <SET_JAVA_TZ for example GMT-3>
+javatz = <SET_JAVA_TZ --Use ONLY GMT FORMAT-- example - For OS Timezone = NZST use javatz = GMT+12>
``` ### Define the SAP logs that are sent to Azure Sentinel
sentinel Sap Solution Log Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/sap-solution-log-reference.md
This article is intended for advanced SAP users.
## ABAP Gateway log -- **Name in Azure Sentinel**: `GW_CL`
+- **Name in Azure Sentinel**: `ABAPOS_GW_CL`
- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/62b4de4187cb43668d15dac48fc00732/7.5.7/en-US/48b2a710ca1c3079e10000000a42189b.html) - **Log purpose**: Monitors Gateway activities. Available by the SAP Control Web Service. This log is generated with data across all clients.
-### GW_CL log schema
+### ABAPOS_GW_CL log schema
| Field | Description | | | - |
This article is intended for advanced SAP users.
## ABAP ICM log -- **Name in Azure Sentinel**: `ICM_CL`
+- **Name in Azure Sentinel**: `ABAPOS_ICM_CL`
- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/683d6a1797a34730a6e005d1e8de6f22/7.52.4/en-US/a10ec40d01e740b58d0a5231736c434e.html)
This article is intended for advanced SAP users.
Available by the SAP Control Web Service. This log is generated with data across all clients.
-### ICM_CL log schema
+### ABAPOS_ICM_CL log schema
| Field | Description | | | - |
This article is intended for advanced SAP users.
## ABAP SysLog -- **Name in Azure Sentinel**: `SysLog_CL`
+- **Name in Azure Sentinel**: `ABAPOS_Syslog_CL`
- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/56bf1265a92e4b4d9a72448c579887af/7.5.7/en-US/c769bcbaf36611d3a6510000e835363f.html)
This article is intended for advanced SAP users.
Available by the SAP Control Web Service. This log is generated with data across all clients.
-### SysLog_CL log schema
+### ABAPOS_Syslog_CL log schema
| Field | Description |
This article is intended for advanced SAP users.
## ABAP WorkProcess log -- **Name in Azure Sentinel**: `WP_CL`
+- **Name in Azure Sentinel**: `ABAPOS_WP_CL`
- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/d0739d980ecf42ae9f3b4c19e21a4b6e/7.3.15/en-US/46fb763b6d4c5515e10000000a1553f6.html)
This article is intended for advanced SAP users.
Available by the SAP Control Web Service. This log is generated with data across all clients.
-### WP_CL log schema
+### ABAPOS_WP_CL log schema
| Field | Description |
sentinel Ueba Enrichments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/ueba-enrichments.md
The [ActivityInsights](#activityinsights-field) field contains entity informatio
<a name="baseline-explained"></a>User activities are analyzed against a baseline that is dynamically compiled each time it is used. Each activity has its defined lookback period from which the dynamic baseline is derived. The lookback period is specified in the [**Baseline**](#activityinsights-field) column in this table.
-> [!NOTE]
+> [!NOTE]
> The **Enrichment name** column in all the [entity enrichment field](#entity-enrichments-dynamic-fields) tables displays two rows of information.
->
+>
> - The first, in **bold**, is the "friendly name" of the enrichment. > - The second *(in italics and parentheses)* is the field name of the enrichment as stored in the [**Behavior Analytics table**](#behavioranalytics-table).
+> [!IMPORTANT]
+> Noted features are currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
## BehaviorAnalytics table The following table describes the behavior analytics data displayed on each [entity details page](identify-threats-with-entity-behavior-analytics.md#how-to-use-entity-pages) in Azure Sentinel. | Field | Type | Description | |||--|
-| **TenantId** | string | unique ID number of the tenant |
-| **SourceRecordId** | string | unique ID number of the EBA event |
-| **TimeGenerated** | datetime | timestamp of the activity's occurrence |
-| **TimeProcessed** | datetime | timestamp of the activity's processing by the EBA engine |
-| **ActivityType** | string | high-level category of the activity |
-| **ActionType** | string | normalized name of the activity |
-| **UserName** | string | username of the user that initiated the activity |
-| **UserPrincipalName** | string | full username of the user that initiated the activity |
-| **EventSource** | string | data source that provided the original event |
-| **SourceIPAddress** | string | IP address from which activity was initiated |
-| **SourceIPLocation** | string | country from which activity was initiated, enriched from IP address |
-| **SourceDevice** | string | hostname of the device that initiated the activity |
-| **DestinationIPAddress** | string | IP address of the target of the activity |
-| **DestinationIPLocation** | string | country of the target of the activity, enriched from IP address |
-| **DestinationDevice** | string | name of the target device |
-| **UsersInsights** | dynamic | contextual enrichments of involved users ([details below](#usersinsights-field)) |
-| **DevicesInsights** | dynamic | contextual enrichments of involved devices ([details below](#devicesinsights-field)) |
-| **ActivityInsights** | dynamic | contextual analysis of activity based on our profiling ([details below](#activityinsights-field)) |
-| **InvestigationPriority** | int | anomaly score, between 0-10 (0=benign, 10=highly anomalous) |
-|
+| **TenantId** | string | The unique ID number of the tenant. |
+| **SourceRecordId** | string | The unique ID number of the EBA event. |
+| **TimeGenerated** | datetime | The timestamp of the activity's occurrence. |
+| **TimeProcessed** | datetime | The timestamp of the activity's processing by the EBA engine. |
+| **ActivityType** | string | The high-level category of the activity. |
+| **ActionType** | string | The normalized name of the activity. |
+| **UserName** | string | The username of the user that initiated the activity. |
+| **UserPrincipalName** | string | The full username of the user that initiated the activity. |
+| **EventSource** | string | The data source that provided the original event. |
+| **SourceIPAddress** | string | The IP address from which activity was initiated. |
+| **SourceIPLocation** | string | The country from which activity was initiated, enriched from IP address. |
+| **SourceDevice** | string | The hostname of the device that initiated the activity. |
+| **DestinationIPAddress** | string | The IP address of the target of the activity. |
+| **DestinationIPLocation** | string | The country of the target of the activity, enriched from IP address. |
+| **DestinationDevice** | string | The name of the target device. |
+| **UsersInsights** | dynamic | The contextual enrichments of involved users ([details below](#usersinsights-field)). |
+| **DevicesInsights** | dynamic | The contextual enrichments of involved devices ([details below](#devicesinsights-field)). |
+| **ActivityInsights** | dynamic | The contextual analysis of activity based on our profiling ([details below](#activityinsights-field)). |
+| **InvestigationPriority** | int | The anomaly score, between 0-10 (0=benign, 10=highly anomalous). |
++ ## Entity enrichments dynamic fields
The following tables describe the enrichments featured in the **ActivityInsights
| **Unusual number of users added to group**<br>*(UnusualNumberOfUsersAddedToGroup)* | 5 | A user added an unusual number of users to a group. | True, False | | +
+## IdentityInfo table (Public Preview)
+
+After you [enable UEBA](enable-entity-behavior-analytics.md) for your Azure Sentinel workspace, data from your Azure Active Directory is synchronized to the **IdentityInfo** table in Log Analytics for use in Azure Sentinel. You can embed user data synchronized from your Azure AD in your analytics rules to enhance your analytics, fit your use cases, and reduce false positives.
+
+While the initial synchronization may take a few days, once the data is fully synchronized:
+
+- Changes made to your user profiles in Azure AD are updated in the **IdentityInfo** table within 15 minutes.
+
+- Group and role information is synchronized between the **IdentityInfo** table and Azure AD daily.
+
+- Every 21 days, Azure Sentinel re-synchronizes with your entire Azure AD to ensure that stale records are fully updated.
+
+- Default retention time in the **IdentityInfo** table is 30 days.
++
+> [!NOTE]
+> Currently, only built-in roles are supported.
+>
+> Data about deleted groups, where a user was removed from a group, is not currently supported.
+>
+
+The following table describes the user identity data included in the **IdentityInfo** table in Log Analytics.
+
+| Field | Type | Description |
+| | -- | -- |
+| **AccountCloudSID** | string | The Azure AD security identifier of the account. |
+| **AccountCreationTime** | datetime | The date the user account was created (UTC). |
+| **AccountDisplayName** | string | The display name of the user account. |
+| **AccountDomain** | string | The domain name of the user account. |
+| **AccountName** | string | The user name of the user account. |
+| **AccountObjectId** | string | The Azure Active Directory object ID for the user account. |
+| **AccountSID** | string | The on-premises security identifier of the user account. |
+| **AccountTenantId** | string | The Azure Active Directory tenant ID of the user account. |
+| **AccountUPN** | string | The user principal name of the user account. |
+| **AdditionalMailAddresses** | dynamic | The additional email addresses of the user. |
+| **AssignedRoles** | dynamic | The Azure AD roles the user account is assigned to. |
+| **City** | string | The city of the user account. |
+| **Country** | string | The country of the user account. |
+| **DeletedDateTime** | datetime | The date and time the user was deleted. |
+| **Department** | string | The department of the user account. |
+| **GivenName** | string | The given name of the user account. |
+| **GroupMembership** | dynamic | Azure AD Groups where the user account is a member. |
+| **IsAccountEnabled** | bool | An indication as to whether the user account is enabled in Azure AD or not. |
+| **JobTitle** | string | The job title of the user account. |
+| **MailAddress** | string | The primary email address of the user account. |
+| **Manager** | string | The manager alias of the user account. |
+| **OnPremisesDistinguishedName** | string | The Azure AD distinguished name (DN). A distinguished name is a sequence of relative distinguished names (RDN), connected by commas. |
+| **Phone** | string | The phone number of the user account. |
+| **SourceSystem** | string | The system where the user data originated. |
+| **State** | string | The geographical state of the user account. |
+| **StreetAddress** | string | The office street address of the user account. |
+| **Surname** | string | The surname of the user account. |
+| **TenantId** | string | The tenant ID of the user. |
+| **TimeGenerated** | datetime | The time when the event was generated (UTC). |
+| **Type** | string | The name of the table. |
+| **UserState** | string | The current state of the user account in Azure AD (Active/Disabled/Dormant/Lockout). |
+| **UserStateChangedOn** | datetime | The date of the last time the account state was changed (UTC). |
+| **UserType** | string | The user type. |
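+
+As a hedged example of reading this table from PowerShell, the following sketch pulls the latest IdentityInfo record per user; the workspace GUID is a placeholder, and the arg_max pattern keeps only the most recent record for each account:
+
+```PowerShell
+# Sketch: query the latest IdentityInfo snapshot per user from Log Analytics.
+# "<workspace-guid>" is a hypothetical placeholder for your workspace ID.
+$query = @"
+IdentityInfo
+| summarize arg_max(TimeGenerated, *) by AccountObjectId
+| project AccountUPN, AccountDisplayName, Department, GroupMembership, AssignedRoles
+"@
+Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query
+```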
++ ## Next steps This document described the Azure Sentinel entity behavior analytics table schema.
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/whats-new.md
Previously updated : 07/21/2021 Last updated : 07/27/2021 # What's new in Azure Sentinel
If you're looking for items older than six months, you'll find them in the [Arch
## July 2021
+- [Use Azure AD data with Azure Sentinel's IdentityInfo table (Public preview)](#use-azure-ad-data-with-azure-sentinels-identityinfo-table-public-preview)
- [Enrich Entities with geolocation data via API (Public preview)](#enrich-entities-with-geolocation-data-via-api-public-preview) - [Support for ADX cross-resource queries (Public preview)](#support-for-adx-cross-resource-queries-public-preview) - [Watchlists are in general availability](#watchlists-are-in-general-availability) - [Support for data residency in more geos](#support-for-data-residency-in-more-geos) - [Bidirectional sync in Azure Defender connector (Public preview)](#bidirectional-sync-in-azure-defender-connector-public-preview)
+### Use Azure AD data with Azure Sentinel's IdentityInfo table (Public preview)
+
+As attackers often use the organization's own user and service accounts, data about those user accounts, including their identification and privileges, is crucial for analysts during an investigation.
+
+Now, having [UEBA enabled](enable-entity-behavior-analytics.md) in your Azure Sentinel workspace also synchronizes Azure AD data into the new **IdentityInfo** table in Log Analytics. Synchronizations between your Azure AD and the **IdentityInfo** table create a snapshot of your user profile data that includes user metadata, group information, and the Azure AD roles assigned to each user.
+
+Use the **IdentityInfo** table during investigations and when fine-tuning analytics rules for your organization to reduce false positives.
+
+For more information, see [IdentityInfo table](ueba-enrichments.md#identityinfo-table-public-preview) in the UEBA enrichments reference and [Use UEBA data to analyze false positives](investigate-with-ueba.md#use-ueba-data-to-analyze-false-positives).
+ ### Enrich entities with geolocation data via API (Public preview) Azure Sentinel now offers an API to enrich your data with geolocation information. Geolocation data can then be used to analyze and investigate security incidents.
Although Log Analytics remains the primary data storage location for performing
To query data stored in ADX clusters, use the adx() function to specify the ADX cluster, database name, and desired table. You can then query the output as you would any other table. See more information in the pages linked above. ++ ### Watchlists are in general availability The [watchlists](watchlists.md) feature is now generally available. Use watchlists to enrich alerts with business data, to create allowlists or blocklists against which to check access events, and to help investigate threats and reduce alert fatigue.
service-bus-messaging Service Bus To Event Grid Integration Example https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-to-event-grid-integration-example.md
description: This article provides steps for handling Service Bus events via Eve
documentationcenter: .net Previously updated : 10/16/2020 Last updated : 07/26/2021
In this step, you create an Azure logic app that receives Service Bus events via
6. Select **Review + Create**. 1. On the **Review + Create** page, select **Create** to create the logic app. 1. On the **Logic Apps Designer** page, select **Blank Logic App** under **Templates**. +
+### Add a step to receive messages from Service Bus via Event Grid
1. On the designer, do the following steps: 1. Search for **Event Grid**. 2. Select **When a resource event occurs - Azure Event Grid**.
In this step, you create an Azure logic app that receives Service Bus events via
8. Select your **topic** and **subscription**. ![Screenshot that shows where you select your topic and subscription.](./media/service-bus-to-event-grid-integration-example/logic-app-select-topic-subscription.png)
-7. Select **+ New step**, and do the following steps:
- 1. Select **Service Bus**.
+
+### Add a step to process and complete received messages
+In this step, you'll add steps to send the received message in an email and then complete the message. In a real-world scenario, you'll process a message in the logic app before completing the message.
+
+#### Add a foreach loop
+1. Select **+ New step**.
+1. Search for and then select **Control**.
+
+ :::image type="content" source="./media/service-bus-to-event-grid-integration-example/select-control.png" alt-text="Image showing selection of Control category":::
+1. In the **Actions** list, select **For each**.
+
+ :::image type="content" source="./media/service-bus-to-event-grid-integration-example/select-for-each.png" alt-text="Image showing selection of For each control":::
+1. For **Select an output from previous steps** (click inside the text box if needed), select **Body** under **Get messages from a topic subscription (peek-lock)**.
+
+ :::image type="content" source="./media/service-bus-to-event-grid-integration-example/select-input-for-each.png" alt-text="Image showing the selection of input to For each":::
+
+#### Add a step inside the foreach loop to send an email with the message body
+
+1. Within **For Each** loop, select **Add an action**.
+
+ :::image type="content" source="./media/service-bus-to-event-grid-integration-example/select-add-action.png" alt-text="Image showing the selection of add an action button inside the for each loop":::
+1. In the **Search connectors and actions** text box, enter **Office 365**.
+1. Select **Office 365 Outlook** in the search results.
+1. In the list of actions, select **Send an email (V2)**.
+1. Select inside the text box for **Body**, and follow these steps:
+ 1. Switch to **Expression**.
+ 1. Enter `base64ToString(items('For_each')?['ContentData'])`.
+ 1. Select **OK**.
+
+ :::image type="content" source="./media/service-bus-to-event-grid-integration-example/specify-expression-email.png" alt-text="Image showing the expression for Body of the Send an email activity":::
+1. For **Subject**, enter **Message received from Service Bus topic's subscription**.
+1. For **To**, enter an email address.
+
+ :::image type="content" source="./media/service-bus-to-event-grid-integration-example/send-email-configured.png" alt-text="Image showing the Send email activity configured":::
+
+#### Add another action in the foreach loop to complete the message
+1. Within **For Each** loop, select **Add an action**.
+ 1. Select **Service Bus** in the **Recent** list.
2. Select **Complete the message in a topic subscription** from the list of actions. 3. Select your Service Bus **topic**. 4. Select the second **subscription** to the topic.
In this step, you create an Azure logic app that receives Service Bus events via
8. Select **Save** on the toolbar on the Logic Apps Designer to save the logic app. :::image type="content" source="./media/service-bus-to-event-grid-integration-example/save-logic-app.png" alt-text="Save logic app":::+
+## Test the app
1. If you haven't already sent test messages to the topic, follow instructions in the [Send messages to the Service Bus topic](#send-messages-to-the-service-bus-topic) section to send messages to the topic. 1. Switch to the **Overview** page of your logic app. You see the logic app runs in the **Runs history** for the messages sent. It could take a few minutes before you see the logic app runs. Select **Refresh** on the toolbar to refresh the page. ![Logic Apps Designer - logic app runs](./media/service-bus-to-event-grid-integration-example/logic-app-runs.png) 1. Select a logic app run to see the details. Notice that it processed 5 messages in the for loop.
- :::image type="content" source="./media/service-bus-to-event-grid-integration-example/logic-app-run-details.png" alt-text="Logic app run details":::
+ :::image type="content" source="./media/service-bus-to-event-grid-integration-example/logic-app-run-details.png" alt-text="Logic app run details":::
+2. You should get an email for each message that's received by the logic app.
## Troubleshoot If you don't see any invocations after waiting and refreshing for some time, follow these steps:
service-fabric Service Fabric Cluster Fabric Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-cluster-fabric-settings.md
The following is a list of Fabric settings that you can customize, organized by
| **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** | | | | | | |DeployedState |wstring, default is L"Disabled" |Static |2-stage removal of CSS. |
+|UpdateEncryptionCertificateTimeout |TimeSpan, default is Common::TimeSpan::MaxValue |Static |Specify timespan in seconds. The default has changed to TimeSpan::MaxValue, but overrides are still respected. May be deprecated in the future. |
## ClusterManager
The following is a list of Fabric settings that you can customize, organized by
| **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** | | | | | | |AllowCreateUpdateMultiInstancePerNodeServices |Bool, default is false |Dynamic|Allows creation of multiple stateless instances of a service per node. This feature is currently in preview. |
+|EnableAuxiliaryReplicas |Bool, default is false |Dynamic|Enable creation or update of auxiliary replicas on services. If true; upgrades from SF version 8.1+ to lower targetVersion will be blocked. |
|PerfMonitorInterval |Time in seconds, default is 1 |Dynamic|Specify timespan in seconds. Performance monitoring interval. Setting to 0 or negative value disables monitoring. | ## DefragmentationEmptyNodeDistributionPolicy
The following is a list of Fabric settings that you can customize, organized by
| **Parameter** | **Allowed Values** |**Upgrade Policy**| **Guidance or Short Description** | | | | | | |EnablePartitionedQuery|bool, default is FALSE|Static|The flag to enable support for DNS queries for partitioned services. The feature is turned off by default. For more information, see [Service Fabric DNS Service.](service-fabric-dnsservice.md)|
+|ForwarderPoolSize|Int, default is 20|Static|The number of forwarders in the forwarding pool.|
+|ForwarderPoolStartPort|Int, default is 16700|Static|The start address for the forwarding pool that is used for recursive queries.|
|InstanceCount|int, default is -1|Static|Default value is -1 which means that DnsService is running on every node. OneBox needs this to be set to 1 since DnsService uses well known port 53, so it cannot have multiple instances on the same machine.| |IsEnabled|bool, default is FALSE|Static|Enables/Disables DnsService. DnsService is disabled by default and this config needs to be set to enable it. | |PartitionPrefix|string, default is "--"|Static|Controls the partition prefix string value in DNS queries for partitioned services. The value : <ul><li>Should be RFC-compliant as it will be part of a DNS query.</li><li>Should not contain a dot, '.', as dot interferes with DNS suffix behavior.</li><li>Should not be longer than 5 characters.</li><li>Cannot be an empty string.</li><li>If the PartitionPrefix setting is overridden, then PartitionSuffix must be overridden, and vice-versa.</li></ul>For more information, see [Service Fabric DNS Service.](service-fabric-dnsservice.md).| |PartitionSuffix|string, default is ""|Static|Controls the partition suffix string value in DNS queries for partitioned services.The value : <ul><li>Should be RFC-compliant as it will be part of a DNS query.</li><li>Should not contain a dot, '.', as dot interferes with DNS suffix behavior.</li><li>Should not be longer than 5 characters.</li><li>If the PartitionPrefix setting is overridden, then PartitionSuffix must be overridden, and vice-versa.</li></ul>For more information, see [Service Fabric DNS Service.](service-fabric-dnsservice.md). |
-|RetryTransientFabricErrors|Bool, default is true|Static|The setting controls the retry capabilities when calling Service Fabric APIs from DnsService. When enabled, it retries up to 3 times if a transient error occurs.|
+|TransientErrorMaxRetryCount|Int, default is 3|Static|Controls the number of times SF DNS will retry when a transient error occurs while calling SF APIs (e.g. when retrieving names and endpoints).|
+|TransientErrorRetryIntervalInMillis|Int, default is 0|Static|Sets the delay in milliseconds between retries for when SF DNS calls SF APIs.|
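+
+As a hedged sketch, assuming an Azure-hosted cluster with placeholder resource group and cluster names, the following Azure PowerShell overrides the ForwarderPoolSize parameter described above:
+
+```PowerShell
+# Sketch: override the DnsService ForwarderPoolSize setting on a cluster.
+# "myResourceGroup" and "mysfcluster" are hypothetical placeholders.
+Set-AzServiceFabricSetting -ResourceGroupName "myResourceGroup" `
+    -Name "mysfcluster" `
+    -Section "DnsService" `
+    -Parameter "ForwarderPoolSize" `
+    -Value "32"
+```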
## EventStoreService
The following is a list of Fabric settings that you can customize, organized by
|DeploymentRetryBackoffInterval| TimeSpan, default is Common::TimeSpan::FromSeconds(10)|Dynamic|Specify timespan in seconds. Back-off interval for the deployment failure. On every continuous deployment failure the system will retry the deployment for up to the MaxDeploymentFailureCount. The retry interval is a product of continuous deployment failure and the deployment backoff interval. | |DisableContainers|bool, default is FALSE|Static|Config for disabling containers - used instead of DisableContainerServiceStartOnContainerActivatorOpen which is deprecated config | |DisableDockerRequestRetry|bool, default is FALSE |Dynamic| By default SF communicates with DD (docker daemon) with a timeout of 'DockerRequestTimeout' for each http request sent to it. If DD does not respond within this time period, SF resends the request if the top level operation still has remaining time. With hyperv containers, DD sometimes takes much more time to bring up the container or deactivate it. In such cases the DD request times out from SF's perspective and SF retries the operation. Sometimes this seems to add more pressure on DD. This config allows you to disable this retry and wait for DD to respond. |
+|DisableLivenessProbes | wstring, default is L"" | Static | Config to disable Liveness probes in cluster. You can specify any non-empty value for SF to disable probes. |
+|DisableReadinessProbes | wstring, default is L"" | Static | Config to disable Readiness probes in cluster. You can specify any non-empty value for SF to disable probes. |
|DnsServerListTwoIps | Bool, default is FALSE | Static | This flags adds the local dns server twice to help alleviate intermittent resolve issues. | | DockerTerminateOnLastHandleClosed | bool, default is TRUE | Static | By default if FabricHost is managing the 'dockerd' (based on: SkipDockerProcessManagement == false) this setting configures what happens when either FabricHost or dockerd crash. When set to `true` if either process crashes all running containers will be forcibly terminated by the HCS. If set to `false` the containers will continue to keep running. Note: Previous to 8.0 this behavior was unintentionally the equivalent of `false`. The default setting of `true` here is what we expect to happen by default moving forward for our cleanup logic to be effective on restart of these processes. | | DoNotInjectLocalDnsServer | bool, default is FALSE | Static | Prevents the runtime to injecting the local IP as DNS server for containers. |
The following is a list of Fabric settings that you can customize, organized by
|ConstraintFixPartialDelayAfterNewNode | Time in seconds, default is 120 |Dynamic| Specify timespan in seconds. Do not fix FaultDomain and UpgradeDomain constraint violations within this period after adding a new node. | |ConstraintFixPartialDelayAfterNodeDown | Time in seconds, default is 120 |Dynamic| Specify timespan in seconds. Do not fix FaultDomain and UpgradeDomain constraint violations within this period after a node down event. | |ConstraintViolationHealthReportLimit | Int, default is 50 |Dynamic| Defines the number of times constraint violating replica has to be persistently unfixed before diagnostics are conducted and health reports are emitted. |
+|DecisionOperationalTracingEnabled | bool, default is FALSE |Dynamic| Config that enables CRM Decision operational structural trace in the event store. |
|DetailedConstraintViolationHealthReportLimit | Int, default is 200 |Dynamic| Defines the number of times constraint violating replica has to be persistently unfixed before diagnostics are conducted and detailed health reports are emitted. | |DetailedDiagnosticsInfoListLimit | Int, default is 15 |Dynamic| Defines the number of diagnostic entries (with detailed information) per constraint to include before truncation in Diagnostics.| |DetailedNodeListLimit | Int, default is 15 |Dynamic| Defines the number of nodes per constraint to include before truncation in the Unplaced Replica reports. |
The following is a list of Fabric settings that you can customize, organized by
|TraceCRMReasons |Bool, default is true |Dynamic|Specifies whether to trace reasons for CRM issued movements to the operational events channel. | |UpgradeDomainConstraintPriority | Int, default is 1| Dynamic|Determines the priority of upgrade domain constraint: 0: Hard; 1: Soft; negative: Ignore. | |UseMoveCostReports | Bool, default is false | Dynamic|Instructs the LB to ignore the cost element of the scoring function; resulting potentially large number of moves for better balanced placement. |
+|UseSeparateAuxiliaryLoad | Bool, default is true | Dynamic|Setting which determines if PLB should use different load for auxiliary on each node. If UseSeparateAuxiliaryLoad is turned off: - Reported load for auxiliary on one node will result in overwriting load for each auxiliary (on all other nodes). If UseSeparateAuxiliaryLoad is turned on: - Reported load for auxiliary on one node will take effect only on that auxiliary (no effect on auxiliaries on other nodes) - If replica crash happens - new replica is created with average load of all the rest of the auxiliaries - If PLB moves existing replica - load goes with it. |
+|UseSeparateAuxiliaryMoveCost | Bool, default is false | Dynamic|Setting which determines if PLB should use different move cost for auxiliary on each node. If UseSeparateAuxiliaryMoveCost is turned off: - Reported move cost for auxiliary on one node will result in overwriting move cost for each auxiliary (on all other nodes). If UseSeparateAuxiliaryMoveCost is turned on: - Reported move cost for auxiliary on one node will take effect only on that auxiliary (no effect on auxiliaries on other nodes) - If replica crash happens - new replica is created with default move cost specified on service level - If PLB moves existing replica - move cost goes with it. |
|UseSeparateSecondaryLoad | Bool, default is true | Dynamic|Setting which determines if separate load should be used for secondary replicas. | |UseSeparateSecondaryMoveCost | Bool, default is true | Dynamic|Setting which determines if PLB should use different move cost for secondary on each node. If UseSeparateSecondaryMoveCost is turned off: - Reported move cost for secondary on one node will result in overwriting move cost for each secondary (on all other nodes). If UseSeparateSecondaryMoveCost is turned on: - Reported move cost for secondary on one node will take effect only on that secondary (no effect on secondaries on other nodes) - If replica crash happens - new replica is created with default move cost specified on service level - If PLB moves existing replica - move cost goes with it. |
The following is a list of Fabric settings that you can customize, organized by
|MaxSecondaryReplicationQueueMemorySize |Uint, default is 0 | Static |This is the maximum value of the secondary replication queue in bytes. | |MaxSecondaryReplicationQueueSize |Uint, default is 16384 | Static |This is the maximum number of operations that could exist in the secondary replication queue. Note that it must be a power of 2. | |ReplicatorAddress |string, default is "localhost:0" | Static | The endpoint in form of a string -'IP:Port' which is used by the Windows Fabric Replicator to establish connections with other replicas in order to send/receive operations. |
+|ShouldAbortCopyForTruncation |bool, default is FALSE | Static | Allow pending log truncation to go through during copy. With this enabled, the copy stage of builds can be canceled if the log is full and the builds are blocking truncation. |
## Transport | **Parameter** | **Allowed Values** |**Upgrade policy** |**Guidance or Short Description** |
static-web-apps Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/configuration.md
Based on the above configuration, review the following scenarios.
## Restrictions
-The following restrictions exist for the _staticwebapps.config.json_ file.
+The following restrictions exist for the _staticwebapp.config.json_ file.
- Max file size is 100 KB - Max of 50 distinct roles
storage File Sync Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/file-sync/file-sync-monitoring.md
The following metrics for Azure File Sync are available in Azure Monitor:
| Metric name | Description | |-|-| | Bytes synced | Size of data transferred (upload and download).<br><br>Unit: Bytes<br>Aggregation Type: Sum<br>Applicable dimensions: Server Endpoint Name, Sync Direction, Sync Group Name |
-| Cloud tiering recall | Size of data recalled.<br><br>**Note**: This metric will be removed in the future. Use the Cloud tiering recall size metric to monitor size of data recalled.<br><br>Unit: Bytes<br>Aggregation Type: Sum<br>Applicable dimension: Server Name |
+| Cloud tiering cache hit rate | Percentage of bytes, not whole files, that have been served from the cache vs. recalled from the cloud.<br><br>Unit: Percentage<br>Aggregation Type: Average<br>Applicable dimensions: Server Endpoint Name, Server Name, Sync Group Name |
| Cloud tiering recall size | Size of data recalled.<br><br>Unit: Bytes<br>Aggregation Type: Sum<br>Applicable dimensions: Server Name, Sync Group Name | | Cloud tiering recall size by application | Size of data recalled by application.<br><br>Unit: Bytes<br>Aggregation Type: Sum<br>Applicable dimensions: Application Name, Server Name, Sync Group Name |
+| Cloud tiering recall success rate | Percentage of recall requests that were successful.<br><br>Unit: Percentage<br>Aggregation Type: Average<br>Applicable dimensions: Server Endpoint Name, Server Name, Sync Group Name |
| Cloud tiering recall throughput | Size of data recall throughput.<br><br>Unit: Bytes<br>Aggregation Type: Sum<br>Applicable dimensions: Server Name, Sync Group Name | | Files not syncing | Count of files that are failing to sync.<br><br>Unit: Count<br>Aggregation Types: Average, Sum<br>Applicable dimensions: Server Endpoint Name, Sync Direction, Sync Group Name | | Files synced | Count of files transferred (upload and download).<br><br>Unit: Count<br>Aggregation Type: Sum<br>Applicable dimensions: Server Endpoint Name, Sync Direction, Sync Group Name |
+| Server cache size | Size of data cached on the server.<br><br>Unit: Bytes<br>Aggregation Type: Average<br>Applicable dimensions: Server Endpoint Name, Server Name, Sync Group Name |
| Server online status | Count of heartbeats received from the server.<br><br>Unit: Count<br>Aggregation Type: Maximum<br>Applicable dimension: Server Name |
| Sync session result | Sync session result (1=successful sync session; 0=failed sync session)<br><br>Unit: Count<br>Aggregation Type: Maximum<br>Applicable dimensions: Server Endpoint Name, Sync Direction, Sync Group Name |
This section provides some example alerts for Azure File Sync.
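For illustration, an alert on the Sync session result metric could be wired up with Azure PowerShell along the following lines. This is a sketch, not the article's exact example: the metric's internal name, the resource ID, and the thresholds are assumptions to adapt.

```powershell
# Sketch: alert when a sync session fails (metric value drops below 1).
# "ServerSyncSessionResult" is assumed to be the internal name of the
# "Sync session result" metric; resource names and IDs are placeholders.
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "ServerSyncSessionResult" `
    -TimeAggregation Maximum -Operator LessThan -Threshold 1

Add-AzMetricAlertRuleV2 -Name "FileSyncSessionFailing" -ResourceGroupName "myResourceGroup" `
    -TargetResourceId "/subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.StorageSync/storageSyncServices/mySyncService" `
    -WindowSize 01:00:00 -Frequency 00:30:00 -Severity 2 -Condition $criteria
```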
- [Planning for an Azure File Sync deployment](file-sync-planning.md)
- [Consider firewall and proxy settings](file-sync-firewall-and-proxy.md)
- [Deploy Azure File Sync](file-sync-deployment-guide.md)
-- [Troubleshoot Azure File Sync](file-sync-troubleshoot.md)
+- [Troubleshoot Azure File Sync](file-sync-troubleshoot.md)
storage Storage Files Identity Ad Ds Configure Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-identity-ad-ds-configure-permissions.md
The following table contains the Azure RBAC permissions related to this configur
| | Read & execute | Read & execute |
| | Read | Read |
| | Write | Write |
-|Storage File Data SMB Share Elevated Contributor | Full control | Modify, Read, Write, Edit, Execute |
+|Storage File Data SMB Share Elevated Contributor | Full control | Modify, Read, Write, Edit (Change permissions), Execute |
| | Modify | Modify |
| | Read & execute | Read & execute |
| | Read | Read |
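A share-level role from this table is assigned like any other Azure role. As a minimal sketch (the sign-in name and scope are placeholders):

```powershell
# Assign the share-level role to a hybrid identity; all names are placeholders.
New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Storage File Data SMB Share Elevated Contributor" `
    -Scope "/subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/fileServices/default/fileshares/myshare"
```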
storage Storage Files Identity Ad Ds Update Password https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-identity-ad-ds-update-password.md
If you registered the Active Directory Domain Services (AD DS) identity/account
To trigger password rotation, you can run the `Update-AzStorageAccountADObjectPassword` command from the [AzFilesHybrid module](https://github.com/Azure-Samples/azure-files-samples/releases). This command must be run in an on-premises AD DS-joined environment using a hybrid user with owner permission to the storage account and AD DS permissions to change the password of the identity representing the storage account. The command performs actions similar to storage account key rotation. Specifically, it gets the second Kerberos key of the storage account, and uses it to update the password of the registered account in AD DS. Then, it regenerates the target Kerberos key of the storage account, and updates the password of the registered account in AD DS.
+To prevent password rotation, during the onboarding of the Azure Storage account in the domain, make sure to place the Azure Storage account into a separate organizational unit in AD DS. Disable Group Policy inheritance on this organizational unit to prevent default domain policies or specific password policies from being applied.
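For example, the OU and the inheritance block could be set up with the ActiveDirectory and GroupPolicy PowerShell modules; a sketch with placeholder names:

```powershell
# Requires the ActiveDirectory and GroupPolicy RSAT modules; names are placeholders.
New-ADOrganizationalUnit -Name "StorageAccounts" -Path "DC=contoso,DC=com"

# Block GPO inheritance so domain password policies don't rotate the account's password.
Set-GPInheritance -Target "OU=StorageAccounts,DC=contoso,DC=com" -IsBlocked Yes
```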
+ ```PowerShell
+ # Update the password of the AD DS account registered for the storage account
+ # You may use either kerb1 or kerb2
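+ # A sketch of the documented invocation from the AzFilesHybrid module;
+ # replace the placeholders with your own values.
+ Update-AzStorageAccountADObjectPassword `
+     -RotateToKerbKey kerb2 `
+     -ResourceGroupName "<your-resource-group-name>" `
+     -StorageAccountName "<your-storage-account-name>"
+ ```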
synapse-analytics Tutorial Automl https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/machine-learning/tutorial-automl.md
If you don't have an Azure subscription, [create a free account before you begin
## Prerequisites
- An [Azure Synapse Analytics workspace](../get-started-create-workspace.md). Ensure that it has an Azure Data Lake Storage Gen2 storage account configured as the default storage. For the Data Lake Storage Gen2 file system that you work with, ensure that you're the *Storage Blob Data Contributor*.
-- An Apache Spark pool in your Azure Synapse Analytics workspace. For details, see [Quickstart: Create a serverless Apache Spark pool using Synapse Studio](../quickstart-create-apache-spark-pool-studio.md).
+- An Apache Spark pool (version 2.4) in your Azure Synapse Analytics workspace. For details, see [Quickstart: Create a serverless Apache Spark pool using Synapse Studio](../quickstart-create-apache-spark-pool-studio.md).
- An Azure Machine Learning linked service in your Azure Synapse Analytics workspace. For details, see [Quickstart: Create a new Azure Machine Learning linked service in Azure Synapse Analytics](quickstart-integrate-azure-machine-learning.md).
## Sign in to the Azure portal
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/get-started.md
ms.assetid: ad8e5c75-0cf6-4564-ae62-ea1246b4e5f2
vm-linux Previously updated : 07/22/2021 Last updated : 07/26/2021
In this section, you can find information on how to configure SSO with most of t
In this section, you find documents about Microsoft Power BI integration into SAP data sources as well as Azure Data Factory integration into SAP BW.
## Change Log
+- July 26, 2021: Change in [Setting up Pacemaker on RHEL in Azure](./high-availability-guide-rhel-pacemaker.md) and [Setting up Pacemaker on SLES in Azure](./high-availability-guide-suse-pacemaker.md) to replace role assignment instructions with links to the RBAC documentation in the sections describing the setup for the Azure Fence Agent
- July 22, 2021: Change in [HA for SAP NW on Azure VMs on RHEL for SAP applications](./high-availability-guide-rhel.md), [HA for SAP NW on Azure VMs on RHEL with ANF](./high-availability-guide-rhel-netapp-files.md) and [HA for SAP NW on Azure VMs on RHEL multi-SID guide](./high-availability-guide-rhel-multi-sid.md) to remove `failure-timeout` for the ASCS cluster resource (ENSA2 only)
- July 16, 2021: Restructuring of the SAP on Azure documentation table of contents (TOC) for more streamlined navigation
- July 2, 2021: Change in [Backup and restore of SAP HANA on HANA Large Instances](./hana-backup-restore.md) to remove duplicate content for the azacsnap tool and backup and restore of HANA Large Instances
virtual-machines High Availability Guide Rhel Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/high-availability-guide-rhel-pacemaker.md
vm-windows Previously updated : 07/02/2021+ Last updated : 07/26/2021
Use the following content for the input file. You need to adapt the content to y
### **[A]** Assign the custom role to the Service Principal
-Assign the custom role "Linux Fence Agent Role" that was created in the last chapter to the Service Principal. Do not use the Owner role anymore!
-
-1. Go to https://portal.azure.com
-1. Open the All resources blade
-1. Select the virtual machine of the first cluster node
-1. Click Access control (IAM)
-1. Click Add role assignment
-1. Select the role "Linux Fence Agent Role"
-1. Enter the name of the application you created above
-1. Click Save
-
-Repeat the steps above for the second cluster node.
-
+Assign the custom role "Linux Fence Agent Role" that was created in the last chapter to the Service Principal. Do not use the Owner role anymore! For detailed steps, see [Assign Azure roles using the Azure portal](../../../role-based-access-control/role-assignments-portal.md).
+Make sure to assign the role for both cluster nodes.
+
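If you prefer scripting the assignment rather than using the portal, a sketch with Azure PowerShell follows; the application ID and VM resource IDs are placeholders:

```powershell
# Assign the custom role to the fence agent's service principal, once per cluster node.
# The application ID and resource IDs below are placeholders.
$scopeNode1 = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<node1-vm>"
$scopeNode2 = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<node2-vm>"

New-AzRoleAssignment -ApplicationId "<app-id>" -RoleDefinitionName "Linux Fence Agent Role" -Scope $scopeNode1
New-AzRoleAssignment -ApplicationId "<app-id>" -RoleDefinitionName "Linux Fence Agent Role" -Scope $scopeNode2
```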
### **[1]** Create the STONITH devices
After you edited the permissions for the virtual machines, you can configure the STONITH devices in the cluster.
virtual-machines High Availability Guide Suse Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker.md
vm-windows Previously updated : 05/13/2021+ Last updated : 07/26/2021
Use the following content for the input file. You need to adapt the content to y
### **[A]** Assign the custom role to the Service Principal
-Assign the custom role "Linux Fence Agent Role" that was created in the last chapter to the Service Principal. Don't use the Owner role anymore!
-
-1. Go to [https://portal.azure.com](https://portal.azure.com)
-1. Open the All resources blade
-1. Select the virtual machine of the first cluster node
-1. Click Access control (IAM)
-1. Click Add role assignment
-1. Select the role "Linux Fence Agent Role"
-1. Enter the name of the application you created above
-1. Click Save
-
-Repeat the steps above for the second cluster node.
+Assign the custom role "Linux Fence Agent Role" that was created in the last chapter to the Service Principal. Do not use the Owner role anymore! For detailed steps, see [Assign Azure roles using the Azure portal](../../../role-based-access-control/role-assignments-portal.md).
+Make sure to assign the role for both cluster nodes.
### **[1]** Create the STONITH devices
virtual-machines Sap Certifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/sap-certifications.md
vm-linux Previously updated : 04/21/2020 Last updated : 07/21/2021
References:
| SAP Product | Supported OS | Azure Offerings |
| --- | --- | --- |
-| SAP HANA Developer Edition (including the HANA client software comprised of SQLODBC, ODBO-Windows only, ODBC, JDBC drivers, HANA studio, and HANA database) | Red Hat Enterprise Linux, SUSE Linux Enterprise | D-Series VM family |
-| Business One on HANA | SUSE Linux Enterprise | DS14_v2, M32ts, M32ls, M64ls, M64s <br /> [SAP HANA Certified IaaS Platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/iaas.html#categories=Microsoft%20Azure%23SAP%20Business%20One) |
-| SAP S/4 HANA | Red Hat Enterprise Linux, SUSE Linux Enterprise | Controlled Availability for GS5. Full support for M64s, M64ms, M128s, M128ms, M64ls, M32ls, M32ts, M208s_v2, M208ms_v2, M416s_v2, M416ms_v2, <br /> SAP HANA on Azure (Large instances) [SAP HANA Certified IaaS Platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/iaas.html#categories=Microsoft%20Azure) |
-| Suite on HANA, OLTP | Red Hat Enterprise Linux, SUSE Linux Enterprise | M64s, M64ms, M128s, M128ms, M64ls, M32ls, M32ts, M208s_v2, M208ms_v2, <br /> M416s_v2, M416ms_v2, SAP HANA on Azure (Large instances) [SAP HANA Certified IaaS Platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/iaas.html#categories=Microsoft%20Azure) |
-| HANA Enterprise for BW, OLAP | Red Hat Enterprise Linux, SUSE Linux Enterprise | GS5, M64s, M64ms, M128s, M128ms, M64ls, M32ls, M32ts, M208s_v2, M208ms_v2, <br /> M416s_v2, M416ms_v2, SAP HANA on Azure (Large instances) [SAP HANA Certified IaaS Platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/iaas.html#categories=Microsoft%20Azure) |
-| SAP BW/4 HANA | Red Hat Enterprise Linux, SUSE Linux Enterprise | GS5, M64s, M64ms, M128s, M128ms, M64ls, M32ls, M32ts, M208s_v2, M208ms_v2, <br /> M416s_v2, M416ms_v2, SAP HANA on Azure (Large instances) <br /> [SAP HANA Certified IaaS Platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/iaas.html#categories=Microsoft%20Azure) |
+| Business One on HANA | SUSE Linux Enterprise | [SAP HANA Certified IaaS Platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=iaas;ve:24;v:120) |
+| SAP S/4 HANA | Red Hat Enterprise Linux, SUSE Linux Enterprise | [SAP HANA Certified IaaS Platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=iaas;ve:24) |
+| Suite on HANA, OLTP | Red Hat Enterprise Linux, SUSE Linux Enterprise | [SAP HANA Certified IaaS Platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=iaas;ve:24;v:125) |
+| HANA Enterprise for BW, OLAP | Red Hat Enterprise Linux, SUSE Linux Enterprise | [SAP HANA Certified IaaS Platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=iaas;ve:24;v:105) |
+| SAP BW/4 HANA | Red Hat Enterprise Linux, SUSE Linux Enterprise | [SAP HANA Certified IaaS Platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=iaas;ve:24;v:105) |
-Be aware that SAP uses the term 'clustering' in [SAP HANA Certified IaaS Platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/iaas.html#categories=Microsoft%20Azure) as synonym for 'scale-out' and NOT for high availability 'clustering'
## SAP NetWeaver certifications
Microsoft Azure is certified for the following SAP products, with full support from Microsoft and SAP.
References:
| SAP Product | Guest OS | RDBMS | Virtual Machine Types |
| --- | --- | --- | --- |
-| SAP Business Suite Software | Windows, SUSE Linux Enterprise, Red Hat Enterprise Linux, Oracle Linux |SQL Server, Oracle (Windows and Oracle Linux only), DB2, SAP ASE |A5 to A11, D11 to D14, DS11 to DS14, DS11_v2 to DS15_v2, GS1 to GS5, D2s_v3 to D64s_v3, D2as_v4 to D64as_v4, E2s_v3 to E64s_v3, E2as_v4 to E64as_v4, M64s, M64ms, M128s, M128ms, M64ls, M32ls, M32ts, M208s_v2, M208ms_v2, M416s_v2, M416ms_v2 |
-| SAP Business All-in-One | Windows, SUSE Linux Enterprise, Red Hat Enterprise Linux, Oracle Linux |SQL Server, Oracle (Windows and Oracle Linux only), DB2, SAP ASE |A5 to A11, D11 to D14, DS11 to DS14, DS11_v2 to DS15_v2, GS1 to GS5, D2s_v3 to D64s_v3, D2as_v4 to D64as_v4, E2s_v3 to E64s_v3, E2as_v4 to E64as_v4, M64s, M64ms, M128s, M128ms, M64ls, M32ls, M32ts, M208s_v2, M208ms_v2, M416s_v2, M416ms_v2 |
-| SAP BusinessObjects BI | Windows |N/A |A5 to A11, D11 to D14, DS11 to DS14, DS11_v2 to DS15_v2, GS1 to GS5, D2s_v3 to D64s_v3, D2as_v4 to D64as_v4, E2s_v3 to E64s_v3, E2as_v4 to E64as_v4, M64s, M64ms, M128s, M128ms, M64ls, M32ls, M32ts, M208s_v2, M208ms_v2, M416s_v2, M416ms_v2 |
-| SAP NetWeaver | Windows, SUSE Linux Enterprise, Red Hat Enterprise Linux, Oracle Linux |SQL Server, Oracle (Windows and Oracle Linux only), DB2, SAP ASE |A5 to A11, D11 to D14, DS11 to DS14, DS11_v2 to DS15_v2, GS1 to GS5, D2s_v3 to D64s_v3, D2as_v4 to D64as_v4, E2s_v3 to E64s_v3, E2as_v4 to E64as_v4, M64s, M64ms, M128s, M128ms, M64ls, M32ls, M32ts, M208s_v2, M208ms_v2, M416s_v2, M416ms_v2 |
+| SAP Business Suite Software | Windows, SUSE Linux Enterprise, Red Hat Enterprise Linux, Oracle Linux |SQL Server, Oracle (Windows and Oracle Linux only), DB2, SAP ASE |A5 to A11, D11 to D14, DS11 to DS14, DS11_v2 to DS15_v2, GS1 to GS5, D2s_v3 to D64s_v3, D2as_v4 to D64as_v4, E2s_v3 to E64s_v3, E2as_v4 to E64as_v4, M64s, M64ms, M128s, M128ms, M64ls, M32ls, M32ts, M208s_v2, M208ms_v2, M416s_v2, M416ms_v2, M32(d)ms_v2, M64(d)s_v2, M64(d)ms_v2, M128(d)s_v2, M128(d)ms_v2, M192i(d)s_v2, M192i(d)ms_v2 |
+| SAP Business All-in-One | Windows, SUSE Linux Enterprise, Red Hat Enterprise Linux, Oracle Linux |SQL Server, Oracle (Windows and Oracle Linux only), DB2, SAP ASE |A5 to A11, D11 to D14, DS11 to DS14, DS11_v2 to DS15_v2, GS1 to GS5, D2s_v3 to D64s_v3, D2as_v4 to D64as_v4, E2s_v3 to E64s_v3, E2as_v4 to E64as_v4, M64s, M64ms, M128s, M128ms, M64ls, M32ls, M32ts, M208s_v2, M208ms_v2, M416s_v2, M416ms_v2, M32(d)ms_v2, M64(d)s_v2, M64(d)ms_v2, M128(d)s_v2, M128(d)ms_v2, M192i(d)s_v2, M192i(d)ms_v2 |
+| SAP BusinessObjects BI | Windows |N/A |A5 to A11, D11 to D14, DS11 to DS14, DS11_v2 to DS15_v2, GS1 to GS5, D2s_v3 to D64s_v3, D2as_v4 to D64as_v4, E2s_v3 to E64s_v3, E2as_v4 to E64as_v4, M64s, M64ms, M128s, M128ms, M64ls, M32ls, M32ts, M208s_v2, M208ms_v2, M416s_v2, M416ms_v2, M32(d)ms_v2, M64(d)s_v2, M64(d)ms_v2, M128(d)s_v2, M128(d)ms_v2, M192i(d)s_v2, M192i(d)ms_v2 |
+| SAP NetWeaver | Windows, SUSE Linux Enterprise, Red Hat Enterprise Linux, Oracle Linux |SQL Server, Oracle (Windows and Oracle Linux only), DB2, SAP ASE |A5 to A11, D11 to D14, DS11 to DS14, DS11_v2 to DS15_v2, GS1 to GS5, D2s_v3 to D64s_v3, D2as_v4 to D64as_v4, E2s_v3 to E64s_v3, E2as_v4 to E64as_v4, M64s, M64ms, M128s, M128ms, M64ls, M32ls, M32ts, M208s_v2, M208ms_v2, M416s_v2, M416ms_v2, M32(d)ms_v2, M64(d)s_v2, M64(d)ms_v2, M128(d)s_v2, M128(d)ms_v2, M192i(d)s_v2, M192i(d)ms_v2 |
## Other SAP Workload supported on Azure
virtual-wan Scenario Isolate Virtual Networks Branches https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-wan/scenario-isolate-virtual-networks-branches.md
Title: 'Scenario: Custom isolation for virtual networks and branches'
description: Learn about Virtual WAN routing scenarios to prevent selected VNets and branches from being able to reach each other.
Last updated 04/27/2021
# Scenario: Custom Isolation for Virtual Networks and Branches
When working with Virtual WAN virtual hub routing, there are quite a few available scenarios. In a custom isolation scenario for both Virtual Networks (VNets) and branches, the goal is to prevent a specific set of VNets from reaching another set of VNets. Likewise, branches (VPN/ER/User VPN) are only allowed to reach certain sets of VNets.
-We also introduce the additional requirement that Azure Firewall should inspect branch-to-VNet and Branch-to VNet-traffic, but **not** VNet-to VNet-traffic.
+We also introduce the additional requirement that Azure Firewall should inspect branch-to-VNet and VNet-to-branch traffic, but **not** VNet-to-VNet traffic.
For more information about virtual hub routing, see [About virtual hub routing](about-virtual-hub-routing.md).
vpn-gateway Active Active Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/active-active-portal.md
Create a virtual network gateway using the following values:
[!INCLUDE [Create a vpn gateway](../../includes/vpn-gateway-add-gw-portal-include.md)]
[!INCLUDE [Configure PIP settings](../../includes/vpn-gateway-add-gw-pip-active-portal-include.md)]
-A gateway can take up to 45 minutes to fully create and deploy. You can see the deployment status on the Overview page for your gateway. After the gateway is created, you can view the IP address that has been assigned to it by looking at the virtual network in the portal. The gateway appears as a connected device.
+You can see the deployment status on the Overview page for your gateway. After the gateway is created, you can view the IP address that has been assigned to it by looking at the virtual network in the portal. The gateway appears as a connected device.
[!INCLUDE [NSG warning](../../includes/vpn-gateway-no-nsg-include.md)]
vpn-gateway Bgp How To Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/bgp-how-to-cli.md
az network public-ip create -n GWPubIP -g TestBGPRG1 --allocation-method Dynamic
#### 2. Create the VPN gateway with the AS number
-Create the virtual network gateway for TestVNet1. BGP requires a Route-Based VPN gateway. You also need the additional parameter `-Asn` to set the autonomous system number (ASN) for TestVNet1. Creating a gateway can take a while (45 minutes or more) to complete.
+Create the virtual network gateway for TestVNet1. BGP requires a Route-Based VPN gateway. You also need the additional parameter `-Asn` to set the autonomous system number (ASN) for TestVNet1. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU.
If you run this command by using the `--no-wait` parameter, you don't see any feedback or output. The `--no-wait` parameter allows the gateway to be created in the background. It does not mean that the VPN gateway is created immediately.
vpn-gateway Bgp Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/bgp-howto.md
Previously updated : 03/22/2021 Last updated : 07/26/2021
In this step, you create a VPN gateway with the corresponding BGP parameters.
> * When APIPA addresses are used on Azure VPN gateways, the gateways do not initiate BGP peering sessions with APIPA source IP addresses. The on-premises VPN device must initiate BGP peering connections. >
-1. Select **Review + create** to run validation. Once validation passes, select **Create** to deploy the VPN gateway. A gateway can take up to 45 minutes to fully create and deploy. You can see the deployment status on the Overview page for your gateway.
+1. Select **Review + create** to run validation. Once validation passes, select **Create** to deploy the VPN gateway. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. You can see the deployment status on the Overview page for your gateway.
### 3. Obtain the Azure BGP Peer IP addresses
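Once the gateway is deployed, one way to read these values is from the gateway's BGP settings with Azure PowerShell; the gateway and resource group names below are placeholders:

```powershell
# Read the BGP peering configuration from the deployed gateway (names are placeholders).
$gw = Get-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "TestRG1"
$gw.BgpSettings | Format-List Asn, BgpPeeringAddress, PeerWeight
```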
vpn-gateway Create Routebased Vpn Gateway Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/create-routebased-vpn-gateway-cli.md
This article helps you quickly create a route-based Azure VPN gateway using the Azure CLI. A VPN gateway is used when creating a VPN connection to your on-premises network. You can also use a VPN gateway to connect VNets.
-The steps in this article will create a VNet, a subnet, a gateway subnet, and a route-based VPN gateway (virtual network gateway). A virtual network gateway can take 45 minutes or more to create. Once the gateway creation has completed, you can then create connections. These steps require an Azure subscription.
+The steps in this article will create a VNet, a subnet, a gateway subnet, and a route-based VPN gateway (virtual network gateway). Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. Once the gateway creation has completed, you can then create connections. These steps require an Azure subscription.
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
vpn-gateway Create Routebased Vpn Gateway Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/create-routebased-vpn-gateway-powershell.md
$gwipconfig = New-AzVirtualNetworkGatewayIpConfig -Name gwipconfig1 -SubnetId $s
```
## <a name="CreateGateway"></a>Create the VPN gateway
-A VPN gateway can take 45 minutes or more to create. Once the gateway has completed, you can create a connection between your virtual network and another VNet. Or, create a connection between your virtual network and an on-premises location. Create a VPN gateway using the [New-AzVirtualNetworkGateway](/powershell/module/az.network/New-azVirtualNetworkGateway) cmdlet.
+Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. Once the gateway has completed, you can create a connection between your virtual network and another VNet. Or, create a connection between your virtual network and an on-premises location. Create a VPN gateway using the [New-AzVirtualNetworkGateway](/powershell/module/az.network/New-azVirtualNetworkGateway) cmdlet.
```azurepowershell-interactive
New-AzVirtualNetworkGateway -Name VNet1GW -ResourceGroupName TestRG1 `
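-Location "East US" -IpConfigurations $gwipconfig `
-GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1
# The continuation above is a sketch: $gwipconfig comes from the earlier step,
# and the location and SKU are typical quickstart values to adapt to your environment.
```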
vpn-gateway Howto Point To Site Multi Auth https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/howto-point-to-site-multi-auth.md
In this step, you create the virtual network gateway for your VNet. Creating a g
[!INCLUDE [Create a vpn gateway](../../includes/vpn-gateway-add-gw-portal-include.md)]
[!INCLUDE [Configure PIP settings](../../includes/vpn-gateway-add-gw-pip-portal-include.md)]
-You can see the deployment status on the Overview page for your gateway. A gateway can take up to 45 minutes to fully create and deploy. After the gateway is created, you can view the IP address that has been assigned to it by looking at the virtual network in the portal. The gateway appears as a connected device.
+You can see the deployment status on the Overview page for your gateway. A gateway can often take 45 minutes or more to fully create and deploy. After the gateway is created, you can view the IP address that has been assigned to it by looking at the virtual network in the portal. The gateway appears as a connected device.
[!INCLUDE [NSG warning](../../includes/vpn-gateway-no-nsg-include.md)]
vpn-gateway Point To Site How To Radius Ps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/point-to-site-how-to-radius-ps.md
The [Network Policy Server (NPS)](/windows-server/networking/technologies/nps/np
Configure and create the VPN gateway for your VNet.
* The -GatewayType must be 'Vpn' and the -VpnType must be 'RouteBased'.
-* A VPN gateway can take up to 45 minutes to complete, depending on the [gateway SKU](vpn-gateway-about-vpn-gateway-settings.md#gwsku) you select.
+* A VPN gateway can take 45 minutes or more to complete, depending on the [gateway SKU](vpn-gateway-about-vpn-gateway-settings.md#gwsku) you select.
```azurepowershell-interactive
New-AzVirtualNetworkGateway -Name $GWName -ResourceGroupName $RG `
vpn-gateway Tutorial Create Gateway Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/tutorial-create-gateway-portal.md
Create a virtual network gateway using the following values:
[!INCLUDE [Create a vpn gateway](../../includes/vpn-gateway-add-gw-portal-include.md)]
[!INCLUDE [Configure PIP settings](../../includes/vpn-gateway-add-gw-pip-portal-include.md)]
-A gateway can take up to 45 minutes to fully create and deploy. You can see the deployment status on the Overview page for your gateway. After the gateway is created, you can view the IP address that has been assigned to it by looking at the virtual network in the portal. The gateway appears as a connected device.
+A gateway can take 45 minutes or more to fully create and deploy. You can see the deployment status on the Overview page for your gateway. After the gateway is created, you can view the IP address that has been assigned to it by looking at the virtual network in the portal. The gateway appears as a connected device.
[!INCLUDE [NSG warning](../../includes/vpn-gateway-no-nsg-include.md)]
vpn-gateway Vpn Gateway About Vpn Gateway Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md
Previously updated : 04/28/2021 Last updated : 07/26/2021
az network vnet-gateway create --name VNet1GW --public-ip-address VNet1GWPIP --r
### <a name="resizechange"></a>Resizing or changing a SKU
-If you have a VPN gateway and you want to use a different gateway SKU, your options are to either resize your gateway SKU, or to change to another SKU. When you change to another gateway SKU, you delete the existing gateway entirely and build a new one. A gateway can take up to 45 minutes to build. In comparison, when you resize a gateway SKU, there is not much downtime because you do not have to delete and rebuild the gateway. If you have the option to resize your gateway SKU, rather than change it, you will want to do that. However, there are rules regarding resizing:
+If you have a VPN gateway and you want to use a different gateway SKU, your options are to either resize your gateway SKU, or to change to another SKU. When you change to another gateway SKU, you delete the existing gateway entirely and build a new one. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. In comparison, when you resize a gateway SKU, there is not much downtime because you do not have to delete and rebuild the gateway. If you have the option to resize your gateway SKU, rather than change it, you will want to do that. However, there are rules regarding resizing:
1. With the exception of the Basic SKU, you can resize a VPN gateway SKU to another VPN gateway SKU within the same generation (Generation1 or Generation2). For example, VpnGw1 of Generation1 can be resized to VpnGw2 of Generation1 but not to VpnGw2 of Generation2.
2. When working with the old gateway SKUs, you can resize between Basic, Standard, and HighPerformance SKUs.
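Where a resize is allowed by these rules, it happens in place. A minimal sketch with placeholder names:

```powershell
# Resize an existing gateway to a larger SKU in the same generation (names are placeholders).
$gw = Get-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "TestRG1"
Resize-AzVirtualNetworkGateway -VirtualNetworkGateway $gw -GatewaySku VpnGw2
```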
vpn-gateway Vpn Gateway About Vpngateways https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-about-vpngateways.md
A virtual network gateway is composed of two or more VMs that are deployed to a
When you configure a virtual network gateway, you configure a setting that specifies the gateway type. The gateway type determines how the virtual network gateway will be used and the actions that the gateway takes. The gateway type 'Vpn' specifies that the type of virtual network gateway created is a 'VPN gateway'. This distinguishes it from an ExpressRoute gateway, which uses a different gateway type. A virtual network can have two virtual network gateways; one VPN gateway and one ExpressRoute gateway. For more information, see [Gateway types](vpn-gateway-about-vpn-gateway-settings.md#gwtype).
-Creating a virtual network gateway can take up to 45 minutes to complete. When you create a virtual network gateway, gateway VMs are deployed to the gateway subnet and configured with the settings that you specify. After you create a VPN gateway, you can create an IPsec/IKE VPN tunnel connection between that VPN gateway and another VPN gateway (VNet-to-VNet), or create a cross-premises IPsec/IKE VPN tunnel connection between the VPN gateway and an on-premises VPN device (Site-to-Site). You can also create a Point-to-Site VPN connection (VPN over OpenVPN, IKEv2, or SSTP), which lets you connect to your virtual network from a remote location, such as from a conference or from home.
+Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. When you create a virtual network gateway, gateway VMs are deployed to the gateway subnet and configured with the settings that you specify. After you create a VPN gateway, you can create an IPsec/IKE VPN tunnel connection between that VPN gateway and another VPN gateway (VNet-to-VNet), or create a cross-premises IPsec/IKE VPN tunnel connection between the VPN gateway and an on-premises VPN device (Site-to-Site). You can also create a Point-to-Site VPN connection (VPN over OpenVPN, IKEv2, or SSTP), which lets you connect to your virtual network from a remote location, such as from a conference or from home.
## <a name="configuring"></a>Configuring a VPN Gateway
vpn-gateway Vpn Gateway Activeactive Rm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-activeactive-rm-powershell.md
The other properties are the same as the non-active-active gateways.
### Before you begin
* Verify that you have an Azure subscription. If you don't already have an Azure subscription, you can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) or sign up for a [free account](https://azure.microsoft.com/pricing/free-trial/).
-* You'll need to install the Azure Resource Manager PowerShell cmdlets if you don't want to use CloudShell in your browser. See [Overview of Azure PowerShell](/powershell/azure/) for more information about installing the PowerShell cmdlets.
+* You'll need to install the Azure Resource Manager PowerShell cmdlets if you don't want to use Cloud Shell in your browser. See [Overview of Azure PowerShell](/powershell/azure/) for more information about installing the PowerShell cmdlets.
### Step 1 - Create and configure VNet1
#### 1. Declare your variables
$gw1ipconf2 = New-AzVirtualNetworkGatewayIpConfig -Name $GW1IPconf2 -Subnet $sub
```
#### 2. Create the VPN gateway with active-active configuration
-Create the virtual network gateway for TestVNet1. Note that there are two GatewayIpConfig entries, and the EnableActiveActiveFeature flag is set. Creating a gateway can take a while (45 minutes or more to complete).
+Create the virtual network gateway for TestVNet1. Note that there are two GatewayIpConfig entries, and the EnableActiveActiveFeature flag is set. Creating a gateway can take a while (45 minutes or more to complete, depending on the selected SKU).
```azurepowershell-interactive
New-AzVirtualNetworkGateway -Name $GWName1 -ResourceGroupName $RG1 -Location $Location1 -IpConfigurations $gw1ipconf1,$gw1ipconf2 -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1 -Asn $VNet1ASN -EnableActiveActiveFeature -Debug
vpn-gateway Vpn Gateway Create Site To Site Rm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md
New-AzVirtualNetworkGateway -Name VNet1GW -ResourceGroupName TestRG1 `
-VpnType RouteBased -GatewaySku VpnGw1
```
-After running this command, it can take up to 45 minutes for the gateway configuration to complete.
+Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU.
## <a name="ConfigureVPNDevice"></a>6. Configure your VPN device
vpn-gateway Vpn Gateway Forced Tunneling Rm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-forced-tunneling-rm.md
Install the latest version of the Azure Resource Manager PowerShell cmdlets. See
Set-AzVirtualNetworkSubnetConfig -Name "Backend" -VirtualNetwork $vnet -AddressPrefix "10.1.2.0/24" -RouteTable $rt
Set-AzVirtualNetwork -VirtualNetwork $vnet
```
-6. Create the virtual network gateway. This step takes some time to complete, sometimes 45 minutes or more, because you are creating and configuring the gateway. If you see ValidateSet errors regarding the GatewaySKU value, verify that you have installed the [latest version of the PowerShell cmdlets](#before). The latest version of the PowerShell cmdlets contains the new validated values for the latest Gateway SKUs.
+6. Create the virtual network gateway. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. If you see ValidateSet errors regarding the GatewaySKU value, verify that you have installed the [latest version of the PowerShell cmdlets](#before). The latest version of the PowerShell cmdlets contains the new validated values for the latest Gateway SKUs.
```powershell
$pip = New-AzPublicIpAddress -Name "GatewayIP" -ResourceGroupName "ForcedTunneling" -Location "North Europe" -AllocationMethod Dynamic
vpn-gateway Vpn Gateway Howto Point To Site Resource Manager Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md
In this step, you create the virtual network gateway for your VNet. Creating a g
[!INCLUDE [Create a vpn gateway](../../includes/vpn-gateway-add-gw-portal-include.md)]
[!INCLUDE [Configure PIP settings](../../includes/vpn-gateway-add-gw-pip-portal-include.md)]
-You can see the deployment status on the Overview page for your gateway. A gateway can take up to 45 minutes to fully create and deploy. After the gateway is created, you can view the IP address that has been assigned to it by looking at the virtual network in the portal. The gateway appears as a connected device.
+You can see the deployment status on the Overview page for your gateway. After the gateway is created, you can view the IP address that has been assigned to it by looking at the virtual network in the portal. The gateway appears as a connected device.
[!INCLUDE [NSG warning](../../includes/vpn-gateway-no-nsg-include.md)]
vpn-gateway Vpn Gateway Howto Point To Site Rm Ps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md
In this step, you configure and create the virtual network gateway for your VNet
* The -GatewayType must be **Vpn** and the -VpnType must be **RouteBased**.
* The -VpnClientProtocol is used to specify the types of tunnels that you would like to enable. The tunnel options are **OpenVPN, SSTP**, and **IKEv2**. You can choose to enable one of them or any supported combination. If you want to enable multiple types, then specify the names separated by a comma. OpenVPN and SSTP cannot be enabled together. The strongSwan client on Android and Linux and the native IKEv2 VPN client on iOS and macOS will use only the IKEv2 tunnel to connect. Windows clients try IKEv2 first and if that doesn't connect, they fall back to SSTP. You can use the OpenVPN client to connect to the OpenVPN tunnel type.
* The virtual network gateway 'Basic' SKU does not support IKEv2, OpenVPN, or RADIUS authentication. If you are planning on having Mac clients connect to your virtual network, do not use the Basic SKU.
-* A VPN gateway can take up to 45 minutes to complete, depending on the [gateway sku](vpn-gateway-about-vpn-gateway-settings.md) you select. This example uses IKEv2.
+* A VPN gateway can take 45 minutes or more to complete, depending on the [gateway sku](vpn-gateway-about-vpn-gateway-settings.md) you select.
1. Configure and create the virtual network gateway for your VNet. It can take 45 minutes or more to create the gateway.
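As an illustration, the creation step with IKEv2 enabled might look like the following sketch; the variables and SKU are placeholders rather than the article's exact script:

```powershell
# Sketch of gateway creation with the IKEv2 client protocol enabled.
# $GWName, $RG, $Location, and $ipconf are placeholders for your own values.
New-AzVirtualNetworkGateway -Name $GWName -ResourceGroupName $RG `
    -Location $Location -IpConfigurations $ipconf `
    -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1 `
    -VpnClientProtocol "IKEv2"
```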
vpn-gateway Vpn Gateway Howto Site To Site Resource Manager Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-howto-site-to-site-resource-manager-cli.md
Previously updated : 10/23/2020 Last updated : 07/26/2021
az network public-ip create --name VNet1GWIP --resource-group TestRG1 --allocati
## <a name="CreateGateway"></a>7. Create the VPN gateway
-Create the virtual network VPN gateway. Creating a VPN gateway can take up to 45 minutes or more to complete.
+Create the virtual network VPN gateway. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU.
Use the following values:
* The *--vpn-type* can be *RouteBased* (referred to as a Dynamic Gateway in some documentation), or *PolicyBased* (referred to as a Static Gateway in some documentation). The setting is specific to requirements of the device that you are connecting to. For more information about VPN gateway types, see [About VPN Gateway configuration settings](vpn-gateway-about-vpn-gateway-settings.md#vpntype).
* Select the Gateway SKU that you want to use. There are configuration limitations for certain SKUs. For more information, see [Gateway SKUs](vpn-gateway-about-vpn-gateway-settings.md#gwsku).
-Create the VPN gateway using the [az network vnet-gateway create](/cli/azure/network/vnet-gateway) command. If you run this command using the '--no-wait' parameter, you don't see any feedback or output. This parameter allows the gateway to create in the background. It takes around 45 minutes to create a gateway.
+Create the VPN gateway using the [az network vnet-gateway create](/cli/azure/network/vnet-gateway) command. If you run this command using the '--no-wait' parameter, you don't see any feedback or output. This parameter allows the gateway to create in the background. It takes 45 minutes or more to create a gateway.
```azurecli-interactive
az network vnet-gateway create --name VNet1GW --public-ip-address VNet1GWIP --resource-group TestRG1 --vnet TestVNet1 --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1 --no-wait
vpn-gateway Vpn Gateway Howto Vnet Vnet Resource Manager Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md
In this step, you create the virtual network gateway for your VNet. Creating a g
[!INCLUDE [Create a vpn gateway](../../includes/vpn-gateway-add-gw-portal-include.md)]
[!INCLUDE [Configure PIP settings](../../includes/vpn-gateway-add-gw-pip-portal-include.md)]
-You can see the deployment status on the Overview page for your gateway. A gateway can take up to 45 minutes to fully create and deploy. After the gateway is created, you can view the IP address that has been assigned to it by looking at the virtual network in the portal. The gateway appears as a connected device.
+You can see the deployment status on the Overview page for your gateway. A gateway can take 45 minutes or more to fully create and deploy. After the gateway is created, you can view the IP address that has been assigned to it by looking at the virtual network in the portal. The gateway appears as a connected device.
[!INCLUDE [NSG warning](../../includes/vpn-gateway-no-nsg-include.md)]
vpn-gateway Vpn Gateway Vnet Vnet Rm Ps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-vnet-vnet-rm-ps.md
For this exercise, you can combine configurations, or just choose the one that y
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-* Because it takes up to 45 minutes to create a gateway, Azure Cloud Shell will timeout periodically during this exercise. You can restart Cloud Shell by clicking in the upper left of the terminal. Be sure to redeclare any variables when you restart the terminal.
+* Because it takes 45 minutes or more to create a gateway, Azure Cloud Shell will time out periodically during this exercise. You can restart Cloud Shell by clicking in the upper left of the terminal. Be sure to redeclare any variables when you restart the terminal.
* If you would rather install latest version of the Azure PowerShell module locally, see [How to install and configure Azure PowerShell](/powershell/azure/).
We use the following values in the examples:
-VpnType RouteBased -GatewaySku VpnGw1
```
-After you finish the commands, it will take up to 45 minutes to create this gateway. If you are using Azure Cloud Shell, you can restart your Cloud Shell session by clicking in the upper left of the Cloud Shell terminal, then configure TestVNet4. You don't need to wait until the TestVNet1 gateway completes.
+After you finish the commands, it will take 45 minutes or more to create this gateway. If you are using Azure Cloud Shell, you can restart your Cloud Shell session by clicking in the upper left of the Cloud Shell terminal, then configure TestVNet4. You don't need to wait until the TestVNet1 gateway completes.
### Step 3 - Create and configure TestVNet4