Updates from: 07/03/2021 03:04:52
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/azure-monitor.md
The workbook will display reports in the form of a dashboard.
Alerts are created by alert rules in Azure Monitor and can automatically run saved queries or custom log searches at regular intervals. You can create alerts based on specific performance metrics, or when certain events occur, an event is absent, or a number of events occur within a particular time window. For example, alerts can be used to notify you when the average number of sign-ins exceeds a certain threshold. For more information, see [Create alerts](../azure-monitor/alerts/alerts-log.md).
-Use the following instructions to create a new Azure Alert, which will send an [email notification](../azure-monitor/alerts/action-groups.md#configure-notifications) whenever there is a 25% drop in the **Total Requests** compare to previous period. Alert will run every 5 minutes and look for the drop within last 24 hours windows. The alerts are created using Kusto query language.
+Use the following instructions to create a new Azure Alert, which will send an [email notification](../azure-monitor/alerts/action-groups.md#configure-notifications) whenever there is a 25% drop in the **Total Requests** compared to the previous period. The alert will run every 5 minutes and look for the drop in the last hour compared to the hour before that. The alerts are created using the Kusto query language.
1. From **Log Analytics workspace**, select **Logs**. 1. Create a new **Kusto query** by using the query below. ```kusto
- let start = ago(24h);
+ let start = ago(2h);
let end = now(); let threshold = -25; //25% decrease in total requests. AuditLogs | serialize TimeGenerated, CorrelationId, Result
- | make-series TotalRequests=dcount(CorrelationId) on TimeGenerated in range(start, end, 1h)
+ | make-series TotalRequests=dcount(CorrelationId) on TimeGenerated from start to end step 1h
| mvexpand TimeGenerated, TotalRequests
- | where TotalRequests > 0
- | serialize TotalRequests, TimeGenerated, TimeGeneratedFormatted=format_datetime(todatetime(TimeGenerated), 'yyyy-M-dd [hh:mm:ss tt]')
+ | serialize TotalRequests, TimeGenerated, TimeGeneratedFormatted=format_datetime(todatetime(TimeGenerated), 'yyyy-MM-dd [HH:mm:ss]')
| project TimeGeneratedFormatted, TotalRequests, PercentageChange= ((toreal(TotalRequests) - toreal(prev(TotalRequests,1)))/toreal(prev(TotalRequests,1)))*100
- | order by TimeGeneratedFormatted
+ | order by TimeGeneratedFormatted desc
| where PercentageChange <= threshold //Triggers the alert rule if matched. ```
-1. Select **Run**, to test the query. You should see the results if there is a drop of 25% or more in the total requests within the past 24 hours.
+1. Select **Run** to test the query. You should see results if there is a drop of 25% or more in the total requests within the past hour.
1. To create an alert rule based on the query above, use the **+ New alert rule** option available in the toolbar. 1. On the **Create an alert rule** page, select **Condition name**. 1. On the **Configure signal logic** page, set the following values, and then use the **Done** button to save the changes. * Alert logic: Set **Number of results** **Greater than** **0**.
- * Evaluation based on: Select **1440** for Period (in minutes) and **5** for Frequency (in minutes)
+ * Evaluation based on: Select **120** for Period (in minutes) and **5** for Frequency (in minutes)
![Create an alert rule condition](./media/azure-monitor/alert-create-rule-condition.png)
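For reference, here is the full updated query assembled from the changed lines above (a consolidated sketch of the post-change query, using only the content shown in this section):

```kusto
let start = ago(2h);
let end = now();
let threshold = -25; //25% decrease in total requests.
AuditLogs
| serialize TimeGenerated, CorrelationId, Result
| make-series TotalRequests=dcount(CorrelationId) on TimeGenerated from start to end step 1h
| mvexpand TimeGenerated, TotalRequests
| serialize TotalRequests, TimeGenerated, TimeGeneratedFormatted=format_datetime(todatetime(TimeGenerated), 'yyyy-MM-dd [HH:mm:ss]')
| project TimeGeneratedFormatted, TotalRequests, PercentageChange= ((toreal(TotalRequests) - toreal(prev(TotalRequests,1)))/toreal(prev(TotalRequests,1)))*100
| order by TimeGeneratedFormatted desc
| where PercentageChange <= threshold //Triggers the alert rule if matched.
```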
active-directory-b2c B2clogin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/b2clogin.md
When you set up an identity provider for sign-up and sign-in in your Azure Active Directory B2C (Azure AD B2C) application, you need to specify a redirect URL. You should no longer reference *login.microsoftonline.com* in your applications and APIs for authenticating users with Azure AD B2C. Instead, use *b2clogin.com* for all new applications, and migrate existing applications from *login.microsoftonline.com* to *b2clogin.com*.
-## Deprecation of login.microsoftonline.com
-
-**October 2020 update:** We're extending a grace period for tenants who are unable to meet the originally announced deprecation date of 04 December 2020. Retirement of login.microsoftonline.com will now occur no earlier than **14 January 2021.**
-
-**Background**: On 04 December 2019, we originally [announced](https://azure.microsoft.com/updates/b2c-deprecate-msol/) the scheduled retirement of login.microsoftonline.com support in Azure AD B2C on 04 December 2020. This provided existing tenants one (1) year to migrate to b2clogin.com. New tenants created after 04 December 2019 will not accept requests from login.microsoftonline.com. All functionality remains the same on the b2clogin.com endpoint.
-
-The deprecation of login.microsoftonline.com does not impact Azure Active Directory tenants. Only Azure Active Directory B2C tenants are affected by this change.
- ## What endpoints does this apply to The transition to b2clogin.com only applies to authentication endpoints that use Azure AD B2C policies (user flows or custom policies) to authenticate users. These endpoints have a `<policy-name>` parameter which specifies the policy Azure AD B2C should use. [Learn more about Azure AD B2C policies](technical-overview.md#identity-experiences-user-flows-or-custom-policies).
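For illustration, here's a hedged sketch of the endpoint change for the authorization endpoint, assuming a hypothetical tenant named *contoso* and a user flow named *B2C_1_signupsignin1* (your tenant and policy names will differ):

```http
https://login.microsoftonline.com/contoso.onmicrosoft.com/oauth2/v2.0/authorize?p=B2C_1_signupsignin1
https://contoso.b2clogin.com/contoso.onmicrosoft.com/oauth2/v2.0/authorize?p=B2C_1_signupsignin1
```

The first URL is the legacy *login.microsoftonline.com* form; the second is the equivalent *b2clogin.com* form. The `p` query parameter carries the `<policy-name>` value mentioned above.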
active-directory-b2c Partner Trusona https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-trusona.md
You should now see Trusona as a **new OpenID Connect Identity Provider** listed
1. Select **OK**.
-### Test the Policy
+### Test the policy
-1. Select your newly created policy.
+1. Select the policy you created.
-2. Select **Run user flow**.
+1. Select **Run user flow**, and then select the settings:
-3. In the form, enter the Replying URL.
+ 1. **Application**: Select the registered app.
+
+ 1. **Reply URL**: Select the redirect URL.
+
+1. Select **Run user flow**. You should be redirected to the Trusona OIDC gateway. On the Trusona gateway, scan the displayed Secure QR code with the Trusona app or with a custom app using the Trusona mobile SDK.
-4. Select **Run user flow**. You should be redirected to the Trusona OIDC gateway. On the Trusona gateway, scan the displayed Secure QR code with the Trusona app or with a custom app using the Trusona mobile SDK.
-
-5. After scanning the Secure QR code, you should be redirected to the Reply URL you defined in step 3.
+1. After you scan the Secure QR code, you should be redirected to the Reply URL you defined.
## Next steps
active-directory Functions For Customizing Application Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/functions-for-customizing-application-data.md
Previously updated : 05/11/2021 Last updated : 07/02/2021
ToLower(source, culture)
**Description:** Takes a *source* string value and converts it to lower case using the culture rules that are specified. If there is no *culture* info specified, then it will use the invariant culture.
+If you would like to set existing values in the target system to lower case, [update the schema for your target application](./customize-application-attributes.md#editing-the-list-of-supported-attributes) and set the `caseExact` property to 'true' for the attribute that you are interested in.
+ **Parameters:** | Name | Required/ Repeating | Type | Notes |
ToUpper(source, culture)
**Description:** Takes a *source* string value and converts it to upper case using the culture rules that are specified. If there is no *culture* info specified, then it will use the invariant culture.
+If you would like to set existing values in the target system to upper case, [update the schema for your target application](./customize-application-attributes.md#editing-the-list-of-supported-attributes) and set the `caseExact` property to 'true' for the attribute that you are interested in.
+ **Parameters:** | Name | Required/ Repeating | Type | Notes |
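As a hedged illustration only (the attribute names `mail` and `surname` are assumptions, and the optional *culture* parameter is omitted so the invariant culture applies), these functions might appear in an attribute-mapping expression like this:

```
ToLower([mail])
ToUpper([surname])
```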
active-directory On Premises Scim Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/on-premises-scim-provisioning.md
Previously updated : 05/28/2021 Last updated : 07/01/2021
The Azure AD provisioning service supports a [SCIM 2.0](https://techcommunity.mi
## Steps for on-premises app provisioning to SCIM-enabled apps Use the steps below to provision to SCIM-enabled apps.
- 1. Add the "Agent-based SCIM provisioning" app from the [gallery](../../active-directory/manage-apps/add-application-portal.md).
+ 1. Add the "On-premises SCIM app" from the [gallery](../../active-directory/manage-apps/add-application-portal.md).
2. Navigate to your app > Provisioning > Download the provisioning agent. 3. Click on on-premises connectivity and download the provisioning agent. 4. Copy the agent onto the virtual machine or server that your SCIM endpoint is hosted on.
active-directory Skip Out Of Scope Deletions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/skip-out-of-scope-deletions.md
In the URL below replace [servicePrincipalId] with the **ServicePrincipalId** e
```http PUT https://graph.microsoft.com/beta/servicePrincipals/[servicePrincipalId]/synchronization/secrets ```
-Copy the updated text from Step 3 into the "Request Body" and set the header "Content-Type" to "application/json" in "Request Headers".
+Copy the updated text from Step 3 into the "Request Body".
![PUT request](./media/skip-out-of-scope-deletions/skip-05.png)
active-directory Tutorial Ecma Sql Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/tutorial-ecma-sql-connector.md
Previously updated : 06/04/2021 Last updated : 07/01/2021
The Generic SQL Connector is a DSN file to connect to the SQL server. First we n
2. In the portal, navigate to Azure Active Directory, **Enterprise Applications**. 3. Click on **New Application**. ![Add new application](.\media\on-premises-ecma-configure\configure-4.png)
-4. Search the gallery for the test application **on-premises provisioning** and click **Create**.
- ![Create new application](.\media\tutorial-ecma-sql-connector\app-1.png)
+4. Search the gallery for **On-premises ECMA app** and click **Create**.
## Step 8 - Configure the application and test 1. Once it has been created, click the **Provisioning** page.
active-directory Quickstart V2 Aspnet Core Webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp.md
In this quickstart, you download and run a code sample that demonstrates how an
> 1. Open the solution in Visual Studio 2019. > 1. Open the *appsettings.json* file and modify the following code: >
-> ```json
-> "Domain": "Enter the domain of your tenant, e.g. contoso.onmicrosoft.com",
-> "ClientId": "Enter_the_Application_Id_here",
-> "TenantId": "common",
-> ```
>+
+ :::code language="json" source="~/sample-active-directory-aspnetcore-webapp-openidconnect-v2/appsettings.json" range="4,5,6":::
+ > - Replace `Enter_the_Application_Id_here` with the application (client) ID of the application that you registered in the Azure portal. You can find the **Application (client) ID** value on the app's **Overview** page. > - Replace `common` with one of the following: > - If your application supports **Accounts in this organizational directory only**, replace this value with the directory (tenant) ID (a GUID) or the tenant name (for example, `contoso.onmicrosoft.com`). You can find the **Directory (tenant) ID** value on the app's **Overview** page.
This section gives an overview of the code required to sign in users. This overv
The *Microsoft.AspNetCore.Authentication* middleware uses a `Startup` class that's run when the hosting process starts:
-```csharp
- public void ConfigureServices(IServiceCollection services)
- {
- services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
- .AddMicrosoftIdentityWebApp(Configuration.GetSection("AzureAd"));
-
- services.AddControllersWithViews(options =>
- {
- var policy = new AuthorizationPolicyBuilder()
- .RequireAuthenticatedUser()
- .Build();
- options.Filters.Add(new AuthorizeFilter(policy));
- });
- services.AddRazorPages()
- .AddMicrosoftIdentityUI();
- }
-```
+
+ :::code language="csharp" source="~/sample-active-directory-aspnetcore-webapp-openidconnect-v2/Startup.cs" id="Configure_service_ref_for_docs_ms" highlight="3,4":::
+ The `AddAuthentication()` method configures the service to add cookie-based authentication. This authentication is used in browser scenarios and to set the challenge to OpenID Connect.
The line that contains `.AddMicrosoftIdentityWebApp` adds Microsoft identity pla
The `Configure()` method contains two important methods, `app.UseAuthentication()` and `app.UseAuthorization()`, that enable their named functionality. Also in the `Configure()` method, you must register Microsoft Identity Web routes with at least one call to `endpoints.MapControllerRoute()` or a call to `endpoints.MapControllers()`:
-```csharp
-app.UseAuthentication();
-app.UseAuthorization();
-
-app.UseEndpoints(endpoints =>
-{
-
- endpoints.MapControllerRoute(
- name: "default",
- pattern: "{controller=Home}/{action=Index}/{id?}");
- endpoints.MapRazorPages();
-});
+ :::code language="csharp" source="~/sample-active-directory-aspnetcore-webapp-openidconnect-v2/Startup.cs" id="endpoint_map_ref_for_docs_ms":::
-// endpoints.MapControllers(); // REQUIRED if MapControllerRoute() isn't called.
-```
### Attribute for protecting a controller or methods
active-directory Reference Claims Mapping Policy Type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reference-claims-mapping-policy-type.md
Previously updated : 06/03/2021 Last updated : 07/01/2021
Based on the method chosen, a set of inputs and outputs is expected. Define the
| ExtractMailPrefix | None | | Join | The suffix being joined must be a verified domain of the resource tenant. |
-### Cross-tenant scenarios
-
-Claims mapping policies do not apply to guest users. If a guest user tries to access an application with a claims mapping policy assigned to its service principal, the default token is issued (the policy has no effect).
-- ## Next steps - To learn how to customize the claims emitted in tokens for a specific application in their tenant using PowerShell, see [How to: Customize claims emitted in tokens for a specific app in a tenant](active-directory-claims-mapping.md)
active-directory Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/conditional-access.md
There are various factors that influence CA policies for B2B guest users.
### Device-based Conditional Access
-In CA, there's an option to require a user's [device to be Compliant or Hybrid Azure AD joined](../conditional-access/concept-conditional-access-conditions.md#device-state-preview). B2B guest users can only satisfy compliance if the resource tenant can manage their device. Devices cannot be managed by more than one organization at a time. B2B guest users can't satisfy the Hybrid Azure AD join because they don't have an on-premises AD account. Only if the guest user's device is unmanaged, they can register or enroll their device in the resource tenant and then make the device compliant. The user can then satisfy the grant control.
+In CA, there's an option to require a user's [device to be Compliant or Hybrid Azure AD joined](../conditional-access/concept-conditional-access-conditions.md#device-state-preview). B2B guest users can only satisfy compliance if the resource tenant can manage their device. Devices cannot be managed by more than one organization at a time. B2B guest users can't satisfy the Hybrid Azure AD join because they don't have an on-premises AD account.
>[!Note] >It is not recommended to require a managed device for external users. ### Mobile application management policies
-The CA grant controls such as **Require approved client apps** and **Require app protection policies** need the device to be registered in the tenant. These controls can only be applied to [iOS and Android devices](../conditional-access/concept-conditional-access-conditions.md#device-platforms). However, neither of these controls can be applied to B2B guest users if the user's device is already being managed by another organization. A mobile device cannot be registered in more than one tenant at a time. If the mobile device is managed by another organization, the user will be blocked. Only if the guest user's device is unmanaged, they can register their device in the resource tenant. The user can then satisfy the grant control.
+The CA grant controls such as **Require approved client apps** and **Require app protection policies** need the device to be registered in the tenant. These controls can only be applied to [iOS and Android devices](../conditional-access/concept-conditional-access-conditions.md#device-platforms). However, neither of these controls can be applied to B2B guest users if the user's device is already being managed by another organization. A mobile device cannot be registered in more than one tenant at a time. If the mobile device is managed by another organization, the user will be blocked.
>[!NOTE] >It is not recommended to require an app protection policy for external users.
active-directory Google Federation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/google-federation.md
After you've added Google as one of your application's sign-in options, on the *
> Google federation is designed specifically for Gmail users. To federate with G Suite domains, use [SAML/WS-Fed identity provider federation](direct-federation.md). > [!IMPORTANT]
-> **Starting in the second half of 2021**, Google is [deprecating web-view sign-in support](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). If you're using Google federation for B2B invitations or [Azure AD B2C](../../active-directory-b2c/identity-provider-google.md), or if you're using self-service sign-up with Gmail, Google Gmail users won't be able to sign in if your apps authenticate users with an embedded web-view. [Learn more](#deprecation-of-web-view-sign-in-support).
+> **Starting September 30, 2021**, Google is [deprecating web-view sign-in support](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). If you're using Google federation for B2B invitations or [Azure AD B2C](../../active-directory-b2c/identity-provider-google.md), or if you're using self-service sign-up with Gmail, Google Gmail users won't be able to sign in if your apps authenticate users with an embedded web-view. [Learn more](#deprecation-of-web-view-sign-in-support).
## What is the experience for the Google user?
You can also give Google guest users a direct link to an application or resource
## Deprecation of web-view sign-in support
-Starting in the second half of 2021, Google is [deprecating embedded web-view sign-in support](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). If you're using Google federation for B2B or [Azure AD B2C](../../active-directory-b2c/identity-provider-google.md), or if you're using [self-service sign-up with Gmail](identity-providers.md), if your apps authenticate users with an embedded web-view, Google Gmail users won't be able to authenticate.
+Starting September 30, 2021, Google is [deprecating embedded web-view sign-in support](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). If you're using Google federation for B2B or [Azure AD B2C](../../active-directory-b2c/identity-provider-google.md), or if you're using [self-service sign-up with Gmail](identity-providers.md), and your apps authenticate users with an embedded web-view, Google Gmail users won't be able to authenticate.
The following are known scenarios that will impact Gmail users: - Windows apps that use the [WebView](/windows/communitytoolkit/controls/wpf-winforms/webview) control, [WebView2](/microsoft-edge/webview2/), or the older WebBrowser control, for authentication. These apps should migrate to using the Web Account Manager (WAM) flow.
We're confirming with Google whether this change affects the following:
- Windows apps that use the Web Account Manager (WAM) or Web Authentication Broker (WAB). We're continuing to test various platforms and scenarios, and will update this article accordingly.+ ### Action needed for embedded web-views+ Modify your apps to use the system browser for sign-in. For details, see [Embedded vs System Web UI](../develop/msal-net-web-browsers.md#embedded-vs-system-web-ui) in the MSAL.NET documentation. All MSAL SDKs use the system web-view by default.+ ### What to expect
-Before Google puts these changes into place in the second half of 2021, Microsoft will deploy a workaround for apps still using embedded web-views to ensure that authentication isn't blocked.
+
+Before Google puts these changes into place on September 30, 2021, Microsoft will deploy a workaround for apps still using embedded web-views to ensure that authentication isn't blocked. Users who sign in with a Gmail account in an embedded web-view will be prompted to enter a code in a separate browser to finish signing in.
Applications that are migrated to an allowed web-view for authentication won't be affected, and users will be allowed to authenticate via Google as usual.
If applications are not migrated to an allowed web-view for authentication, then
We will update this document as dates and further details are shared by Google. ### Distinguishing between CEF/Electron and embedded web-views
-In addition to the [deprecation of embedded web-view and framework sign-in support](#deprecation-of-web-view-sign-in-support), Google is also [deprecating Chromium Embedded Framework (CEF) based Gmail authentication](https://developers.googleblog.com/2020/08/guidance-for-our-effort-to-block-less-secure-browser-and-apps.html). For applications built on CEF, such as Electron apps, Google will disable authentication on June 30, 2021. Impacted applications have received notice from Google directly, and are not covered in this documentation. This document pertains to the embedded web-views described above, which Google will restrict at a separate date later in 2021.
+
+In addition to the [deprecation of embedded web-view and framework sign-in support](#deprecation-of-web-view-sign-in-support), Google is also [deprecating Chromium Embedded Framework (CEF) based Gmail authentication](https://developers.googleblog.com/2020/08/guidance-for-our-effort-to-block-less-secure-browser-and-apps.html). For applications built on CEF, such as Electron apps, Google will disable authentication on June 30, 2021. Impacted applications have received notice from Google directly, and are not covered in this documentation. This document pertains to the embedded web-views described above, which Google will restrict on a separate date, September 30, 2021.
### Action needed for embedded frameworks+ Follow [Google's guidance](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html) to determine if your apps are affected. ## Step 1: Configure a Google developer project+ First, create a new project in the Google Developers Console to obtain a client ID and a client secret that you can later add to Azure Active Directory (Azure AD). 1. Go to the Google APIs at https://console.developers.google.com, and sign in with your Google account. We recommend that you use a shared team Google account. 2. Accept the terms of service if you're prompted to do so.
First, create a new project in the Google Developers Console to obtain a client
![Screenshot that shows the OAuth client ID and client secret.](media/google-federation/google-auth-client-id-secret.png) ## Step 2: Configure Google federation in Azure AD + You'll now set the Google client ID and client secret. You can use the Azure portal or PowerShell to do so. Be sure to test your Google federation configuration by inviting yourself. Use a Gmail address and try to redeem the invitation with your invited Google account. **To configure Google federation in the Azure portal**
You'll now set the Google client ID and client secret. You can use the Azure por
> Use the client ID and client secret from the app you created in "Step 1: Configure a Google developer project." For more information, see [New-AzureADMSIdentityProvider](/powershell/module/azuread/new-azureadmsidentityprovider?view=azureadps-2.0-preview&preserve-view=true). ## How do I remove Google federation?+ You can delete your Google federation setup. If you do so, Google guest users who have already redeemed their invitation won't be able to sign in. But you can give them access to your resources again by [resetting their redemption status](reset-redemption-status.md). **To delete Google federation in the Azure AD portal**
You can delete your Google federation setup. If you do so, Google guest users wh
`Remove-AzureADMSIdentityProvider -Id Google-OAUTH` > [!NOTE]
- > For more information, see [Remove-AzureADMSIdentityProvider](/powershell/module/azuread/Remove-AzureADMSIdentityProvider?view=azureadps-2.0-preview&preserve-view=true).
+ > For more information, see [Remove-AzureADMSIdentityProvider](/powershell/module/azuread/Remove-AzureADMSIdentityProvider?view=azureadps-2.0-preview&preserve-view=true).
active-directory Identity Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/identity-providers.md
Previously updated : 03/02/2021 Last updated : 07/01/2021
In addition to Azure AD accounts, External Identities offers a variety of identi
- **Google**: Google federation allows external users to redeem invitations from you by signing in to your apps with their own Gmail accounts. Google federation can also be used in your self-service sign-up user flows. See how to [add Google as an identity provider](google-federation.md). > [!IMPORTANT]
- > **Starting in the second half of 2021**, Google is [deprecating web-view sign-in support](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). If you're using Google federation for B2B invitations or [Azure AD B2C](../../active-directory-b2c/identity-provider-google.md), or if you're using self-service sign-up with Gmail, Google Gmail users won't be able to sign in if your apps authenticate users with an embedded web-view. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support).
+ > **Starting September 30th, 2021**, Google is [deprecating web-view sign-in support](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). If you're using Google federation for B2B invitations or [Azure AD B2C](../../active-directory-b2c/identity-provider-google.md), or if you're using self-service sign-up with Gmail, Google Gmail users won't be able to sign in if your apps authenticate users with an embedded web-view. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support).
- **Facebook**: When building an app, you can configure self-service sign-up and enable Facebook federation so that users can sign up for your app using their own Facebook accounts. Facebook can only be used for self-service sign-up user flows and isn't available as a sign-in option when users are redeeming invitations from you. See how to [add Facebook as an identity provider](facebook-federation.md).
active-directory Invitation Email Elements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/invitation-email-elements.md
The subject of the email follows this pattern:
We use a LinkedIn-like pattern for the From address. This pattern should make it clear that although the email comes from invites@microsoft.com, the invitation is from another organization. The format is: Microsoft Invitations <invites@microsoft.com> or Microsoft invitations on behalf of &lt;tenantname&gt; <invites@microsoft.com>.
-> For the Azure service operated by 21Vianet in China, the sender address is Invites@oe.21vianet.com.
+> For the Azure service operated by 21Vianet in China, the sender address is Invites@oe.21vianet.com.
+> For Azure AD Government, the sender address is invites@azuread.us.
### Reply To
active-directory Redemption Experience https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/redemption-experience.md
Previously updated : 05/27/2021 Last updated : 07/01/2021
This article describes the ways guest users can access your resources and the co
When you add a guest user to your directory, the guest user account has a consent status (viewable in PowerShell) thatΓÇÖs initially set to **PendingAcceptance**. This setting remains until the guest accepts your invitation and agrees to your privacy policy and terms of use. After that, the consent status changes to **Accepted**, and the consent pages are no longer presented to the guest. > [!IMPORTANT]
- > - **Starting in the second half of 2021**, Google is [deprecating web-view sign-in support](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). If you're using Google federation for B2B invitations or [Azure AD B2C](../../active-directory-b2c/identity-provider-google.md), or if you're using self-service sign-up with Gmail, Google Gmail users won't be able to sign in if your apps authenticate users with an embedded web-view. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support).
+ > - **Starting September 30th, 2021**, Google is [deprecating web-view sign-in support](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). If you're using Google federation for B2B invitations or [Azure AD B2C](../../active-directory-b2c/identity-provider-google.md), or if you're using self-service sign-up with Gmail, Google Gmail users won't be able to sign in if your apps authenticate users with an embedded web-view. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support).
> - **Starting October 2021**, Microsoft will no longer support the redemption of invitations by creating unmanaged Azure AD accounts and tenants for B2B collaboration scenarios. In preparation, we encourage customers to opt into [email one-time passcode authentication](one-time-passcode.md), which is now generally available. ## Redemption and sign-in through a common endpoint
active-directory Self Service Sign Up Add Api Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/self-service-sign-up-add-api-connector.md
Previously updated : 03/02/2021 Last updated : 07/01/2021
To use an [API connector](api-connectors-overview.md), you first create the API connector and then enable it in a user flow. > [!IMPORTANT]
-> **Starting in the second half of 2021**, Google is [deprecating web-view sign-in support](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). If you're using Google federation for B2B invitations or [Azure AD B2C](../../active-directory-b2c/identity-provider-google.md), or if you're using self-service sign-up with Gmail, Google Gmail users won't be able to sign in if your apps authenticate users with an embedded web-view. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support).
+> **Starting September 30th, 2021**, Google is [deprecating web-view sign-in support](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). If you're using Google federation for B2B invitations or [Azure AD B2C](../../active-directory-b2c/identity-provider-google.md), or if you're using self-service sign-up with Gmail, Google Gmail users won't be able to sign in if your apps authenticate users with an embedded web-view. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support).
## Create an API connector
active-directory Self Service Sign Up Add Approvals https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/self-service-sign-up-add-approvals.md
Previously updated : 03/02/2021 Last updated : 07/01/2021
This article gives an example of how to integrate with an approval system. In th
- Trigger a manual review. If the request is approved, the approval system uses Microsoft Graph to provision the user account. The approval system can also notify the user that their account has been created. > [!IMPORTANT]
-> **Starting in the second half of 2021**, Google is [deprecating web-view sign-in support](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). If you're using Google federation for B2B invitations or [Azure AD B2C](../../active-directory-b2c/identity-provider-google.md), or if you're using self-service sign-up with Gmail, Google Gmail users won't be able to sign in if your apps authenticate users with an embedded web-view. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support).
+> **Starting September 30th, 2021**, Google is [deprecating web-view sign-in support](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). If you're using Google federation for B2B invitations or [Azure AD B2C](../../active-directory-b2c/identity-provider-google.md), or if you're using self-service sign-up with Gmail, Google Gmail users won't be able to sign in if your apps authenticate users with an embedded web-view. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support).
## Register an application for your approval system
active-directory Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/troubleshoot.md
Previously updated : 05/27/2021 Last updated : 07/01/2021 tags: active-directory - - it-pro - seo-update-azuread-jan"
Here are some remedies for common problems with Azure Active Directory (Azure AD) B2B collaboration. > [!IMPORTANT]
- > - **Starting in the second half of 2021**, Google is [deprecating web-view sign-in support](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). If you're using Google federation for B2B invitations or [Azure AD B2C](../../active-directory-b2c/identity-provider-google.md), or if you're using self-service sign-up with Gmail, Google Gmail users won't be able to sign in if your apps authenticate users with an embedded web-view. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support).
+ > - **Starting September 30th, 2021**, Google is [deprecating web-view sign-in support](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). If you're using Google federation for B2B invitations or [Azure AD B2C](../../active-directory-b2c/identity-provider-google.md), or if you're using self-service sign-up with Gmail, Google Gmail users won't be able to sign in if your apps authenticate users with an embedded web-view. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support).
> - **Starting October 2021**, Microsoft will no longer support the redemption of invitations by creating unmanaged Azure AD accounts and tenants for B2B collaboration scenarios. In preparation, we encourage customers to opt into [email one-time passcode authentication](one-time-passcode.md), which is now generally available. ## IΓÇÖve added an external user but do not see them in my Global Address Book or in the people picker
active-directory What Is B2b https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/what-is-b2b.md
Previously updated : 05/27/2021 Last updated : 07/01/2021 -
Azure Active Directory (Azure AD) business-to-business (B2B) collaboration is a feature within External Identities that lets you invite guest users to collaborate with your organization. With B2B collaboration, you can securely share your company's applications and services with guest users from any other organization, while maintaining control over your own corporate data. Work safely and securely with external partners, large or small, even if they don't have Azure AD or an IT department. A simple invitation and redemption process lets partners use their own credentials to access your company's resources. Developers can use Azure AD business-to-business APIs to customize the invitation process or write applications like self-service sign-up portals. For licensing and pricing information related to guest users, refer to [Azure Active Directory pricing](https://azure.microsoft.com/pricing/details/active-directory/). > [!IMPORTANT]
-> - **Starting in the second half of 2021**, Google is [deprecating web-view sign-in support](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). If you're using Google federation for B2B invitations or [Azure AD B2C](../../active-directory-b2c/identity-provider-google.md), or if you're using self-service sign-up with Gmail, Google Gmail users won't be able to sign in if your apps authenticate users with an embedded web-view. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support).
+> - **Starting September 30th, 2021**, Google is [deprecating web-view sign-in support](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). If you're using Google federation for B2B invitations or [Azure AD B2C](../../active-directory-b2c/identity-provider-google.md), or if you're using self-service sign-up with Gmail, Google Gmail users won't be able to sign in if your apps authenticate users with an embedded web-view. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support).
> - **Starting October 2021**, Microsoft will no longer support the redemption of invitations by creating unmanaged Azure AD accounts and tenants for B2B collaboration scenarios. In preparation, we encourage customers to opt into [email one-time passcode authentication](one-time-passcode.md), which is now generally available. ## Collaborate with any partner using their identities
active-directory 6 Secure Access Entitlement Managment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/6-secure-access-entitlement-managment.md
An [access package](../governance/entitlement-management-overview.md) is the fou
* enterprise applications including your custom in-house and SaaS apps like Salesforce.
-* Microsoft Teams channels.
+* Microsoft Teams.
* Microsoft 365 Groups.
active-directory Customize Branding https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/customize-branding.md
You can customize your Azure AD sign-in pages, which appear when users sign in t
Your custom branding won't immediately appear when your users go to sites such as www\.office.com. Instead, the user has to sign in before your customized branding appears. After the user has signed in, the branding may take 15 minutes or longer to appear.
-> All branding elements are optional. For example, if you specify a banner logo with no background image, the sign-in page will show your logo with a default background image from the destination site (for example, Microsoft 365).<br><br>Additionally, sign-in page branding doesn't carry over to personal Microsoft accounts. If your users or business guests sign in using a personal Microsoft account, the sign-in page won't reflect the branding of your organization.
+> **All branding elements are optional and will remain default when unchanged.** For example, if you specify a banner logo with no background image, the sign-in page will show your logo with a default background image from the destination site such as Microsoft 365.<br><br>Additionally, sign-in page branding doesn't carry over to personal Microsoft accounts. If your users or business guests sign in using a personal Microsoft account, the sign-in page won't reflect the branding of your organization.
### To customize your branding 1. Sign in to the [Azure portal](https://portal.azure.com/) using a Global administrator account for the directory.
active-directory Entitlement Management Access Package Assignments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-assignments.md
To use Azure AD entitlement management and assign users to access packages, you
1. To download a CSV file of the filtered list, click **Download**.
-### Viewing assignments programmatically
-
+## View assignments programmatically
+### View assignments with Microsoft Graph
You can also retrieve assignments in an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission can call the API to [list accessPackageAssignments](/graph/api/accesspackageassignment-list?view=graph-rest-beta&preserve-view=true). While an identity governance administrator can retrieve access packages from multiple catalogs, if a user is assigned only to catalog-specific delegated administrative roles, the request must supply a filter to indicate a specific access package, such as: `$filter=accessPackage/id eq 'a914b616-e04e-476b-aa37-91038f0b165b'`. An application that has the `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` application permission can also use this API.
+### View assignments with PowerShell
+
+You can perform this query in PowerShell with the `Get-MgEntitlementManagementAccessPackageAssignment` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.6.0 or later. This cmdlet takes as a parameter the access package ID, which is included in the response from the `Get-MgEntitlementManagementAccessPackage` cmdlet.
+
+```powershell
+Connect-MgGraph -Scopes "EntitlementManagement.Read.All"
+Select-MgProfile -Name "beta"
+$accesspackage = Get-MgEntitlementManagementAccessPackage -DisplayNameEq "Marketing Campaign"
+$assignments = Get-MgEntitlementManagementAccessPackageAssignment -AccessPackageId $accesspackage.Id -ExpandProperty target -All -ErrorAction Stop
+$assignments | ft Id,AssignmentState,TargetId,{$_.Target.DisplayName}
+```
+ ## Directly assign a user In some cases, you might want to directly assign specific users to an access package so that users don't have to go through the process of requesting the access package. To directly assign users, the access package must have a policy that allows administrator direct assignments.
In some cases, you might want to directly assign specific users to an access pac
After a few moments, click **Refresh** to see the users in the Assignments list.
-### Directly assigning users programmatically
-
+## Directly assigning users programmatically
+### Assign a user to an access package with Microsoft Graph
You can also directly assign a user to an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission, or an application with that application permission, can call the API to [create an accessPackageAssignmentRequest](/graph/api/accesspackageassignmentrequest-post?view=graph-rest-beta&preserve-view=true). In this request, the value of the `requestType` property should be `AdminAdd`, and the `accessPackageAssignment` property is a structure that contains the `targetId` of the user being assigned.
+### Assign a user to an access package with PowerShell
+
+You can assign a user to an access package in PowerShell with the `New-MgEntitlementManagementAccessPackageAssignmentRequest` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.6.0 or later. This cmdlet takes as parameters
+* the access package ID, which is included in the response from the `Get-MgEntitlementManagementAccessPackage` cmdlet,
+* the access package assignment policy ID, which is included in the response from the `Get-MgEntitlementManagementAccessPackageAssignmentPolicy` cmdlet,
+* the object ID of the target user.
+
+```powershell
+Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"
+Select-MgProfile -Name "beta"
+$accesspackage = Get-MgEntitlementManagementAccessPackage -DisplayNameEq "Marketing Campaign" -ExpandProperty "accessPackageAssignmentPolicies"
+$policy = $accesspackage.AccessPackageAssignmentPolicies[0]
+$req = New-MgEntitlementManagementAccessPackageAssignmentRequest -AccessPackageId $accesspackage.Id -AssignmentPolicyId $policy.Id -TargetId "a43ee6df-3cc5-491a-ad9d-ea964ef8e464"
+```
+ ## Remove an assignment **Prerequisite role:** Global administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
You can also directly assign a user to an access package using Microsoft Graph.
A notification will appear informing you that the assignment has been removed.
-### Removing an assignment programmatically
-
+## Remove an assignment programmatically
+### Remove an assignment with Microsoft Graph
You can also remove an assignment of a user to an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission, or an application with that application permission, can call the API to [create an accessPackageAssignmentRequest](/graph/api/accesspackageassignmentrequest-post?view=graph-rest-beta&preserve-view=true). In this request, the value of the `requestType` property should be `AdminRemove`, and the `accessPackageAssignment` property is a structure that contains the `id` property identifying the `accessPackageAssignment` being removed.
+### Remove an assignment with PowerShell
+
+You can remove a user's assignment in PowerShell with the `New-MgEntitlementManagementAccessPackageAssignmentRequest` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.6.0 or later.
+
+```powershell
+Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"
+Select-MgProfile -Name "beta"
+$assignments = Get-MgEntitlementManagementAccessPackageAssignment -Filter "accessPackageId eq '9f573551-f8e2-48f4-bf48-06efbb37c7b8' and assignmentState eq 'Delivered'" -All -ErrorAction Stop
+$toRemove = $assignments | Where-Object {$_.targetId -eq '76fd6e6a-c390-42f0-879e-93ca093321e7'}
+$req = New-MgEntitlementManagementAccessPackageAssignmentRequest -AccessPackageAssignmentId $toRemove.Id -RequestType "AdminRemove"
+```
+ ## Next steps - [Change request and settings for an access package](entitlement-management-access-package-request-policy.md)
active-directory Entitlement Management Catalog Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-catalog-create.md
A catalog is a container of resources and access packages. You create a catalog
1. Click **Create** to create the catalog.
-### Creating a catalog programmatically
+## Create a catalog programmatically
+### Create a catalog with Microsoft Graph
You can also create a catalog using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission, or an application with that application permission, can call the API to [create an accessPackageCatalog](/graph/api/accesspackagecatalog-post?view=graph-rest-beta&preserve-view=true).
+### Create a catalog with PowerShell
+
+You can create a catalog in PowerShell with the `New-MgEntitlementManagementAccessPackageCatalog` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.6.0 or later.
+
+```powershell
+Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"
+Select-MgProfile -Name "beta"
+$catalog = New-MgEntitlementManagementAccessPackageCatalog -DisplayName "Marketing"
+```
+ ## Add resources to a catalog To include resources in an access package, the resources must exist in a catalog. The types of resources you can add are groups, applications, and SharePoint Online sites. The groups can be cloud-created Microsoft 365 Groups or cloud-created Azure AD security groups. The applications can be Azure AD enterprise applications, including both SaaS applications and your own applications federated to Azure AD. The sites can be SharePoint Online sites or SharePoint Online site collections.
active-directory Quickstart Azure Monitor Route Logs To Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md
na Previously updated : 04/18/2019 Last updated : 05/05/2021
To use this feature, you need:
1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Select **Azure Active Directory** > **Activity** > **Audit logs**.
+2. Select **Azure Active Directory** > **Monitoring** > **Audit logs**.
-3. Select **Export Settings**.
+3. Select **Export Data Settings**.
4. In the **Diagnostics settings** pane, do either of the following:
- * To change existing settings, select **Edit setting**.
- * To add new settings, select **Add diagnostics setting**.
- You can have up to three settings.
+ 1. To change an existing setting, select **Edit setting** next to the diagnostic setting you want to update.
+ 1. To add new settings, select **Add diagnostic setting**.
- ![Export settings](./media/quickstart-azure-monitor-route-logs-to-storage-account/ExportSettings.png)
+ You can have up to three settings.
-5. Enter a friendly name for the setting to remind you of its purpose (for example, *Send to Azure storage account*).
+ ![Export settings](./media/quickstart-azure-monitor-route-logs-to-storage-account/ExportSettings.png)
-6. Select the **Archive to a storage account** check box, and then select **Storage account**.
+5. Once in the **Diagnostic setting** pane, if you are creating a new setting, enter a name for the setting to remind you of its purpose (for example, *Send to Azure storage account*). You can't change the name of an existing setting.
-7. Select the Azure subscription and storage account that you want to route the logs to.
-
-8. Select **OK** to exit the configuration.
+6. Under **Destination Details**, select the **Archive to a storage account** check box.
-9. Do either or both of the following:
- * To send audit logs to the storage account, select the **AuditLogs** check box.
- * To send sign-in logs to the storage account, select the **SignInLogs** check box.
+7. In the **Subscription** and **Storage account** drop-down menus, select the Azure subscription and the storage account that you want to route the logs to.
-10. Use the slider to set the retention of your log data. By default, this value is *0*, which means that logs are retained in the storage account indefinitely. If you set a different value, events older than the number of days selected are automatically cleaned up.
+8. Select all the relevant categories under **Category details**:
-11. Select **Save** to save the setting.
+ Do either or both of the following:
+ 1. Select the **AuditLogs** check box to send audit logs to the storage account.
+
+ 1. Select the **SignInLogs** check box to send sign-in logs to the storage account.
![Diagnostics settings](./media/quickstart-azure-monitor-route-logs-to-storage-account/DiagnosticSettings.png)
-12. After about 15 minutes, verify that the logs are pushed to your storage account. Go to the [Azure portal](https://portal.azure.com), select **Storage accounts**, select the storage account that you used earlier, and then select **Blobs**. For **Audit logs**, select **insights-log-audit**. For **Sign-in logs**, select **insights-logs-signin**.
+9. After the categories have been selected, in the **Retention days** field, enter the number of days you want to retain your log data. By default, this value is *0*, which means that logs are retained in the storage account indefinitely. If you set a different value, events older than the number of days selected are automatically cleaned up.
+
+10. Select **Save** to save the setting.
- ![Storage account](./media/quickstart-azure-monitor-route-logs-to-storage-account/StorageAccount.png)
+11. Close the window to return to the Diagnostic settings pane.
## Next steps * [Interpret audit logs schema in Azure Monitor](./overview-reports.md) * [Interpret sign-in logs schema in Azure Monitor](reference-azure-monitor-sign-ins-log-schema.md)
-* [Frequently asked questions and known issues](concept-activity-logs-azure-monitor.md#frequently-asked-questions)
+* [Frequently asked questions and known issues](concept-activity-logs-azure-monitor.md#frequently-asked-questions)
aks Azure Files Csi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/azure-files-csi.md
If your Azure Files resources are protected with a private endpoint, you must cr
* `storageAccount`: The storage account name. * `server`: The FQDN of the storage account's private endpoint (for example, `<storage account name>.privatelink.file.core.windows.net`).
-Create a file named *private-azure-file-sc.yaml*, and then paste the following example manifest in the file. Replace the valules for `<resourceGroup>` and `<storageAccountName>`.
+Create a file named *private-azure-file-sc.yaml*, and then paste the following example manifest in the file. Replace the values for `<resourceGroup>` and `<storageAccountName>`.
```yaml apiVersion: storage.k8s.io/v1
kubectl apply -f private-pvc.yaml
## NFS file shares
-[Azure Files now has support for NFS v4.1 protocol](../storage/files/storage-files-how-to-create-nfs-shares.md). NFS 4.1 support for Azure Files provides you with a fully managed NFS file system as a service built on a highly available and highly durable distributed resilient storage platform.
+[Azure Files supports the NFS v4.1 protocol](../storage/files/storage-files-how-to-create-nfs-shares.md). NFS 4.1 support for Azure Files provides you with a fully managed NFS file system as a service built on a highly available and highly durable distributed resilient storage platform.
This option is optimized for random access workloads with in-place data updates and provides full POSIX file system support. This section shows you how to use NFS shares with the Azure File CSI driver on an AKS cluster.
-Make sure to check the [limitations](../storage/files/files-nfs-protocol.md#limitations) and [region availability](../storage/files/files-nfs-protocol.md#regional-availability) during the preview phase.
+Make sure to check the [Support for Azure Storage features](../storage/files/files-nfs-protocol.md#support-for-azure-storage-features) and [region availability](../storage/files/files-nfs-protocol.md#regional-availability) sections during the preview phase.
### Create NFS file share storage class
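A minimal sketch of what an NFS storage class for this driver could look like follows; the class name, the `protocol: nfs` parameter, and the premium SKU are assumptions based on the Azure Files CSI driver rather than text from this section:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-csi-nfs
provisioner: file.csi.azure.com
parameters:
  protocol: nfs        # request an NFS 4.1 share instead of SMB
  skuName: Premium_LRS # NFS shares require a premium file storage account
```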
app-service Configure Authentication Oauth Tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-authentication-oauth-tokens.md
From your server code, the provider-specific tokens are injected into the reques
| Twitter | `X-MS-TOKEN-TWITTER-ACCESS-TOKEN` <br/> `X-MS-TOKEN-TWITTER-ACCESS-TOKEN-SECRET` | |||
+> [!NOTE]
+> Different language frameworks may present these headers to the app code in different formats, such as lowercase or title case.
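For example, here's a minimal sketch of reading one of these injected headers from server code in an ASP.NET Core controller (the controller and the choice of the Twitter header from the table above are illustrative assumptions):

```csharp
using Microsoft.AspNetCore.Mvc;

public class TokenController : Controller
{
    public IActionResult Index()
    {
        // App Service authentication injects the provider token into the request headers.
        string twitterToken = Request.Headers["X-MS-TOKEN-TWITTER-ACCESS-TOKEN"];

        return Ok(string.IsNullOrEmpty(twitterToken)
            ? "No Twitter access token was found on the request."
            : "A Twitter access token was injected by App Service authentication.");
    }
}
```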
+ From your client code (such as a mobile app or in-browser JavaScript), send an HTTP `GET` request to `/.auth/me` ([token store](overview-authentication-authorization.md#token-store) must be enabled). The returned JSON has the provider-specific tokens. > [!NOTE]
app-service Configure Authentication User Identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-authentication-user-identities.md
For all language frameworks, App Service makes the claims in the incoming token
Code that is written in any language or framework can get the information that it needs from these headers.
+> [!NOTE]
+> Different language frameworks may present these headers to the app code in different formats, such as lowercase or title case.
+ For ASP.NET 4.6 apps, App Service populates [ClaimsPrincipal.Current](/dotnet/api/system.security.claims.claimsprincipal.current) with the authenticated user's claims, so you can follow the standard .NET code pattern, including the `[Authorize]` attribute. Similarly, for PHP apps, App Service populates the `_SERVER['REMOTE_USER']` variable. For Java apps, the claims are [accessible from the Tomcat servlet](configure-language-java.md#authenticate-users-easy-auth). For [Azure Functions](../azure-functions/functions-overview.md), `ClaimsPrincipal.Current` is not populated for .NET code, but you can still find the user claims in the request headers, or get the `ClaimsPrincipal` object from the request context or even through a binding parameter. See [working with client identities in Azure Functions](../azure-functions/functions-bindings-http-webhook-trigger.md#working-with-client-identities) for more information.
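For frameworks without a built-in claims principal, a hedged sketch of decoding the injected headers in Python could look like the following. The header names used here (`X-MS-CLIENT-PRINCIPAL-NAME` and the base64-encoded `X-MS-CLIENT-PRINCIPAL`) are assumptions drawn from App Service's claim header conventions, and the helper itself is hypothetical.
```python
import base64
import json


def user_from_headers(headers):
    """Decode the user identity that App Service injects into request headers.

    `headers` is assumed to be a case-insensitive mapping such as
    `flask.request.headers`; some frameworks surface the names in lowercase
    or title case, so a case-insensitive lookup is safest.
    """
    identity = {
        "name": headers.get("X-MS-CLIENT-PRINCIPAL-NAME"),
        "claims": [],
    }

    # The full claim set arrives as a base64-encoded JSON blob.
    principal_b64 = headers.get("X-MS-CLIENT-PRINCIPAL")
    if principal_b64:
        principal = json.loads(base64.b64decode(principal_b64))
        identity["claims"] = principal.get("claims", [])

    return identity
```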
application-gateway Ingress Controller Install New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/ingress-controller-install-new.md
Kubernetes. We will leverage it to install the `application-gateway-kubernetes-i
``` Values:
- - `verbosityLevel`: Sets the verbosity level of the AGIC logging infrastructure. See [Logging Levels](https://github.com/Azure/application-gateway-kubernetes-ingress/blob/463a87213bbc3106af6fce0f4023477216d2ad78/docs/troubleshooting.yml#logging-levels) for possible values.
+ - `verbosityLevel`: Sets the verbosity level of the AGIC logging infrastructure. See [Logging Levels](https://github.com/Azure/application-gateway-kubernetes-ingress/blob/463a87213bbc3106af6fce0f4023477216d2ad78/docs/troubleshooting.md#logging-levels) for possible values.
- `appgw.subscriptionId`: The Azure Subscription ID in which Application Gateway resides. Example: `a123b234-a3b4-557d-b2df-a0bc12de1234` - `appgw.resourceGroup`: Name of the Azure Resource Group in which Application Gateway was created. Example: `app-gw-resource-group` - `appgw.name`: Name of the Application Gateway. Example: `applicationgatewayd0f0`
azure-arc Privacy Data Collection And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/privacy-data-collection-and-reporting.md
There are three resource types:
- Arc enabled SQL Managed Instance - Arc enabled PostgreSQL Hyperscale server group -- Arc enabled SQL Server
+- SQL Server on Azure Arc-enabled servers
- Data controller The following sections show the properties, types, and descriptions that are collected and stored about each type of resource:
-### Arc enabled SQL Server
+### SQL Server on Azure Arc-enabled servers
- SQL Server edition. - `string: Edition` - Resource ID of the container resource (Azure Arc for Servers).
azure-cache-for-redis Cache Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-best-practices.md
Title: Best practices for Azure Cache for Redis description: Learn how to use your Azure Cache for Redis effectively by following these best practices.
+reviewer: shpathak
Last updated 01/06/2020
By following these best practices, you can help maximize the performance and cos
* **Use TLS encryption** - Azure Cache for Redis requires TLS encrypted communications by default. TLS versions 1.0, 1.1 and 1.2 are currently supported. However, TLS 1.0 and 1.1 are on a path to deprecation industry-wide, so use TLS 1.2 if at all possible. If your client library or tool doesn't support TLS, then enabling unencrypted connections can be done [through the Azure portal](cache-configure.md#access-ports) or [management APIs](/rest/api/redis/redis/update). In such cases where encrypted connections aren't possible, placing your cache and client application into a virtual network would be recommended. For more information about which ports are used in the virtual network cache scenario, see this [table](cache-how-to-premium-vnet.md#outbound-port-requirements).
-* **Idle Timeout** - Azure Cache for Redis currently has 10-minute idle timeout for connections, so your setting should be to less than 10 minutes. Most common client libraries have a configuration setting that allows client libraries to send Redis PING commands to a Redis server automatically and periodically. However, when using client libraries without this type of setting, customer applications themselves are responsible for keeping the connection alive.
+* **Idle Timeout** - Azure Cache for Redis currently has a 10-minute idle timeout for connections, so your keep-alive setting should be less than 10 minutes. Most common client libraries have a configuration setting that allows them to send Redis `PING` commands to a Redis server automatically and periodically (see the client sketch below). However, when using client libraries without this type of setting, customer applications themselves are responsible for keeping the connection alive.
+<!-- Most common client libraries have keep-alive configuration that pings Azure Redis automatically. However, in clients that don't have a keep-alive setting, customer applications are responsible for keeping the connection alive.
+ -->
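As a non-authoritative illustration of the keep-alive guidance, the following sketch uses the open-source `redis-py` client; the host name and access key are placeholders, and `health_check_interval` and `socket_keepalive` are that client's own options rather than anything specific to Azure Cache for Redis.
```python
import redis

# Placeholder values for an Azure Cache for Redis instance.
cache = redis.Redis(
    host="contoso.redis.cache.windows.net",
    port=6380,                  # TLS port
    password="<access-key>",
    ssl=True,                   # TLS-encrypted connection
    socket_keepalive=True,      # enable TCP keepalive on the socket
    health_check_interval=60,   # PING a connection idle for more than 60 s before reuse
)

print(cache.ping())
```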
## Memory management There are several things related to memory usage within your Redis server instance that you may want to consider. Here are a few:
azure-functions Functions Bindings Expressions Patterns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-expressions-patterns.md
module.exports = function (context, info) {
### Dot notation
-If some of the properties in your JSON payload are objects with properties, you can refer to those directly by using dot notation. For example, suppose your JSON looks like this:
+If some of the properties in your JSON payload are objects with properties, you can refer to those directly by using dot notation. Dot notation does not work for [Cosmos DB](./functions-bindings-cosmosdb-v2.md) or [Table storage](./functions-bindings-storage-table-output.md) bindings.
+
+For example, suppose your JSON looks like this:
```json {
azure-functions Functions Bindings Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-service-bus.md
This section describes the global configuration settings available for this bind
"messageWaitTimeout": "00:00:30", "maxAutoRenewDuration": "00:55:00", "maxConcurrentSessions": 16
+ },
+ "batchOptions": {
+ "maxMessageCount": 1000,
+ "operationTimeout": "00:01:00"
+ "autoComplete": "true"
} } } } ```
-If you have `isSessionsEnabled` set to `true`, the `sessionHandlerOptions` will be honored. If you have `isSessionsEnabled` set to `false`, the `messageHandlerOptions` will be honored.
+If you have `isSessionsEnabled` set to `true`, the `sessionHandlerOptions` is honored. If you have `isSessionsEnabled` set to `false`, the `messageHandlerOptions` is honored.
|Property |Default | Description | |||| |prefetchCount|0|Gets or sets the number of messages that the message receiver can simultaneously request.|
-|maxAutoRenewDuration|00:05:00|The maximum duration within which the message lock will be renewed automatically.|
-|autoComplete|true|Whether the trigger should automatically call complete after processing, or if the function code will manually call complete.<br><br>Setting to `false` is only supported in C#.<br><br>If set to `true`, the trigger completes the message automatically if the function execution completes successfully, and abandons the message otherwise.<br><br>When set to `false`, you are responsible for calling [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) methods to complete, abandon, or deadletter the message. If an exception is thrown (and none of the `MessageReceiver` methods are called), then the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented and the lock is automatically renewed.<br><br>In non-C# functions, exceptions in the function results in the runtime calls `abandonAsync` in the background. If no exception occurs, then `completeAsync` is called in the background. |
-|maxConcurrentCalls|16|The maximum number of concurrent calls to the callback that the message pump should initiate per scaled instance. By default, the Functions runtime processes multiple messages concurrently.|
-|maxConcurrentSessions|2000|The maximum number of sessions that can be handled concurrently per scaled instance.|
+|messageHandlerOptions.maxAutoRenewDuration|00:05:00|The maximum duration within which the message lock will be renewed automatically.|
+|messageHandlerOptions.autoComplete|true|Whether the trigger should automatically call complete after processing, or if the function code will manually call complete.<br><br>Setting to `false` is only supported in C#.<br><br>If set to `true`, the trigger completes the message automatically if the function execution completes successfully, and abandons the message otherwise.<br><br>When set to `false`, you are responsible for calling [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) methods to complete, abandon, or deadletter the message. If an exception is thrown (and none of the `MessageReceiver` methods are called), then the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented and the lock is automatically renewed.<br><br>In non-C# functions, an exception in the function results in the runtime calling `abandonAsync` in the background. If no exception occurs, then `completeAsync` is called in the background. |
+|messageHandlerOptions.maxConcurrentCalls|16|The maximum number of concurrent calls to the callback that the message pump should initiate per scaled instance. By default, the Functions runtime processes multiple messages concurrently.|
+|sessionHandlerOptions.maxConcurrentSessions|2000|The maximum number of sessions that can be handled concurrently per scaled instance.|
+|batchOptions.maxMessageCount|1000| The maximum number of messages sent to the function when triggered. |
+|batchOptions.operationTimeout|00:01:00| A time span value expressed in `hh:mm:ss`. |
+|batchOptions.autoComplete|true| See the above description for `messageHandlerOptions.autoComplete`. |
### Additional settings for version 5.x+
azure-functions Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference.md
An identity-based connection for an Azure service accepts the following properti
| Service URI | Azure Blob<sup>1</sup>, Azure Queue | `<CONNECTION_NAME_PREFIX>__serviceUri` | The data plane URI of the service to which you are connecting. | | Fully Qualified Namespace | Event Hubs, Service Bus | `<CONNECTION_NAME_PREFIX>__fullyQualifiedNamespace` | The fully qualified Event Hubs and Service Bus namespace. | | Token Credential | (Optional) | `<CONNECTION_NAME_PREFIX>__credential` | Defines how a token should be obtained for the connection. Recommended only when specifying a user-assigned identity, when it should be set to "managedidentity". This is only valid when hosted in the Azure Functions service. |
-| Client ID | (Optional) | `<CONNECTION_NAME_PREFIX>__clientId` | When `credential` is set to "managedidentity", this property pecifies the user-assigned identity to be used when obtaining a token. The property accepts a client ID corresponding to a user-assigned identity assigned to the application. If not specified, the system-assigned identity will be used. This property is used differently in [local development scenarios](#local-development-with-identity-based-connections), when `credential` should not be set. |
+| Client ID | (Optional) | `<CONNECTION_NAME_PREFIX>__clientId` | When `credential` is set to "managedidentity", this property specifies the user-assigned identity to be used when obtaining a token. The property accepts a client ID corresponding to a user-assigned identity assigned to the application. If not specified, the system-assigned identity will be used. This property is used differently in [local development scenarios](#local-development-with-identity-based-connections), when `credential` should not be set. |
<sup>1</sup> Both blob and queue service URIs are required for Azure Blob.
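To make the naming convention concrete, the following hedged Python sketch simply prints which settings exist for a hypothetical connection prefix named `MyBlobConn`; the Functions host, not your code, is what actually consumes these settings.
```python
import os

# Hypothetical connection prefix used by, for example, a blob trigger whose
# "connection" property is set to "MyBlobConn".
PREFIX = "MyBlobConn"

for suffix in ("serviceUri", "fullyQualifiedNamespace", "credential", "clientId"):
    name = f"{PREFIX}__{suffix}"
    print(f"{name} = {os.environ.get(name, '<not set>')}")
```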
azure-functions Functions Test A Function https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-test-a-function.md
The content that follows is split into two different sections meant to target di
- [C# in Visual Studio with xUnit](#c-in-visual-studio) - [JavaScript in VS Code with Jest](#javascript-in-vs-code)
+- [Python using pytest](./functions-reference-python.md?tabs=application-level#unit-testing)
The sample repository is available on [GitHub](https://github.com/Azure-Samples/azure-functions-tests).
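For the Python case, a minimal pytest sketch might look like the following; the `HttpTrigger` module and the greeting assertion are hypothetical stand-ins for the function under test.
```python
import azure.functions as func

from HttpTrigger import main  # hypothetical module containing the function under test


def test_http_trigger_greets_by_name():
    # Construct a request the way the Python worker would deliver it.
    req = func.HttpRequest(
        method="GET",
        url="/api/HttpTrigger",
        body=None,
        params={"name": "Test"},
    )

    resp = main(req)

    assert resp.status_code == 200
    assert "Test" in resp.get_body().decode()
```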
Now that you've learned how to write automated tests for your functions, continu
- [Manually run a non HTTP-triggered function](./functions-manually-run-non-http.md) - [Azure Functions error handling](./functions-bindings-error-pages.md)-- [Azure Function Event Grid Trigger Local Debugging](./functions-debug-event-grid-trigger-local.md)
+- [Azure Function Event Grid Trigger Local Debugging](./functions-debug-event-grid-trigger-local.md)
azure-government Documentation Government Overview Itar https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-overview-itar.md
Title: Azure support for export controls
description: Customer guidance for Azure export control support Previously updated : 02/25/2021 Last updated : 07/01/2021
**Disclaimer:** Customers are wholly responsible for ensuring their own compliance with all applicable laws and regulations. Information provided in this article does not constitute legal advice, and customers should consult their legal advisors for any questions regarding regulatory compliance.
-To help Azure customers navigate export control rules, Microsoft has published the [Microsoft Azure Export Controls](https://aka.ms/Azure-Export-Paper) whitepaper. It describes US export controls particularly as they apply to software and technical data, reviews potential sources of export control risks, and offers specific guidance to help customers assess their obligations under these controls.
+To help you navigate export control rules, Microsoft has published the [Microsoft Azure Export Controls](https://aka.ms/Azure-Export-Paper) whitepaper. It describes US export controls particularly as they apply to software and technical data, reviews potential sources of export control risks, and offers specific guidance to help you assess your obligations under these controls.
## Overview of export control laws
-Export related definitions vary somewhat among various export control regulations. In simplified terms, an export often implies a transfer of restricted information, materials, equipment, software, etc., to a foreign person or foreign destination by any means. US export control policy is enforced through export control laws and regulations administered primarily by the Department of Commerce, Department of State, Department of Energy, Nuclear Regulatory Commission, and Department of Treasury. Respective agencies within each department are responsible for specific areas of export control based on their historical administration, as shown in Table 1.
+Export related definitions vary somewhat among various export control regulations. In simplified terms, an export often implies a transfer of restricted information, materials, equipment, software, and so on, to a foreign person or foreign destination by any means. US export control policy is enforced through export control laws and regulations administered primarily by the Department of Commerce, Department of State, Department of Energy, Nuclear Regulatory Commission, and Department of Treasury. Respective agencies within each department are responsible for specific areas of export control based on their historical administration, as shown in Table 1.
**Table 1.** US export control laws and regulations
This article contains a review of the current US export control regulations, con
The US Department of Commerce is responsible for enforcing the [Export Administration Regulations](https://www.bis.doc.gov/index.php/regulations/export-administration-regulations-ear) (EAR) through the [Bureau of Industry and Security](https://www.bis.doc.gov/) (BIS). According to BIS [definitions](https://www.bis.doc.gov/index.php/documents/regulation-docs/412-part-734-scope-of-the-export-administration-regulations/file), export is the transfer of protected technology or information to a foreign destination or release of protected technology or information to a foreign person in the United States (also known as deemed export). Items subject to the EAR can be found on the [Commerce Control List](https://www.bis.doc.gov/index.php/regulations/commerce-control-list-ccl) (CCL), and each item has a unique [Export Control Classification Number](https://www.bis.doc.gov/index.php/licensing/commerce-control-list-classification/export-control-classification-number-eccn) (ECCN) assigned. Items not listed on the CCL are designated as EAR99 and most EAR99 commercial products do not require a license to be exported. However, depending on the destination, end user, or end use of the item, even an EAR99 item may require a BIS export license.
-The EAR is applicable to dual-use items that have both commercial and military applications and to items with purely commercial application. The BIS has provided guidance that cloud service providers (CSP) are not exporters of customers' data due to the customers' use of cloud services. Moreover, in the [final rule](https://www.federalregister.gov/documents/2016/06/03/2016-12734/revisions-to-definitions-in-the-export-administration-regulations) published on 3 June 2016, BIS clarified that EAR licensing requirements would not apply if the transmission and storage of unclassified technical data and software were encrypted end-to-end using Federal Information Processing Standard (FIPS) 140-2 validated cryptographic modules and not intentionally stored in a military-embargoed country (that is, Country Group D:5 as described in [Supplement No. 1 to Part 740](https://ecfr.io/Title-15/pt15.2.740#ap15.2.740_121.1) of the EAR) or in the Russian Federation. The US Department of Commerce has made it clear that, when data or software is uploaded to the cloud, the customer, not the cloud provider, is the "exporter" who has the responsibility to ensure that transfers, storage, and access to that data or software complies with the EAR.
+The EAR is applicable to dual-use items that have both commercial and military applications and to items with purely commercial application. The BIS has provided guidance that cloud service providers (CSP) are not exporters of customers' data due to the customers' use of cloud services. Moreover, in the [final rule](https://www.federalregister.gov/documents/2016/06/03/2016-12734/revisions-to-definitions-in-the-export-administration-regulations) published on 3 June 2016, BIS clarified that EAR licensing requirements would not apply if the transmission and storage of unclassified technical data and software were encrypted end-to-end using Federal Information Processing Standard (FIPS) 140 validated cryptographic modules and not intentionally stored in a military-embargoed country (that is, Country Group D:5 as described in [Supplement No. 1 to Part 740](https://ecfr.io/Title-15/pt15.2.740#ap15.2.740_121.1) of the EAR) or in the Russian Federation. The US Department of Commerce has made it clear that, when data or software is uploaded to the cloud, the customer, not the cloud provider, is the "exporter" who has the responsibility to ensure that transfers, storage, and access to that data or software comply with the EAR.
-Both Azure and Azure Government can help customers subject to the EAR meet their compliance requirements. Except for the Azure region in Hong Kong SAR, Azure and Azure Government datacenters are not located in proscribed countries or in the Russian Federation. Azure and Azure Government rely on FIPS 140-2 validated cryptographic modules in the underlying operating system, and provide customers with a [wide range of options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md), which can store encryption keys in FIPS 140-2 validated hardware security modules (HSMs) under customer control - known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Keys generated inside the Azure Key Vault HSMs are not exportable – there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. Moreover, Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract customer keys.
+Both Azure and Azure Government can help you meet your EAR compliance requirements. Except for the Azure region in Hong Kong SAR, Azure and Azure Government datacenters are not located in proscribed countries or in the Russian Federation. Azure and Azure Government rely on FIPS 140 validated cryptographic modules in the underlying operating system, and provide you with a [wide range of options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md), which can store encryption keys in FIPS 140 validated hardware security modules (HSMs) under your control - known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Keys generated inside the Azure Key Vault HSMs are not exportable – there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. Moreover, Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract your keys.
-Customers are responsible for choosing Azure or Azure Government regions for deploying their applications and data. Moreover, customers are responsible for designing their applications to apply end-to-end data encryption that meets EAR requirements. Microsoft does not inspect or approve customer applications deployed on Azure or Azure Government.
+You are responsible for choosing Azure or Azure Government regions for deploying your applications and data. Moreover, you are responsible for designing your applications to apply end-to-end data encryption that meets EAR requirements. Microsoft does not inspect, approve, or monitor your applications deployed on Azure or Azure Government.
-Azure Government provides an extra layer of protection to customers through contractual commitments regarding storage of customer data in the United States and limiting potential access to systems processing customer data to [screened US persons](./documentation-government-plan-security.md#screening).
+Azure Government provides you with an extra layer of protection through contractual commitments regarding storage of your data in the United States and limiting potential access to systems processing your data to [screened US persons](./documentation-government-plan-security.md#screening).
## ITAR The US Department of State has export control authority over defense articles, services, and related technologies under the [International Traffic in Arms Regulations](https://www.ecfr.gov/cgi-bin/text-idx?SID=8870638858a2595a32dedceb661c482c&mc=true&tpl=/ecfrbrowse/Title22/22CIsubchapM.tpl) (ITAR) managed by the [Directorate of Defense Trade Controls](http://www.pmddtc.state.gov/) (DDTC). Items under ITAR protection are documented on the [United States Munitions List](https://www.ecfr.gov/cgi-bin/text-idx?rgn=div5&node=22:1.0.1.13.58) (USML). Customers who are manufacturers, exporters, and brokers of defense articles, services, and related technologies as defined on the USML must be registered with DDTC, must understand and abide by ITAR, and must self-certify that they operate in accordance with ITAR.
-DDTC [revised the ITAR rules](https://www.federalregister.gov/documents/2019/12/26/2019-27438/international-traffic-in-arms-regulations-creation-of-definition-of-activities-that-are-not-exports) effective 25 March 2020 to align them more closely with the EAR. These ITAR revisions introduced an end-to-end data encryption carve-out that incorporated many of the same terms that the Commerce Department adopted in 2016 for the EAR. Specifically, the revised ITAR rules state that activities that do not constitute exports, re-exports, re-transfers, or temporary imports include (among other activities) the sending, taking, or storing of technical data that is 1) unclassified, 2) secured using end-to-end encryption, 3) secured using FIPS 140-2 compliant cryptographic modules as prescribed in the regulations, 4) not intentionally sent to a person in or stored in a [country proscribed in § 126.1](https://ecfr.io/Title-22/pt22.1.126#se22.1.126_11) or the Russian Federation, and 5) not sent from a country proscribed in § 126.1 or the Russian Federation. Moreover, DDTC clarified that data in-transit via the Internet is not deemed to be stored. End-to-end encryption implies the data is always kept encrypted between the originator and intended recipient, and the means of decryption is not provided to any third party.
+DDTC [revised the ITAR rules](https://www.federalregister.gov/documents/2019/12/26/2019-27438/international-traffic-in-arms-regulations-creation-of-definition-of-activities-that-are-not-exports) effective 25 March 2020 to align them more closely with the EAR. These ITAR revisions introduced an end-to-end data encryption carve-out that incorporated many of the same terms that the Commerce Department adopted in 2016 for the EAR. Specifically, the revised ITAR rules state that activities that do not constitute exports, re-exports, re-transfers, or temporary imports include (among other activities) the sending, taking, or storing of technical data that is 1) unclassified, 2) secured using end-to-end encryption, 3) secured using FIPS 140 compliant cryptographic modules as prescribed in the regulations, 4) not intentionally sent to a person in or stored in a [country proscribed in § 126.1](https://ecfr.io/Title-22/pt22.1.126#se22.1.126_11) or the Russian Federation, and 5) not sent from a country proscribed in § 126.1 or the Russian Federation. Moreover, DDTC clarified that data in-transit via the Internet is not deemed to be stored. End-to-end encryption implies the data is always kept encrypted between the originator and intended recipient, and the means of decryption is not provided to any third party.
-There is no ITAR compliance certification; however, both Azure and Azure Government can help customers subject to ITAR meet their compliance obligations. Except for the Azure region in Hong Kong SAR, Azure and Azure Government datacenters are not located in proscribed countries or in the Russian Federation. Azure and Azure Government rely on FIPS 140-2 validated cryptographic modules in the underlying operating system, and provide customers with a [wide range of options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md), which can store encryption keys in FIPS 140-2 validated hardware security modules (HSMs) under customer control - known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Keys generated inside the Azure Key Vault HSMs are not exportable – there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. Moreover, Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract customer keys.
+There is no ITAR compliance certification; however, both Azure and Azure Government can help you meet your ITAR compliance obligations. Except for the Azure region in Hong Kong SAR, Azure and Azure Government datacenters are not located in proscribed countries or in the Russian Federation. Azure and Azure Government rely on FIPS 140 validated cryptographic modules in the underlying operating system, and provide you with a [wide range of options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md), which can store encryption keys in FIPS 140 validated hardware security modules (HSMs) under your control - known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Keys generated inside the Azure Key Vault HSMs are not exportable – there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. Moreover, Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract your keys.
-Customers are responsible for choosing Azure or Azure Government regions for deploying their applications and data. Moreover, customers are responsible for designing their applications to apply end-to-end data encryption that meets ITAR requirements. Microsoft does not inspect or approve customer applications deployed on Azure or Azure Government.
+You are responsible for choosing Azure or Azure Government regions for deploying your applications and data. Moreover, you are responsible for designing your applications to apply end-to-end data encryption that meets ITAR requirements. Microsoft does not inspect, approve, or monitor your applications deployed on Azure or Azure Government.
-Azure Government provides an extra layer of protection to customers through contractual commitments regarding storage of customer data in the United States and limiting potential access to systems processing customer data to [screened US persons](./documentation-government-plan-security.md#screening).
+Azure Government provides you with an extra layer of protection through contractual commitments regarding storage of your data in the United States and limiting potential access to systems processing your data to [screened US persons](./documentation-government-plan-security.md#screening).
## DoE 10 CFR Part 810 The US Department of Energy (DoE) export control regulation [10 CFR Part 810](http://www.gpo.gov/fdsys/pkg/FR-2015-02-23/pdf/2015-03479.pdf) implements section 57b.(2) of the [Atomic Energy Act of 1954](https://www.nrc.gov/docs/ML1327/ML13274A489.pdf) (AEA), as amended by section 302 of the [Nuclear Nonproliferation Act of 1978](http://www.nrc.gov/docs/ML1327/ML13274A492.pdf#page=19) (NNPA). It is administered by the [National Nuclear Security Administration](https://www.energy.gov/nnsa/national-nuclear-security-administration) (NNSA). The revised Part 810 (final rule) became effective on 25 March 2015, and, among other things, it controls the export of unclassified nuclear technology and assistance. It enables peaceful nuclear trade by helping to assure that nuclear technologies exported from the United States will not be used for non-peaceful purposes. Paragraph 810.7 (b) states that specific DoE authorization is required for providing or transferring sensitive nuclear technology to any foreign entity.
-**Azure Government can accommodate customers subject to DoE 10 CFR Part 810** export control requirements because it is designed to meet specific controls that restrict access to information and systems to [US persons](./documentation-government-plan-security.md#screening) among Azure operations personnel. Customers deploying data to Azure Government are responsible for their own security classification process. For data subject to DoE export controls, the classification system is augmented by the [Unclassified Controlled Nuclear Information](https://www.energy.gov/sites/prod/files/hss/Classification/docs/UCNI-Tri-fold.pdf) (UCNI) controls established by Section 148 of the AEA.
+**Azure Government can help you meet your DoE 10 CFR Part 810 export control requirements** because it is designed to implement specific controls that restrict access to information and systems to [US persons](./documentation-government-plan-security.md#screening) among Azure operations personnel. If you are deploying data to Azure Government, you are responsible for your own security classification process. For data subject to DoE export controls, the classification system is augmented by the [Unclassified Controlled Nuclear Information](https://www.energy.gov/sites/prod/files/hss/Classification/docs/UCNI-Tri-fold.pdf) (UCNI) controls established by Section 148 of the AEA.
## NRC 10 CFR Part 110
The [Office of Foreign Assets Control](https://www.treasury.gov/about/organizati
The OFAC defines prohibited transactions as trade or financial transactions and other dealings in which US persons may not engage unless authorized by OFAC or expressly exempted by statute. For web-based interactions, see [FAQ No. 73](https://home.treasury.gov/policy-issues/financial-sanctions/faqs/73) for general guidance released by OFAC, which specifies for example that &#8220;Firms that facilitate or engage in e-commerce should do their best to know their customers directly.&#8221;
-As stated in the Microsoft Online Services Terms [Data Protection Addendum](https://aka.ms/dpa), &#8220;Microsoft does not control or limit the regions from which customer or customer's end users may access or move customer data.&#8221; For Microsoft online services, Microsoft conducts due diligence to prevent transactions with entities from OFAC embargoed countries, for example, a sanctions target is not allowed to provision Azure services. OFAC has not issued guidance (similar to the guidance provided by BIS for the Export Administration Regulations) that draws a distinction between cloud service providers and customers when it comes to deemed export. Therefore, it would be the **responsibility of Microsoft customers to exclude sanctions targets from online transactions** involving customer applications (including web sites) deployed on Azure. Azure does not block network traffic to customer sites. Even though OFAC mentions that customers can restrict access based in IP table ranges, they also acknowledge that this approach does not fully address an internet's firm compliance risks. Therefore, OFAC recommends that e-commerce firms should know their customers directly. Microsoft is not responsible for and does not have the means to know directly the end users that interact with applications deployed by customers on Azure.
+As stated in the Microsoft Online Services Terms [Data Protection Addendum](https://aka.ms/dpa) (DPA), &#8220;Microsoft does not control or limit the regions from which customer or customer's end users may access or move customer data.&#8221; For Microsoft online services, Microsoft conducts due diligence to prevent transactions with entities from OFAC embargoed countries, for example, a sanctions target is not allowed to provision Azure services. OFAC has not issued guidance (like the guidance provided by BIS for the EAR) that draws a distinction between cloud service providers and customers when it comes to deemed export. Therefore, it would be **your responsibility to exclude sanctions targets from online transactions** involving your applications (including web sites) deployed on Azure. Microsoft does not block network traffic to your web sites deployed on Azure. Even though OFAC mentions that customers can restrict access based on IP table ranges, they also acknowledge that this approach does not fully address an internet firm's compliance risks. Therefore, OFAC recommends that e-commerce firms should know their customers directly. Microsoft is not responsible for and does not have the means to know directly the end users that interact with your applications deployed on Azure.
-OFAC sanctions are in place to prevent &#8220;conducting business with a sanctions target&#8221;, that is, preventing transactions involving trade, payments, financial instruments, etc. OFAC sanctions are not about preventing a resident of a proscribed country from viewing a customer's public web site.
+OFAC sanctions are in place to prevent &#8220;conducting business with a sanctions target&#8221;, that is, preventing transactions involving trade, payments, financial instruments, and so on. OFAC sanctions are not intended to prevent a resident of a proscribed country from viewing a public web site.
## Managing export control requirements
-Customers should assess carefully how their use of Azure may implicate US export controls and determine whether any of the data they want to store or process in the cloud may be subject to export controls. Microsoft provides customers with contractual commitments, operational processes, and technical features to help them meet their export control obligations when using Azure. The following Azure features are available to customers to manage potential export control risks:
+You should assess carefully how your use of Azure may implicate US export controls and determine whether any of the data you want to store or process in the cloud may be subject to export controls. Microsoft provides you with contractual commitments, operational processes, and technical features to help you meet your export control obligations when using Azure. The following Azure features are available to help you manage potential export control risks:
-- **Ability to control data location** - Customers have visibility as to where their [data is stored](https://azure.microsoft.com/global-infrastructure/data-residency/), and robust tools to restrict data storage to a single geography, region, or country. For example, a customer may therefore ensure that data is stored in the United States or their country of choice and minimize transfer of controlled technology/technical data outside the target country. Customer data is not *intentionally stored* in a non-conforming location, consistent with the EAR and ITAR rules.-- **End-to-end encryption** - Implies the data is always kept encrypted between the originator and intended recipient, and the means of decryption is not provided to any third party. Azure relies on FIPS 140-2 validated cryptographic modules in the underlying operating system, and provides customers with a [wide range of options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md), which can store encryption keys in FIPS 140-2 validated hardware security modules (HSMs) under customer control ([customer-managed keys](../security/fundamentals/encryption-models.md), CMK). Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract customer keys.-- **Control over access to data** - Customers can know and control who can access their data and on what terms. Microsoft technical support personnel do not need and do not have default access to customer data. For those rare instances where resolving customer support requests requires elevated access to customer data, [Customer Lockbox for Azure](../security/fundamentals/customer-lockbox-overview.md) puts customers in charge of approving or denying customer data access requests.
+- **Ability to control data location** - You have visibility as to where your [data is stored](https://azure.microsoft.com/global-infrastructure/data-residency/), and robust tools to restrict data storage to a single geography, region, or country. For example, you may therefore ensure that data is stored in the United States or your country of choice and minimize transfer of controlled technology/technical data outside the target country. Your data is not *intentionally stored* in a non-conforming location, consistent with the EAR and ITAR rules.
+- **End-to-end encryption** - Implies the data is always kept encrypted between the originator and intended recipient, and the means of decryption is not provided to any third party. Azure relies on FIPS 140 validated cryptographic modules in the underlying operating system, and provides you with a [wide range of options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md), which can store encryption keys in FIPS 140 validated hardware security modules (HSMs) under your control ([customer-managed keys](../security/fundamentals/encryption-models.md), CMK). Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract your keys.
+- **Control over access to data** - You can know and control who can access your data and on what terms. Microsoft technical support personnel do not need and do not have default access to your data. For those rare instances where resolving your support requests requires elevated access to your data, [Customer Lockbox for Azure](../security/fundamentals/customer-lockbox-overview.md) puts you in charge of approving or denying data access requests.
- **Tools and protocols to prevent unauthorized deemed export/re-export** - Apart from the EAR and ITAR *end-to-end encryption* safe harbor for physical storage locations, the use of encryption also helps protect against a potential deemed export (or deemed re-export), because even if a non-US person has access to the encrypted data, nothing is revealed to a non-US person who cannot read or understand the data while it is encrypted, and thus there is no release of any controlled data. However, ITAR requires some authorization before granting foreign persons access to information that would enable them to decrypt ITAR technical data. Azure offers a wide range of encryption capabilities and solutions, flexibility to choose among encryption options, and robust tools for managing encryption. ## Location of customer data
-Microsoft provides [strong customer commitments](https://www.microsoft.com/trust-center/privacy/data-location) regarding [cloud services data residency and transfer policies](https://azure.microsoft.com/global-infrastructure/data-residency/). Most Azure services are deployed regionally and enable the customer to specify the region into which the service will be deployed, for example, United States. This commitment helps ensure that [customer data](https://www.microsoft.com/trust-center/privacy/customer-data-definitions) stored in a US region will remain in the United States and will not be moved to another region outside the United States.
+Microsoft provides [strong customer commitments](https://www.microsoft.com/trust-center/privacy/data-location) regarding [cloud services data residency and transfer policies](https://azure.microsoft.com/global-infrastructure/data-residency/). Most Azure services are deployed regionally and enable you to specify the region into which the service will be deployed, for example, United States. This commitment helps ensure that [customer data](https://www.microsoft.com/trust-center/privacy/customer-data-definitions) stored in a US region will remain in the United States and will not be moved to another region outside the United States.
## Data encryption
-Azure has extensive support to safeguard customer data using [data encryption](../security/fundamentals/encryption-overview.md), including various encryption models:
+Azure has extensive support to safeguard your data using [data encryption](../security/fundamentals/encryption-overview.md), including various encryption models:
- Server-side encryption that uses service-managed keys, customer-managed keys (CMK) in Azure, or CMK in customer-controlled hardware.-- Client-side encryption that enables customers to manage and store keys on-premises or in another secure location.
+- Client-side encryption that enables you to manage and store keys on-premises or in another secure location.
-Data encryption provides isolation assurances that are tied directly to encryption key access. Since Azure uses strong ciphers for data encryption, only entities with access to encryption keys can have access to data. Deleting or revoking encryption keys renders the corresponding data inaccessible.
+Data encryption provides isolation assurances that are tied directly to encryption key access. Since Azure uses strong ciphers for data encryption, only entities with access to encryption keys can have access to data. Revoking or deleting encryption keys renders the corresponding data inaccessible.
-### FIPS 140-2 validated cryptography
+### FIPS 140 validated cryptography
-The [Federal Information Processing Standard (FIPS) 140-2](https://csrc.nist.gov/publications/detail/fips/140/2/final) is a US government standard that defines minimum security requirements for cryptographic modules in information technology products. The FIPS 140-2 security requirements cover 11 areas related to the design and implementation of a cryptographic module. The US National Institute of Standards and Technology (NIST) Information Technology Laboratory operates a program that validates the FIPS approved cryptographic algorithms in the module.
+The [Federal Information Processing Standard (FIPS) 140](https://csrc.nist.gov/publications/detail/fips/140/2/final) is a US government standard that defines minimum security requirements for cryptographic modules in information technology products. The current version of the standard, FIPS 140-2, has security requirements covering 11 areas related to the design and implementation of a cryptographic module. Microsoft maintains an active commitment to meeting the [FIPS 140 requirements](/azure/compliance/offerings/offering-fips-140-2), having validated cryptographic modules since the standard's inception in 2001. Microsoft validates its cryptographic modules under the US National Institute of Standards and Technology (NIST) [Cryptographic Module Validation Program](https://csrc.nist.gov/Projects/cryptographic-module-validation-program) (CMVP). Multiple Microsoft products, including many cloud services, use these cryptographic modules.
-Microsoft maintains an active commitment to meeting the [FIPS 140-2 requirements](/azure/compliance/offerings/offering-fips-140-2), having validated cryptographic modules since the standard's inception in 2001. Microsoft validates its cryptographic modules under the NIST [Cryptographic Module Validation Program](https://csrc.nist.gov/Projects/cryptographic-module-validation-program) (CMVP). Multiple Microsoft products, including many cloud services, use these cryptographic modules.
-
-While the current CMVP FIPS 140-2 implementation guidance precludes a FIPS 140-2 validation for a cloud service, cloud service providers can obtain and operate FIPS 140-2 validated cryptographic modules for the computing elements that comprise their cloud services. Azure is built with a combination of hardware, commercially available operating systems (Linux and Windows), and Azure-specific version of Windows. Through the Microsoft [Security Development Lifecycle](https://www.microsoft.com/securityengineering/sdl/) (SDL), all Azure services use FIPS 140-2 approved algorithms for data security because the operating system uses FIPS 140-2 approved algorithms while operating at a hyper scale cloud. The corresponding crypto modules are FIPS 140-2 validated as part of the Microsoft [Windows FIPS validation program](/windows/security/threat-protection/fips-140-validation#modules-used-by-windows-server). Moreover, Azure customers can store their own cryptographic keys and other secrets in FIPS 140-2 validated hardware security modules (HSMs).
+While the current CMVP FIPS 140 implementation guidance precludes a FIPS 140 validation for a cloud service, cloud service providers can obtain and operate FIPS 140 validated cryptographic modules for the computing elements that comprise their cloud services. Azure is built with a combination of hardware, commercially available operating systems (Linux and Windows), and Azure-specific version of Windows. Through the Microsoft [Security Development Lifecycle](https://www.microsoft.com/securityengineering/sdl/) (SDL), all Azure services use FIPS 140 approved algorithms for data security because the operating system uses FIPS 140 approved algorithms while operating at a hyper scale cloud. The corresponding crypto modules are FIPS 140 validated as part of the Microsoft [Windows FIPS validation program](/windows/security/threat-protection/fips-140-validation#modules-used-by-windows-server). Moreover, you can store your own cryptographic keys and other secrets in FIPS 140 validated hardware security modules (HSMs).
### Encryption key management
-Proper protection and management of encryption keys is essential for data security. [Azure Key Vault](../key-vault/index.yml) is a cloud service for securely storing and managing secrets. Key Vault enables customers to store their encryption keys in hardware security modules (HSMs) that are FIPS 140-2 validated. For more information, see [Data encryption key management](./azure-secure-isolation-guidance.md#data-encryption-key-management).
+Proper protection and management of encryption keys is essential for data security. [Azure Key Vault](../key-vault/index.yml) is a cloud service for securely storing and managing secrets. Key Vault enables you to store your encryption keys in hardware security modules (HSMs) that are FIPS 140 validated. For more information, see [Data encryption key management](./azure-secure-isolation-guidance.md#data-encryption-key-management).
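As a hedged illustration of key management with Key Vault, the sketch below creates an HSM-protected RSA key using the `azure-identity` and `azure-keyvault-keys` Python packages; the vault URL and key name are placeholders, and it assumes a Premium-tier vault and an identity with key-management permissions.
```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

# Placeholder vault URL; the identity running this code needs key permissions on the vault.
client = KeyClient(
    vault_url="https://contoso-vault.vault.azure.net",
    credential=DefaultAzureCredential(),
)

# hardware_protected=True asks Key Vault to create an HSM-backed (RSA-HSM) key,
# so the key material stays inside the HSM boundary.
key = client.create_rsa_key("export-data-cmk", size=2048, hardware_protected=True)
print(key.name, key.key_type)
```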
### Data encryption in transit
-Azure provides many options for [encrypting data in transit](../security/fundamentals/encryption-overview.md#encryption-of-data-in-transit). Data encryption in transit isolates customer network traffic from other traffic and helps protect data from interception. For more information, see [Data encryption in transit](./azure-secure-isolation-guidance.md#data-encryption-in-transit).
+Azure provides many options for [encrypting data in transit](../security/fundamentals/encryption-overview.md#encryption-of-data-in-transit). Data encryption in transit isolates your network traffic from other traffic and helps protect data from interception. For more information, see [Data encryption in transit](./azure-secure-isolation-guidance.md#data-encryption-in-transit).
### Data encryption at rest
-Azure provides extensive options for [encrypting data at rest](../security/fundamentals/encryption-atrest.md) to help customers safeguard their data and meet their compliance needs using both Microsoft-managed encryption keys and customer-managed encryption keys. This process relies on multiple encryption keys and services such as Azure Key Vault and Azure Active Directory to ensure secure key access and centralized key management. For more information about Azure Storage encryption and Azure Disk encryption, see [Data encryption at rest](./azure-secure-isolation-guidance.md#data-encryption-at-rest).
+Azure provides extensive options for [encrypting data at rest](../security/fundamentals/encryption-atrest.md) to help you safeguard your data and meet your compliance needs using both Microsoft-managed encryption keys and customer-managed encryption keys. This process relies on multiple encryption keys and services such as Azure Key Vault and Azure Active Directory to ensure secure key access and centralized key management. For more information about Azure Storage encryption and Azure Disk encryption, see [Data encryption at rest](./azure-secure-isolation-guidance.md#data-encryption-at-rest).
-Azure SQL Database provides [transparent data encryption](../azure-sql/database/transparent-data-encryption-tde-overview.md) (TDE) at rest by [default](https://azure.microsoft.com/updates/newly-created-azure-sql-databases-encrypted-by-default/). TDE performs real-time encryption and decryption operations on the data and log files. Database Encryption Key (DEK) is a symmetric key stored in the database boot record for availability during recovery. It is secured via a certificate stored in the master database of the server or an asymmetric key called TDE Protector stored under customer control in [Azure Key Vault](../key-vault/general/security-features.md), which is Azure's cloud-based external key management system. Key Vault supports [bring your own key](../azure-sql/database/transparent-data-encryption-byok-overview.md) (BYOK), which enables customers to store TDE Protector in Key Vault and control key management tasks including key rotation, permissions, deleting keys, enabling auditing/reporting on all TDE Protectors, etc. The key can be generated by the Key Vault, imported, or [transferred to the Key Vault from an on-premises HSM device](../key-vault/keys/hsm-protected-keys.md). Customers can also use the [Always Encrypted](../azure-sql/database/always-encrypted-azure-key-vault-configure.md) feature of Azure SQL Database, which is designed specifically to help protect sensitive data by allowing clients to encrypt data inside client applications and [never reveal the encryption keys to the database engine](/sql/relational-databases/security/encryption/always-encrypted-database-engine). In this manner, Always Encrypted provides separation between those users who own the data (and can view it) and those users who manage the data (but should have no access).
+Azure SQL Database provides [transparent data encryption](../azure-sql/database/transparent-data-encryption-tde-overview.md) (TDE) at rest by [default](https://azure.microsoft.com/updates/newly-created-azure-sql-databases-encrypted-by-default/). TDE performs real-time encryption and decryption operations on the data and log files. Database Encryption Key (DEK) is a symmetric key stored in the database boot record for availability during recovery. It is secured via a certificate stored in the master database of the server or an asymmetric key called TDE Protector stored under your control in [Azure Key Vault](../key-vault/general/security-features.md). Key Vault supports [bring your own key](../azure-sql/database/transparent-data-encryption-byok-overview.md) (BYOK), which enables you to store the TDE Protector in Key Vault and control key management tasks including key rotation, permissions, deleting keys, enabling auditing/reporting on all TDE Protectors, and so on. The key can be generated by the Key Vault, imported, or [transferred to the Key Vault from an on-premises HSM device](../key-vault/keys/hsm-protected-keys.md). You can also use the [Always Encrypted](../azure-sql/database/always-encrypted-azure-key-vault-configure.md) feature of Azure SQL Database, which is designed specifically to help protect sensitive data by allowing you to encrypt data inside your applications and [never reveal the encryption keys to the database engine](/sql/relational-databases/security/encryption/always-encrypted-database-engine). In this manner, Always Encrypted provides separation between those users who own the data (and can view it) and those users who manage the data (but should have no access).
## Restrictions on insider access All Azure and Azure Government employees in the United States are subject to Microsoft background checks. For more information, see [Screening](./documentation-government-plan-security.md#screening).
-Insider threat is characterized as potential for providing back-door connections and cloud service provider (CSP) privileged administrator access to customer's systems and data. For more information on how Microsoft restricts insider access to customer data, see [Restrictions on insider access](./documentation-government-plan-security.md#restrictions-on-insider-access).
+Insider threat is characterized as potential for providing back-door connections and cloud service provider (CSP) privileged administrator access to your systems and data. For more information on how Microsoft restricts insider access to your data, see [Restrictions on insider access](./documentation-government-plan-security.md#restrictions-on-insider-access).
-## Customer monitoring of Azure resources
+## Monitoring your Azure resources
-Azure provides essential services that customers can use to gain in-depth insight into their provisioned Azure resources and get alerted about suspicious activity, including outside attacks aimed at their applications and data. For more information about these services, see [Customer monitoring of Azure resources](./documentation-government-plan-security.md#customer-monitoring-of-azure-resources).
+Azure provides essential services that you can use to gain in-depth insight into your provisioned Azure resources and get alerted about suspicious activity, including outside attacks aimed at your applications and data. For more information about these services, see [Customer monitoring of Azure resources](./documentation-government-plan-security.md#customer-monitoring-of-azure-resources).
## Conclusion
-Customers should carefully assess how their use of Azure may implicate US export controls and determine whether any of the data they want to store or process in the cloud may be subject to export controls. Microsoft Azure provides important technical features, operational processes, and contractual commitments to help customers manage export control risks. Where technical data subject to US export controls may be involved, Azure is configured to offer features that help mitigate the potential risk of customers inadvertently violating US export controls when accessing controlled technical data in Azure. With appropriate planning, customers can use Azure features and their own internal procedures to help ensure full compliance with US export controls when using the Azure platform.
+You should carefully assess how your use of Azure may implicate US export controls and determine whether any of the data you want to store or process in the cloud may be subject to export controls. Microsoft Azure provides important technical features, operational processes, and contractual commitments to help you manage export control risks. Where technical data subject to US export controls may be involved, Azure is configured to offer features that help mitigate the potential risk of inadvertently violating US export controls when you access controlled technical data in Azure. With appropriate planning, you can use Azure features and your own internal procedures to help ensure full compliance with US export controls when using the Azure platform.
## Next steps
-To help Azure customers navigate export control rules, Microsoft has published the [Microsoft Azure Export Controls](https://aka.ms/Azure-Export-Paper) whitepaper, which describes US export controls (particularly as they apply to software and technical data), reviews potential sources of export control risks, and offers specific guidance to help customers assess their obligations under these controls.
+To help you navigate export control rules, Microsoft has published the [Microsoft Azure Export Controls](https://aka.ms/Azure-Export-Paper) whitepaper, which describes US export controls (particularly as they apply to software and technical data), reviews potential sources of export control risks, and offers specific guidance to help you assess your obligations under these controls.
Learn more about:
azure-government Documentation Government Overview Wwps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-overview-wwps.md
Previously updated : 03/02/2021 Last updated : 06/29/2021
# Azure for secure worldwide public sector cloud adoption
This article addresses common data residency, security, and isolation concerns pertinent to worldwide public sector customers.
## Executive summary
-Microsoft Azure provides strong customer commitments regarding data residency and transfer policies. Most Azure services enable the customer to specify the deployment region. For those services, Microsoft will not store customer data outside the customer specified geography. Customers can use extensive and robust data encryption options to help safeguard their data in Azure and control who can access it.
+Microsoft Azure provides strong customer commitments regarding data residency and transfer policies. Most Azure services enable you to specify the deployment region. For those services, Microsoft will not store your data outside your specified geography. You can use extensive and robust data encryption options to help safeguard your data in Azure and control who can access it.
-Listed below are some of the options available to customers to safeguard their data in Azure:
+Listed below are some of the options available to you to safeguard your data in Azure:
-- Customers can choose to store their most sensitive customer content in services that store customer data at rest in Geo.
-- Customers can obtain further protection by encrypting data with their own key using Azure Key Vault.
-- While customers cannot control the precise network path for data in transit, data encryption in transit helps protect data from interception.
-- Azure is a 24x7 globally operated service; however, support and troubleshooting rarely require access to customer data.
-- Customers who want added control for support and troubleshooting can use Customer Lockbox for Azure to approve or deny access to their data.
-- Microsoft will notify customers of any breach of customer or personal data within 72 hours of incident declaration.
-- Customers can monitor potential threats and respond to incidents on their own using Azure Security Center.
+- You can choose to store your most sensitive content in services that store data at rest within your chosen Geography.
+- You can obtain further protection by encrypting data with your own key using Azure Key Vault.
+- While you cannot control the precise network path for data in transit, data encryption in transit helps protect data from interception.
+- Azure is a 24x7 globally operated service; however, support and troubleshooting rarely require access to your data.
+- If you want extra control for support and troubleshooting scenarios, you can use Customer Lockbox for Azure to approve or deny access to your data.
+- Microsoft will notify you of any breach of your data (customer or personal) within 72 hours of incident declaration.
+- You can monitor potential threats and respond to incidents on your own using Azure Security Center.
-Using Azure data protection technologies and intelligent edge capabilities from the Azure Stack portfolio of products, customers can process confidential and secret data in secure isolated infrastructure within the public multi-tenant cloud or top secret data on premises and at the edge under the customer's full operational control.
+Using Azure data protection technologies and intelligent edge capabilities from the Azure Stack portfolio of products, you can process confidential and secret data in secure isolated infrastructure within the public multi-tenant cloud or top secret data on premises and at the edge under your full operational control.
## Introduction
Governments around the world are in the process of a digital transformation, actively investigating solutions and selecting architectures that will help them transition many of their workloads to the cloud. There are many drivers behind the digital transformation, including the need to engage citizens, empower employees, transform government services, and optimize government operations. Governments across the world are also looking to improve their cybersecurity posture to secure their assets and counter the evolving threat landscape.
-For governments and the public sector industry worldwide, Microsoft provides [Azure](https://azure.microsoft.com/) – a public multi-tenant cloud services platform – that government agencies can use to deploy various solutions. A multi-tenant cloud services platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses [logical isolation](./azure-secure-isolation-guidance.md) to segregate each customer's applications and data. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping prevent customers from accessing one another's data or applications.
+For governments and the public sector industry worldwide, Microsoft provides [Azure](https://azure.microsoft.com/) – a public multi-tenant cloud services platform – that you can use to deploy various solutions. A multi-tenant cloud services platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses [logical isolation](./azure-secure-isolation-guidance.md) to segregate each customer's applications and data. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping prevent other customers from accessing your data or applications.
A hyperscale public cloud provides resiliency in time of natural disaster and warfare. The cloud provides capacity for failover redundancy and empowers sovereign nations with flexibility regarding global resiliency planning. Hyperscale public cloud also offers a feature-rich environment incorporating the latest cloud innovations such as artificial intelligence, machine learning, IoT services, intelligent edge, and many more to help government customers increase efficiency and unlock insights into their operations and performance.
-Using Azure's public cloud capabilities, customers benefit from rapid feature growth, resiliency, and the cost-effective operation of the hyperscale cloud while still obtaining the levels of isolation, security, and confidence required to handle workloads across a broad spectrum of data classifications, including unclassified and classified data. Using Azure data protection technologies and intelligent edge capabilities from the [Azure Stack](https://azure.microsoft.com/overview/azure-stack/) portfolio of products, customers can process confidential and secret data in secure isolated infrastructure within the public multi-tenant cloud or top secret data on-premises and at the edge under the customer's full operational control.
+Using Azure's public cloud capabilities, you benefit from rapid feature growth, resiliency, and the cost-effective operation of the hyperscale cloud while still obtaining the levels of isolation, security, and confidence required to handle workloads across a broad spectrum of data classifications, including unclassified and classified data. Using Azure data protection technologies and intelligent edge capabilities from the [Azure Stack](https://azure.microsoft.com/overview/azure-stack/) portfolio of products, you can process confidential and secret data in secure isolated infrastructure within the public multi-tenant cloud or top secret data on-premises and at the edge under your full operational control.
-This article addresses common data residency, security, and isolation concerns pertinent to worldwide public sector customers. It also explores technologies available in Azure to help safeguard unclassified, confidential, and secret workloads in the public multi-tenant cloud in combination with Azure Stack products deployed on-premises and at the edge for fully disconnected scenarios involving top secret data. Given that unclassified workloads comprise most scenarios involved in worldwide public sector digital transformation, Microsoft recommends that customers start their cloud journey with unclassified workloads and then progress to classified workloads of increasing data sensitivity.
+This article addresses common data residency, security, and isolation concerns pertinent to worldwide public sector customers. It also explores technologies available in Azure to help safeguard unclassified, confidential, and secret workloads in the public multi-tenant cloud in combination with Azure Stack products deployed on-premises and at the edge for fully disconnected scenarios involving top secret data. Given that unclassified workloads comprise most scenarios involved in worldwide public sector digital transformation, Microsoft recommends that you start your cloud journey with unclassified workloads and then progress to classified workloads of increasing data sensitivity.
## Data residency
Established privacy regulations are silent on **data residency and data location**, and permit data transfers in accordance with approved mechanisms such as the EU Standard Contractual Clauses (also known as EU Model Clauses). Microsoft commits contractually in the Online Services Terms [Data Protection Addendum](https://aka.ms/DPA) (DPA) that all potential transfers of customer data out of the EU, European Economic Area (EEA), and Switzerland shall be governed by the EU Model Clauses. Microsoft will abide by the requirements of the EEA and Swiss data protection laws regarding the collection, use, transfer, retention, and other processing of personal data from the EEA and Switzerland. All transfers of personal data are subject to appropriate safeguards and documentation requirements. However, many customers considering cloud adoption are seeking assurances about customer and personal data being kept within the geographic boundaries corresponding to customer operations or location of customer's end users.
-**Data sovereignty** implies data residency; however, it also introduces rules and requirements that define who has control over and access to customer data stored in the cloud. In many cases, data sovereignty mandates that customer data be subject to the laws and legal jurisdiction of the country or region in which data resides. These laws can have direct implications on data access even for platform maintenance or customer-initiated support requests. Customers can use Azure public multi-tenant cloud in combination with Azure Stack products for on-premises and edge solutions to meet their data sovereignty requirements, as described later in this article. These other products can be deployed to put customers solely in control of their data, including storage, processing, transmission, and remote access.
+**Data sovereignty** implies data residency; however, it also introduces rules and requirements that define who has control over and access to customer data stored in the cloud. In many cases, data sovereignty mandates that customer data be subject to the laws and legal jurisdiction of the country or region in which data resides. These laws can have direct implications on data access even for platform maintenance or customer-initiated support requests. You can use Azure public multi-tenant cloud in combination with Azure Stack products for on-premises and edge solutions to meet your data sovereignty requirements, as described later in this article. These other products can be deployed to put you solely in control of your data, including storage, processing, transmission, and remote access.
Among several [data categories and definitions](https://www.microsoft.com/trust-center/privacy/customer-data-definitions) that Microsoft established for cloud services, the following four categories are discussed in this article:
-- **Customer data** is all data that customers provide to Microsoft to manage on customer's behalf through customer's use of Microsoft online services.
-- **Customer content** is a subset of customer data and includes, for example, the content stored in a customer's Azure Storage account.
-- **Personal data** means any information associated with a specific natural person, for example, names and contact information of customer's end users. However, personal data could also include data that is not customer data, such as user ID that Azure can generate and assign to each customer administrator – such personal data is considered pseudonymous because it cannot identify an individual on its own.
-- **Support and consulting data** mean all data provided by customer to Microsoft to obtain Support or Professional Services.
+- **Customer data** is all data that you provide to Microsoft to manage on your behalf through your use of Microsoft online services.
+- **Customer content** is a subset of customer data and includes, for example, the content stored in your Azure Storage account.
+- **Personal data** means any information associated with a specific natural person, for example, names and contact information of your end users. However, personal data could also include data that is not customer data, such as a user ID that Azure can generate and assign to your administrator – such personal data is considered pseudonymous because it cannot identify an individual on its own.
+- **Support and consulting data** means all data provided by you to Microsoft to obtain support or Professional Services.
-The following sections address key cloud implications for data residency and the fundamental principles guiding Microsoft's safeguarding of customer data at rest, in transit, and as part of customer-initiated support requests.
+The following sections address key cloud implications for data residency and the fundamental principles guiding Microsoft's safeguarding of your data at rest, in transit, and as part of support requests that you initiate.
### Data at rest
-Microsoft provides transparent insight into data location for all online services available to customers from "[Where your data is located](https://www.microsoft.com/trust-center/privacy/data-location)" page – expand *Cloud service data residency and transfer policies* section to reveal links for individual online services. **Customers who want to ensure their customer data is stored only in Geo should select from the many regional services that make this commitment.**
+Microsoft provides transparent insight into data location for all online services available to you from the [Where your data is located](https://www.microsoft.com/trust-center/privacy/data-location) page. Expand the *Cloud service data residency and transfer policies* section to reveal links for individual online services.
+
+**If you want to ensure your data is stored only in your chosen Geography, you should select from the many regional services that make this commitment.**
#### *Regional vs. non-regional services*
Microsoft Azure provides [strong customer commitments](https://azure.microsoft.com/global-infrastructure/data-residency/) regarding data residency and transfer policies:
-- **Data storage for regional
-- **Data storage for non-regional
+- **Data storage for regional
+- **Data storage for non-regional
-Customer data in an Azure Storage account is [always replicated](../storage/common/storage-redundancy.md) to help ensure durability and high availability. Azure Storage copies customer data to protect it from transient hardware failures, network or power outages, and even massive natural disasters. Customers can typically choose to replicate their data within the same data center, across availability zones within the same region, or across geographically separated regions. Specifically, when creating a storage account, customers can select one of the following redundancy options:
+Your data in an Azure Storage account is [always replicated](../storage/common/storage-redundancy.md) to help ensure durability and high availability. Azure Storage copies your data to protect it from transient hardware failures, network or power outages, and even massive natural disasters. You can typically choose to replicate your data within the same data center, across availability zones within the same region, or across geographically separated regions. Specifically, when creating a storage account, you can select one of the following redundancy options:
- [Locally redundant storage (LRS)](../storage/common/storage-redundancy.md#locally-redundant-storage)
- [Zone-redundant storage (ZRS)](../storage/common/storage-redundancy.md#zone-redundant-storage)
- [Geo-redundant storage (GRS)](../storage/common/storage-redundancy.md#geo-redundant-storage)
- [Geo-zone-redundant storage (GZRS)](../storage/common/storage-redundancy.md#geo-zone-redundant-storage)
-Data in an Azure Storage account is always replicated three times in the primary region. Azure Storage provides LRS and ZRS redundancy options for replicating data in the primary region. For applications requiring high availability, customers can choose geo-replication to a secondary region that is hundreds of kilometers away from the primary region. Azure Storage offers GRS and GZRS options for copying data to a secondary region. More options are available to customers for configuring read access (RA) to the secondary region (RA-GRS and RA-GZRS), as explained in [Read access to data in the secondary region](../storage/common/storage-redundancy.md#read-access-to-data-in-the-secondary-region).
+Data in an Azure Storage account is always replicated three times in the primary region. Azure Storage provides LRS and ZRS redundancy options for replicating data in the primary region. For applications requiring high availability, you can choose geo-replication to a secondary region that is hundreds of kilometers away from the primary region. Azure Storage offers GRS and GZRS options for copying data to a secondary region. More options are available to you for configuring read access (RA) to the secondary region (RA-GRS and RA-GZRS), as explained in [Read access to data in the secondary region](../storage/common/storage-redundancy.md#read-access-to-data-in-the-secondary-region).
-Azure Storage redundancy options can have implications on data residency as Azure relies on [paired regions](../best-practices-availability-paired-regions.md) to deliver [geo-redundant storage](../storage/common/storage-redundancy.md#geo-redundant-storage) (GRS). For example, customers concerned about geo-replication across regions that span country boundaries, may want to choose LRS or ZRS to keep Azure Storage data at rest within the geographic boundaries of the country in which the primary region is located. Similarly, [geo replication for Azure SQL Database](../azure-sql/database/active-geo-replication-overview.md) can be obtained by configuring asynchronous replication of transactions to any region in the world, although it is recommended that paired regions be used for this purpose as well. If customers need to keep relational data inside the geographic boundaries of their country/region, they should not configure Azure SQL Database asynchronous replication to a region outside that country.
+Azure Storage redundancy options can have implications on data residency as Azure relies on [paired regions](../best-practices-availability-paired-regions.md) to deliver [geo-redundant storage](../storage/common/storage-redundancy.md#geo-redundant-storage) (GRS). For example, if you are concerned about geo-replication across regions that span country boundaries, you may want to choose LRS or ZRS to keep Azure Storage data at rest within the geographic boundaries of the country in which the primary region is located. Similarly, [geo replication for Azure SQL Database](../azure-sql/database/active-geo-replication-overview.md) can be obtained by configuring asynchronous replication of transactions to any region in the world, although it is recommended that paired regions be used for this purpose as well. If you need to keep relational data inside the geographic boundaries of your country/region, you should not configure Azure SQL Database asynchronous replication to a region outside that country.
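As a sketch of the in-region option discussed above, the following Python fragment creates a zone-redundant (ZRS) storage account so that all replicas stay within the primary region's availability zones and no geo-replication to a paired region occurs. The subscription, resource group, account name, and region are assumed placeholders, not values from this article.

```python
# A minimal sketch using the azure-mgmt-storage management SDK (track 2).
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

subscription_id = "<subscription-id>"          # placeholder
client = StorageManagementClient(DefaultAzureCredential(), subscription_id)

poller = client.storage_accounts.begin_create(
    resource_group_name="rg-data-residency",   # hypothetical resource group
    account_name="contosoresidencydata",       # hypothetical, globally unique name
    parameters={
        "location": "westeurope",              # primary region of your choice
        "kind": "StorageV2",
        "sku": {"name": "Standard_ZRS"},       # ZRS keeps replicas in one region
    },
)
account = poller.result()
print(account.name, account.sku.name, account.primary_location)
```

Choosing `Standard_LRS` or `Standard_ZRS` here is what keeps storage replicas inside a single region; `Standard_GRS` or `Standard_GZRS` would replicate asynchronously to the paired region.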
-As described on the [data location page](https://azure.microsoft.com/global-infrastructure/data-residency/), most Azure **regional** services honor the data at rest commitment to ensure that customer data remains within the geographic boundary where the corresponding service is deployed. A handful of exceptions to this rule are noted on the data location page. Customers should review these exceptions to determine if the type of data stored outside their chosen deployment Geo meets their needs.
+As described on the [data location page](https://azure.microsoft.com/global-infrastructure/data-residency/), most Azure **regional** services honor the data at rest commitment to ensure that your data remains within the geographic boundary where the corresponding service is deployed. A handful of exceptions to this rule are noted on the data location page. You should review these exceptions to determine if the type of data stored outside your chosen deployment Geography meets your needs.
-**Non-regional** Azure services do not enable customers to specify the region where the services will be deployed. Some non-regional services do not store customer data at all but merely provide global routing functions such as Azure Traffic Manager or Azure DNS. Other non-regional services are intended for data caching at edge locations around the globe, such as the Content Delivery Network – such services are optional and customers should not use them for sensitive customer content they wish to keep in Geo. One non-regional service that warrants extra discussion is **Azure Active Directory**, which is discussed in the next section.
+**Non-regional** Azure services do not enable you to specify the region where the services will be deployed. Some non-regional services do not store your data at all but merely provide global routing functions such as Azure Traffic Manager or Azure DNS. Other non-regional services are intended for data caching at edge locations around the globe, such as the Content Delivery Network – such services are optional and you should not use them for sensitive customer content you wish to keep in your Geography. One non-regional service that warrants extra discussion is **Azure Active Directory**, which is discussed in the next section.
#### *Customer data in Azure Active Directory*
Azure Active Directory (Azure AD) is a non-regional service that may store identity data globally. The exceptions are:
- Europe, where Azure AD keeps most of the identity data within European datacenters except as noted in [Identity data storage for European customers in Azure Active Directory](../active-directory/fundamentals/active-directory-data-storage-eu.md).
- Australia and New Zealand, where identity data is stored in Australia except as noted in [Customer data storage for Australian and New Zealand customers in Azure Active Directory](../active-directory/fundamentals/active-directory-data-storage-australia-newzealand.md).
-Azure AD provides a [dashboard](https://go.microsoft.com/fwlink/?linkid=2092972) with transparent insight into data location for every Azure AD component service. Among other features, Azure AD is an identity management service that stores directory data for customer's Azure administrators, including user **personal data** categorized as **End User Identifiable Information (EUII)**, for example, names, email addresses, and so on. In Azure AD, customers can create User, Group, Device, Application, and other entities using various attribute types such as Integer, DateTime, Binary, String (limited to 256 characters), and so on. Azure AD is not intended to store customer content and it is not possible to store blobs, files, database records, and similar structures in Azure AD. Moreover, Azure AD is not intended to be an identity management service for customer's external end users – [Azure AD B2C](../active-directory-b2c/overview.md) should be used for that purpose.
+Azure AD provides a [dashboard](https://go.microsoft.com/fwlink/?linkid=2092972) with transparent insight into data location for every Azure AD component service. Among other features, Azure AD is an identity management service that stores directory data for your Azure administrators, including user **personal data** categorized as **End User Identifiable Information (EUII)**, for example, names, email addresses, and so on. In Azure AD, you can create User, Group, Device, Application, and other entities using various attribute types such as Integer, DateTime, Binary, String (limited to 256 characters), and so on. Azure AD is not intended to store your customer content and it is not possible to store blobs, files, database records, and similar structures in Azure AD. Moreover, Azure AD is not intended to be an identity management service for your external end users – [Azure AD B2C](../active-directory-b2c/overview.md) should be used for that purpose.
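As an illustration of creating such directory entities, the following sketch calls the Microsoft Graph REST API to add a user object. The tenant domain, display name, and password value are hypothetical placeholders, and the caller is assumed to have permission to create users.

```python
# A minimal sketch: create an Azure AD user through Microsoft Graph.
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://graph.microsoft.com/.default").token

new_user = {
    "accountEnabled": True,
    "displayName": "Test Administrator",
    "mailNickname": "testadmin",
    "userPrincipalName": "testadmin@contoso.onmicrosoft.com",   # hypothetical tenant
    "passwordProfile": {
        "forceChangePasswordNextSignIn": True,
        "password": "<initial-password>",
    },
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/users",
    headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    json=new_user,
)
resp.raise_for_status()
print(resp.json()["id"])   # directory object ID assigned by Azure AD
```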
Azure AD implements extensive **data protection features**, including tenant isolation and access control, data encryption in transit, secrets encryption and management, disk level encryption, advanced cryptographic algorithms used by various Azure AD components, data operational considerations for insider access, and more. Detailed information is available from a whitepaper [Active Directory Data Security Considerations](https://aka.ms/AADDataWhitePaper).
#### *Generating pseudonymous data for internal systems*
-Personal data is defined broadly. It includes not just customer data but also unique personal identifiers such as Probably Unique Identifier (PUID) and Globally Unique Identifier (GUID), the latter being often labeled as Universally Unique Identifier (UUID). These unique personal identifiers are *pseudonymous identifiers*. This type of information is generated automatically to track users who interact directly with Azure services, such as customer's administrators. For example, PUID is a random string generated programmatically via a combination of characters and digits to provide a high probability of uniqueness. Pseudonymous identifiers are stored in centralized internal Azure systems.
+Personal data is defined broadly. It includes not just customer data but also unique personal identifiers such as Probably Unique Identifier (PUID) and Globally Unique Identifier (GUID), the latter being often labeled as Universally Unique Identifier (UUID). These unique personal identifiers are *pseudonymous identifiers*. This type of information is generated automatically to track users who interact directly with Azure services, such as your administrators. For example, PUID is a random string generated programmatically via a combination of characters and digits to provide a high probability of uniqueness. Pseudonymous identifiers are stored in centralized internal Azure systems.
-Whereas EUII represents data that could be used on its own to identify a user (for example, user name, display name, user principal name, or even user-specific IP address), pseudonymous identifiers are considered pseudonymous because they cannot identify an individual on their own. Pseudonymous identifiers do not contain any information uploaded or created by the customer.
+Whereas EUII represents data that could be used on its own to identify a user (for example, user name, display name, user principal name, or even user-specific IP address), pseudonymous identifiers are considered pseudonymous because they cannot identify an individual on their own. Pseudonymous identifiers do not contain any information that you uploaded or created.
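For illustration only, the following fragment generates a GUID/UUID-style pseudonymous identifier. The value is random, carries no user-provided content, and cannot identify a person on its own.

```python
# Illustration of a pseudonymous identifier in the GUID/UUID style.
import uuid

pseudonymous_id = uuid.uuid4()
print(pseudonymous_id)   # e.g. a random value like '7b7f3a2e-...'; no personal content
```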
### Data in transit
-**While customers cannot control the precise network path for data in transit, data encryption in transit helps protect data from interception.**
+**While you cannot control the precise network path for data in transit, data encryption in transit helps protect data from interception.**
Data in transit applies to the following scenarios involving data traveling between:
-- Customer's end users and Azure service
-- Customer's on-premises datacenter and Azure region
+- Your end users and Azure service
+- Your on-premises datacenter and Azure region
- Microsoft datacenters as part of expected Azure service operation
-While data in transit between two points within the Geo will typically remain in Geo, it is not possible to guarantee this 100% of the time because of the way that networks automatically reroute traffic to avoid congestion or bypass other interruptions. That said, data in transit can be protected through encryption as detailed below and in *[Data encryption in transit](#data-encryption-in-transit)* section.
+While data in transit between two points within your chosen Geography will typically remain in that Geography, it is not possible to guarantee this outcome 100% of the time because of the way networks automatically reroute traffic to avoid congestion or bypass other interruptions. That said, data in transit can be protected through encryption as detailed below and in the *[Data encryption in transit](#data-encryption-in-transit)* section.
-#### *Customer's end users connection to Azure service*
+#### *Your end users' connection to Azure services*
-Most customers will connect to Azure over the Internet, and the precise routing of network traffic will depend on the many network providers that contribute to Internet infrastructure. As stated in the Microsoft Online Services Terms [Data Protection Addendum](https://aka.ms/DPA) (DPA), Microsoft does not control or limit the regions from which customer or customer's end users may access or move customer data. Customers can increase security by enabling encryption in transit. For example, customers can use [Azure Application Gateway](../application-gateway/application-gateway-end-to-end-ssl-powershell.md) to configure end-to-end encryption of traffic. As described in *[Data encryption in transit](#data-encryption-in-transit)* section, Azure uses the Transport Layer Security (TLS) protocol to help protect data when it is traveling between customers and Azure services. However, Microsoft cannot control network traffic paths corresponding to customer's end-user interaction with Azure.
+Most customers will connect to Azure over the Internet, and the precise routing of network traffic will depend on the many network providers that contribute to Internet infrastructure. As stated in the Microsoft Online Services Terms [Data Protection Addendum](https://aka.ms/DPA) (DPA), Microsoft does not control or limit the regions from which you or your end users may access or move customer data. You can increase security by enabling encryption in transit. For example, you can use [Azure Application Gateway](../application-gateway/application-gateway-end-to-end-ssl-powershell.md) to configure end-to-end encryption of traffic. As described in the *[Data encryption in transit](#data-encryption-in-transit)* section, Azure uses the Transport Layer Security (TLS) protocol to help protect data when it is traveling between you and Azure services. However, Microsoft cannot control network traffic paths corresponding to your end-user interaction with Azure.
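As a sketch of what encryption in transit looks like from the client side, the following fragment uses Python's standard library to refuse anything older than TLS 1.2 when connecting to an Azure endpoint and then prints the negotiated protocol and cipher. The storage endpoint name is a hypothetical placeholder.

```python
# A minimal sketch: enforce and inspect TLS on the client side of a connection.
import socket
import ssl

hostname = "contosodata.blob.core.windows.net"    # hypothetical storage endpoint

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse anything older than TLS 1.2

with socket.create_connection((hostname, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())   # negotiated protocol, for example 'TLSv1.2' or 'TLSv1.3'
        print(tls.cipher())    # negotiated cipher suite
```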
-#### *Customer's datacenter connection to Azure region*
+#### *Your datacenter connection to an Azure region*
-[Virtual Network](../virtual-network/virtual-networks-overview.md) (VNet) provides a means for Azure virtual machines (VMs) to act as part of a customer's internal (on-premises) network. Customers have options to securely connect to a VNet from their on-premises infrastructure – choose an [IPSec protected VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md) (for example, point-to-site VPN or site-to-site VPN) or a private connection by using Azure [ExpressRoute](../expressroute/expressroute-introduction.md) with several [data encryption options](../expressroute/expressroute-about-encryption.md).
+[Virtual Network](../virtual-network/virtual-networks-overview.md) (VNet) provides a means for Azure virtual machines (VMs) to act as part of your internal (on-premises) network. You have options to securely connect to a VNet from your on-premises infrastructure – choose an [IPSec protected VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md) (for example, point-to-site VPN or site-to-site VPN) or a private connection by using Azure [ExpressRoute](../expressroute/expressroute-introduction.md) with several [data encryption options](../expressroute/expressroute-about-encryption.md).
-- **IPSec protected VPN** uses an encrypted tunnel established across the public Internet, which means that customers need to rely on the local Internet service providers for any network-related assurances.
-- **ExpressRoute** allows customers to create private connections between Microsoft datacenters and their on-premises infrastructure or colocation facility. ExpressRoute connections do not go over the public Internet and offer lower latency and higher reliability than IPSec protected VPN connections. [ExpressRoute locations](../expressroute/expressroute-locations-providers.md) are the entry points to Microsoft's global network backbone and they may or may not match the location of Azure regions. For example, customers can connect to Microsoft in Amsterdam through ExpressRoute and have access to all Azure cloud services hosted in Northern and Western Europe. However, it's also possible to have access to the same Azure regions from ExpressRoute connections located elsewhere in the world. Once the network traffic enters the Microsoft backbone, it is guaranteed to traverse that private networking infrastructure instead of the public Internet.
+- **IPSec protected VPN** uses an encrypted tunnel established across the public Internet, which means that you need to rely on the local Internet service providers for any network-related assurances.
+- **ExpressRoute** allows you to create private connections between Microsoft datacenters and your on-premises infrastructure or colocation facility. ExpressRoute connections do not go over the public Internet and offer lower latency and higher reliability than IPSec protected VPN connections. [ExpressRoute locations](../expressroute/expressroute-locations-providers.md) are the entry points to Microsoft's global network backbone and they may or may not match the location of Azure regions. For example, you can connect to Microsoft in Amsterdam through ExpressRoute and have access to all Azure cloud services hosted in Northern and Western Europe. However, it's also possible to have access to the same Azure regions from ExpressRoute connections located elsewhere in the world. Once the network traffic enters the Microsoft backbone, it is guaranteed to traverse that private networking infrastructure instead of the public Internet.
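Both connectivity options ultimately attach to a VNet through a gateway. The following sketch, with assumed names and an assumed address space, creates a VNet that includes the gateway subnet a VPN or ExpressRoute gateway would later use; it is not a complete hybrid connectivity deployment.

```python
# A minimal sketch using the azure-mgmt-network management SDK (track 2).
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.virtual_networks.begin_create_or_update(
    "rg-hybrid",        # hypothetical resource group
    "vnet-hybrid",      # hypothetical VNet name
    {
        "location": "westeurope",
        "address_space": {"address_prefixes": ["10.10.0.0/16"]},
        "subnets": [
            # A subnet named GatewaySubnet is required before deploying a
            # VPN or ExpressRoute gateway into the VNet.
            {"name": "GatewaySubnet", "address_prefix": "10.10.255.0/27"},
        ],
    },
)
vnet = poller.result()
print(vnet.name, vnet.address_space.address_prefixes)
```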
#### *Traffic across Microsoft global network backbone*
-As described in *[Data at rest](#data-at-rest)* section, Azure services such as Storage and SQL Database can be configured for geo-replication to help ensure durability and high availability especially for disaster recovery scenarios. Azure relies on [paired regions](../best-practices-availability-paired-regions.md) to deliver [geo-redundant storage](../storage/common/storage-redundancy.md#geo-redundant-storage) (GRS), and paired regions are also recommended when configuring active [geo-replication](../azure-sql/database/active-geo-replication-overview.md) for Azure SQL Database. Paired regions are located within the same Geo.
+As described in the *[Data at rest](#data-at-rest)* section, Azure services such as Storage and SQL Database can be configured for geo-replication to help ensure durability and high availability, especially for disaster recovery scenarios. Azure relies on [paired regions](../best-practices-availability-paired-regions.md) to deliver [geo-redundant storage](../storage/common/storage-redundancy.md#geo-redundant-storage) (GRS), and paired regions are also recommended when configuring active [geo-replication](../azure-sql/database/active-geo-replication-overview.md) for Azure SQL Database. Paired regions are located within the same Geography.
-Inter-region traffic is encrypted using [Media Access Control Security](https://1.ieee802.org/security/802-1ae/) (MACsec), which protects network traffic at the data link layer (Network Layer 2) and relies on AES-128 block cipher for encryption. This traffic stays entirely within the Microsoft [global network backbone](../networking/microsoft-global-network.md) and never enters the public Internet. The backbone is one of the largest in the world with more than 160,000 km of lit fiber optic and undersea cable systems. However, network traffic is not guaranteed to always follow the same path from one Azure region to another. To provide the reliability needed for the Azure cloud, Microsoft has many physical networking paths with automatic routing around failures for optimal reliability. Therefore, Microsoft cannot guarantee that network traffic traversing between Azure regions will always be confined to the corresponding Geo. In networking infrastructure disruptions, Microsoft can reroute the encrypted network traffic across its private backbone to ensure service availability and best possible performance.
+Inter-region traffic is encrypted using [Media Access Control Security](https://1.ieee802.org/security/802-1ae/) (MACsec), which protects network traffic at the data link layer (Layer 2 of the networking stack) and relies on the AES-128 block cipher for encryption. This traffic stays entirely within the Microsoft [global network backbone](../networking/microsoft-global-network.md) and never enters the public Internet. The backbone is one of the largest in the world with more than 200,000 km of lit fiber optic and undersea cable systems. However, network traffic is not guaranteed to always follow the same path from one Azure region to another. To provide the reliability needed for the Azure cloud, Microsoft has many physical networking paths with automatic routing around congestion or failures for optimal reliability. Therefore, Microsoft cannot guarantee that network traffic traversing between Azure regions will always be confined to the corresponding Geography. In the event of networking infrastructure disruptions, Microsoft can reroute the encrypted network traffic across its private backbone to ensure service availability and best possible performance.
### Data for customer support and troubleshooting
-**Azure is a 24x7 globally operated service; however, support and troubleshooting rarely requires access to customer data. Customers who want added control for support and troubleshooting can use Customer Lockbox for Azure to approve or deny access to their data.**
+**Azure is a 24x7 globally operated service; however, support and troubleshooting rarely requires access to your data. If you want extra control over support and troubleshooting scenarios, you can use Customer Lockbox for Azure to approve or deny access requests to your data.**
-Microsoft [Azure support](https://azure.microsoft.com/support/options/) is available in markets where Azure is offered. It is staffed globally to accommodate 24x7 access to support engineers via email and phone for technical support. Customers can [create and manage support requests](../azure-portal/supportability/how-to-create-azure-support-request.md) in the Azure portal. As needed, frontline support engineers can escalate customer requests to Azure DevOps personnel responsible for Azure service development and operations. These Azure DevOps engineers are also staffed globally. The same production access controls and processes are imposed on all Microsoft engineers, which include support staff comprised of both Microsoft full-time employees and subprocessors/vendors.
+Microsoft [Azure support](https://azure.microsoft.com/support/options/) is available in markets where Azure is offered. It is staffed globally to accommodate 24x7 access to support engineers via email and phone for technical support. You can [create and manage support requests](../azure-portal/supportability/how-to-create-azure-support-request.md) in the Azure portal. As needed, frontline support engineers can escalate your requests to Azure DevOps personnel responsible for Azure service development and operations. These Azure DevOps engineers are also staffed globally. The same production access controls and processes are imposed on all Microsoft engineers, which include support staff comprised of both Microsoft full-time employees and subprocessors/vendors.
-As explained in *[Data encryption at rest](#data-encryption-at-rest)* section, **customer data is encrypted at rest** by default when stored in Azure and customers can control their own encryption keys in Azure Key Vault. Moreover, access to customer data is not needed to resolve most customer support requests. Microsoft engineers rely heavily on logs to provide customer support. As described in *[Insider data access](#insider-data-access)* section, Azure has controls in place to restrict access to customer data for support and troubleshooting scenarios should that access be necessary. For example, **Just-in-Time (JIT)** access provisions restrict access to production systems to Microsoft engineers who are authorized to be in that role and were granted temporary access credentials. As part of the support workflow, **Customer Lockbox** puts customers in charge of approving or denying access to customer data by Microsoft engineers. When combined, these Azure technologies and processes (data encryption, JIT, and Customer Lockbox) provide appropriate risk mitigation to safeguard confidentiality and integrity of customer data.
+As explained in the *[Data encryption at rest](#data-encryption-at-rest)* section, **your data is encrypted at rest** by default when stored in Azure and you can control your own encryption keys in Azure Key Vault. Moreover, access to your data is not needed to resolve most customer support requests. Microsoft engineers rely heavily on logs to provide customer support. As described in the *[Insider data access](#insider-data-access)* section, Azure has controls in place to restrict access to your data for support and troubleshooting scenarios should that access be necessary. For example, **Just-in-Time (JIT)** access provisions restrict access to production systems to Microsoft engineers who are authorized to be in that role and were granted temporary access credentials. As part of the support workflow, **Customer Lockbox** puts you in charge of approving or denying access to your data by Microsoft engineers. When combined, these Azure technologies and processes (data encryption, JIT, and Customer Lockbox) provide appropriate risk mitigation to safeguard the confidentiality and integrity of your data.
Government customers worldwide expect to be fully in control of protecting their data in the cloud. As described in the next section, Azure provides extensive options for data encryption through its entire lifecycle (at rest, in transit, and in use), including customer control of encryption keys.
## Data encryption
-Azure has extensive support to safeguard customer data using [data encryption](../security/fundamentals/encryption-overview.md). **Customers who require extra security for their most sensitive customer data stored in Azure services can encrypt it using their own encryption keys they control in Azure Key Vault. While customers cannot control the precise network path for data in transit, data encryption in transit helps protect data from interception.** Azure supports the following data encryption models:
+Azure has extensive support to safeguard your data using [data encryption](../security/fundamentals/encryption-overview.md). **If you require extra security for your most sensitive customer content stored in Azure services, you can encrypt it using your own encryption keys that you control in Azure Key Vault. While you cannot control the precise network path for data in transit, data encryption in transit helps protect data from interception.** Azure supports the following data encryption models:
- Server-side encryption that uses service-managed keys, customer-managed keys (CMK) in Azure, or CMK in customer-controlled hardware.
-- Client-side encryption that enables customers to manage and store keys on-premises or in another secure location.
+- Client-side encryption that enables you to manage and store keys on-premises or in another secure location.
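A minimal sketch of the client-side model follows, using the open-source `cryptography` package (an assumption, not a service requirement): data is encrypted with a key you keep before it leaves your environment, so the cloud service only ever stores ciphertext.

```python
# Client-side (envelope-style) encryption sketch: the AES key stays with you.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # keep this key on-premises or in your own HSM
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # 96-bit nonce, unique per encryption
plaintext = b"classified payload"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)   # third argument: associated data

# Only the ciphertext (plus the nonce) would be uploaded to Azure;
# decryption requires the key that never left your control.
recovered = aesgcm.decrypt(nonce, ciphertext, None)
assert recovered == plaintext
```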
-Data encryption provides isolation assurances that are tied directly to encryption key access. Since Azure uses strong ciphers for data encryption, only entities with access to encryption keys can have access to data. Deleting or revoking encryption keys renders the corresponding data inaccessible.
+Data encryption provides isolation assurances that are tied directly to encryption key access. Since Azure uses strong ciphers for data encryption, only entities with access to encryption keys can have access to data. Revoking or deleting encryption keys renders the corresponding data inaccessible.
### Encryption key management
Proper protection and management of encryption keys is essential for data security. Azure Key Vault provides two resource types:
- **Vault** supports software-protected and hardware security module (HSM)-protected secrets, keys, and certificates.
- **Managed HSM** supports only HSM-protected cryptographic keys.
-Key Vault enables customers to store their encryption keys in hardware security modules (HSMs) that are FIPS 140-2 validated. With Azure Key Vault, customers can import or generate encryption keys in HSMs, ensuring that keys never leave the HSM protection boundary to support *bring your own key* (BYOK) scenarios. **Keys generated inside the Azure Key Vault HSMs are not exportable – there can be no clear-text version of the key outside the HSMs.** This binding is enforced by the underlying HSM.
+Key Vault enables you to store your encryption keys in hardware security modules (HSMs) that are FIPS 140 validated. With Azure Key Vault, you can import or generate encryption keys in HSMs, ensuring that keys never leave the HSM protection boundary to support *bring your own key* (BYOK) scenarios. **Keys generated inside the Azure Key Vault HSMs are not exportable – there can be no clear-text version of the key outside the HSMs.** This binding is enforced by the underlying HSM.
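As a sketch of the first step in such a workflow, the following fragment generates an HSM-protected RSA key in Key Vault, for example to serve later as a customer-managed TDE protector. The vault name is a hypothetical placeholder, and HSM-backed keys are assumed to require a vault tier that supports them.

```python
# A minimal sketch using the azure-keyvault-keys SDK.
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

client = KeyClient(
    vault_url="https://contoso-keys.vault.azure.net",   # hypothetical vault
    credential=DefaultAzureCredential(),
)

key = client.create_rsa_key(
    "tde-protector",           # hypothetical key name
    size=3072,
    hardware_protected=True,   # key material is generated and kept inside the HSM boundary
)
print(key.id, key.name)
```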
-**Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract customer cryptographic keys.**
+**Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract your cryptographic keys.**
For more information, see [Azure Key Vault](./azure-secure-isolation-guidance.md#azure-key-vault).
### Data encryption in transit
-Azure provides many options for [encrypting data in transit](../security/fundamentals/encryption-overview.md#encryption-of-data-in-transit). Data encryption in transit isolates customer network traffic from other traffic and helps protect data from interception. For more information, see [Data encryption in transit](./azure-secure-isolation-guidance.md#data-encryption-in-transit).
+Azure provides many options for [encrypting data in transit](../security/fundamentals/encryption-overview.md#encryption-of-data-in-transit). Data encryption in transit isolates your network traffic from other traffic and helps protect data from interception. For more information, see [Data encryption in transit](./azure-secure-isolation-guidance.md#data-encryption-in-transit).
### Data encryption at rest
-Azure provides extensive options for [encrypting data at rest](../security/fundamentals/encryption-atrest.md) to help customers safeguard their data and meet their compliance needs using both Microsoft-managed encryption keys and customer-managed encryption keys. This process relies on multiple encryption keys and services such as Azure Key Vault and Azure Active Directory to ensure secure key access and centralized key management. For more information about Azure Storage encryption and Azure Disk encryption, see [Data encryption at rest](./azure-secure-isolation-guidance.md#data-encryption-at-rest).
+Azure provides extensive options for [encrypting data at rest](../security/fundamentals/encryption-atrest.md) to help you safeguard your data and meet your compliance needs using both Microsoft-managed encryption keys and customer-managed encryption keys. This process relies on multiple encryption keys and services such as Azure Key Vault and Azure Active Directory to ensure secure key access and centralized key management. For more information about Azure Storage encryption and Azure Disk encryption, see [Data encryption at rest](./azure-secure-isolation-guidance.md#data-encryption-at-rest).
-Azure SQL Database provides [transparent data encryption](../azure-sql/database/transparent-data-encryption-tde-overview.md) (TDE) at rest by [default](https://azure.microsoft.com/updates/newly-created-azure-sql-databases-encrypted-by-default/). TDE performs real-time encryption and decryption operations on the data and log files. Database Encryption Key (DEK) is a symmetric key stored in the database boot record for availability during recovery. It is secured via a certificate stored in the master database of the server or an asymmetric key called TDE Protector stored under customer control in [Azure Key Vault](../key-vault/general/security-features.md), which is Azure's cloud-based external key management system. Key Vault supports [bring your own key](../azure-sql/database/transparent-data-encryption-byok-overview.md) (BYOK), which enables customers to store TDE Protector in Key Vault and control key management tasks including key permissions, rotation, deletion, enabling auditing/reporting on all TDE Protectors, and so on. The key can be generated by the Key Vault, imported, or [transferred to the Key Vault from an on-premises HSM device](../key-vault/keys/hsm-protected-keys.md). Customers can also use the [Always Encrypted](../azure-sql/database/always-encrypted-azure-key-vault-configure.md) feature of Azure SQL Database, which is designed specifically to help protect sensitive data by allowing clients to encrypt data inside client applications and [never reveal the encryption keys to the database engine](/sql/relational-databases/security/encryption/always-encrypted-database-engine). In this manner, Always Encrypted provides separation between those users who own the data (and can view it) and those users who manage the data (but should have no access).
+Azure SQL Database provides [transparent data encryption](../azure-sql/database/transparent-data-encryption-tde-overview.md) (TDE) at rest by [default](https://azure.microsoft.com/updates/newly-created-azure-sql-databases-encrypted-by-default/). TDE performs real-time encryption and decryption operations on the data and log files. Database Encryption Key (DEK) is a symmetric key stored in the database boot record for availability during recovery. It is secured via a certificate stored in the master database of the server or an asymmetric key called TDE Protector stored under your control in [Azure Key Vault](../key-vault/general/security-features.md). Key Vault supports [bring your own key](../azure-sql/database/transparent-data-encryption-byok-overview.md) (BYOK), which enables you to store the TDE Protector in Key Vault and control key management tasks including key permissions, rotation, deletion, enabling auditing/reporting on all TDE Protectors, and so on. The key can be generated by the Key Vault, imported, or [transferred to the Key Vault from an on-premises HSM device](../key-vault/keys/hsm-protected-keys.md). You can also use the [Always Encrypted](../azure-sql/database/always-encrypted-azure-key-vault-configure.md) feature of Azure SQL Database, which is designed specifically to help protect sensitive data by allowing you to encrypt data inside your applications and [never reveal the encryption keys to the database engine](/sql/relational-databases/security/encryption/always-encrypted-database-engine). In this manner, Always Encrypted provides separation between those users who own the data (and can view it) and those users who manage the data (but should have no access).
### Data encryption in use
-Microsoft enables customers to protect their data throughout its entire lifecycle: at rest, in transit, and in use. Azure confidential computing and Homomorphic encryption are two techniques that safeguard customer data while it is processed in the cloud.
+Microsoft enables you to protect your data throughout its entire lifecycle: at rest, in transit, and in use. Azure confidential computing and homomorphic encryption are two techniques that safeguard your data while it is processed in the cloud.
#### *Azure confidential computing*
-[Azure confidential computing](https://azure.microsoft.com/solutions/confidential-compute/) is a set of data security capabilities that offers encryption of data while in use. This approach means that data can be processed in the cloud with the assurance that it is always under customer control. Confidential computing ensures that when data is in the clear, which is needed for efficient data processing in memory, the data is protected inside a hardware-based [trusted execution environment](../confidential-computing/overview.md) (TEE, also known as an enclave), as depicted in Figure 1. TEE helps ensure that there is no way to view data or the operations from outside the enclave and that only the application designer has access to TEE data; access is denied to everyone else including Azure administrators. Moreover, TEE helps ensure that only authorized code is permitted to access data. If the code is altered or tampered with, the operations are denied, and the environment is disabled.
+[Azure confidential computing](https://azure.microsoft.com/solutions/confidential-compute/) is a set of data security capabilities that offers encryption of data while in use. This approach means that data can be processed in the cloud with the assurance that it is always under your control. Azure confidential computing supports two different technologies for data encryption while in use.
+
+First, you can choose Azure VMs based on [Intel Software Guard Extensions](https://software.intel.com/sgx) (SGX) technology that supports confidentiality in a granular manner down to the application level. With this approach, when data is in the clear, which is needed for efficient data processing in memory, the data is protected inside a hardware-based [trusted execution environment](../confidential-computing/overview.md) (TEE, also known as an enclave), as depicted in Figure 1. Intel SGX isolates a portion of physical memory to create an enclave where select code and data are protected from viewing or modification. TEE helps ensure that only the application designer has access to TEE data; access is denied to everyone else including Azure administrators. Moreover, TEE helps ensure that only authorized code is permitted to access data. If the code is altered or tampered with, the operations are denied, and the environment is disabled.
:::image type="content" source="./media/wwps-hardware-backed-enclave.png" alt-text="Trusted execution environment protection" border="false"::: **Figure 1.** Trusted execution environment protection
-Azure [DCsv2-series virtual machines](../virtual-machines/dcv2-series.md) have the latest generation of Intel Xeon processors with [Intel Software Guard Extensions](https://software.intel.com/sgx) (SGX) technology, which provides a hardware-based TEE. Intel SGX isolates a portion of physical memory to create an enclave where select code and data are protected from viewing or modification. The protection offered by Intel SGX, when used appropriately by application developers, can prevent compromise due to attacks from privileged software and many hardware-based attacks. An application using Intel SGX needs to be [refactored into trusted and untrusted components](https://software.intel.com/sites/default/files/managed/c3/8b/intel-sgx-product-brief-2019.pdf). The untrusted part of the application sets up the enclave, which then allows the trusted part to run inside the enclave. No other code, irrespective of the privilege level, has access to the code executing within the enclave or the data associated with enclave code. Design best practices call for the trusted partition to contain just the minimum amount of content required to protect customer's secrets. For more information, see [Application development on Intel SGX](../confidential-computing/application-development.md).
+Azure [DCsv2-series virtual machines](../virtual-machines/dcv2-series.md) have the latest generation of Intel Xeon processors with Intel SGX technology. The protection offered by Intel SGX, when used appropriately by application developers, can prevent compromise due to attacks from privileged software and many hardware-based attacks. An application using Intel SGX needs to be refactored into trusted and untrusted components. The untrusted part of the application sets up the enclave, which then allows the trusted part to run inside the enclave. No other code, irrespective of the privilege level, has access to the code executing within the enclave or the data associated with enclave code. Design best practices call for the trusted partition to contain just the minimum amount of content required to protect your secrets. For more information, see [Application development on Intel SGX](../confidential-computing/application-development.md).
-Based on customer feedback, Microsoft has started to invest in higher-level [scenarios for Azure confidential computing](../confidential-computing/use-cases-scenarios.md). Customers can review the scenario recommendations as a starting point for developing their own applications using confidential computing services and frameworks.
+Second, you can choose Azure VMs based on AMD EPYC 3rd Generation CPUs to lift and shift applications without requiring any code changes. These AMD EPYC CPUs make it possible to encrypt your entire virtual machine at runtime. The encryption keys used for VM encryption are generated and safeguarded by a dedicated secure processor on the EPYC CPU and cannot be extracted by any external means. These Azure VMs are currently in Preview.
+
+Based on customer feedback, Microsoft has started to invest in higher-level [scenarios for Azure confidential computing](../confidential-computing/use-cases-scenarios.md). You can review the scenario recommendations as a starting point for developing your own applications using confidential computing services and frameworks.
#### *Homomorphic encryption*
-[Homomorphic encryption](https://www.microsoft.com/research/project/homomorphic-encryption/) refers to a special type of encryption technology that allows for computations to be performed on encrypted data, without requiring access to a key needed to decrypt the data. The results of the computation are encrypted and can be revealed only by the owner of the encryption key. In this manner, only the encrypted data are processed in the cloud and only the customer can reveal the results of the computation.
+[Homomorphic encryption](https://www.microsoft.com/research/project/homomorphic-encryption/) refers to a special type of encryption technology that allows for computations to be performed on encrypted data, without requiring access to a key needed to decrypt the data. The results of the computation are encrypted and can be revealed only by the owner of the encryption key. In this manner, only the encrypted data are processed in the cloud and only you can reveal the results of the computation.
-To help customers adopt homomorphic encryption, [Microsoft SEAL](https://www.microsoft.com/research/project/microsoft-seal/) provides a set of encryption libraries that allow computations to be performed directly on encrypted data. This approach enables customers to build end-to-end encrypted data storage and compute services where the customer never needs to share their encryption keys with the cloud service. Microsoft SEAL aims to make homomorphic encryption easy to use and available to everyone. It provides a simple and convenient API and comes with several detailed examples demonstrating how the library can be used correctly and securely.
+To help you adopt homomorphic encryption, [Microsoft SEAL](https://www.microsoft.com/research/project/microsoft-seal/) provides a set of encryption libraries that allow computations to be performed directly on encrypted data. This approach enables you to build end-to-end encrypted data storage and compute services where you never need to share your encryption keys with the cloud service. Microsoft SEAL aims to make homomorphic encryption easy to use and available to everyone. It provides a simple and convenient API and comes with several detailed examples demonstrating how the library can be used correctly and securely.
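Microsoft SEAL itself is a C++ library (with .NET wrappers), so the following Python sketch is a conceptual illustration only: a toy Paillier-style scheme with insecure, demo-sized parameters that shows the additively homomorphic property such libraries provide at production strength. It is not SEAL's API.

```python
# Conceptual illustration only: a toy Paillier scheme (not Microsoft SEAL's API,
# not secure parameters) showing that ciphertexts can be combined without ever
# decrypting them, which is the core idea behind homomorphic encryption.
# Requires Python 3.8+ for pow(x, -1, n).
from math import gcd
import secrets

def lcm(a, b):
    return a * b // gcd(a, b)

# Tiny demo primes; real deployments use keys of 2048 bits or more.
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)  # inverse of L(g^lam mod n^2) mod n

def encrypt(m):
    # Pick randomness coprime to n, then c = g^m * r^n mod n^2.
    while True:
        r = secrets.randbelow(n - 1) + 1
        if gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # m = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n.
    return ((pow(c, lam, n2) - 1) // n) * mu % n

a, b = 17, 25
c_sum = (encrypt(a) * encrypt(b)) % n2   # multiplying ciphertexts adds plaintexts
assert decrypt(c_sum) == a + b           # 42, computed without exposing a or b
```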
-Data encryption in the cloud is an important risk mitigation requirement expected by government customers worldwide. As described in this section, Azure helps customers protect their data through its entire lifecycle whether at rest, in transit, or even in use. Moreover, Azure offers comprehensive encryption key management to help customers control their keys in the cloud, including key permissions, rotation, deletion, and so on. End-to-end data encryption using advanced ciphers is fundamental to ensuring confidentiality and integrity of customer data in the cloud. However, customers also expect assurances regarding any potential customer data access by Microsoft engineers for service maintenance, customer support, or other scenarios. These controls are described in the next section.
+Data encryption in the cloud is an important risk mitigation requirement expected by government customers worldwide. As described in this section, Azure helps you protect your data through its entire lifecycle whether at rest, in transit, or even in use. Moreover, Azure offers comprehensive encryption key management to help you control your keys in the cloud, including key permissions, rotation, deletion, and so on. End-to-end data encryption using advanced ciphers is fundamental to ensuring confidentiality and integrity of your data in the cloud. However, many customers also expect assurances regarding any potential customer data access by Microsoft engineers for service maintenance, customer support, or other scenarios. These controls are described in the next section.
## Insider data access
-Insider threat is characterized as potential for providing back-door connections and cloud service provider (CSP) privileged administrator access to customer's systems and data. Microsoft provides strong [customer commitments](https://www.microsoft.com/trust-center/privacy/data-access) regarding who can access customer data and on what terms. Access to customer data by Microsoft operations and support personnel is **denied by default**. Access to customer data is not needed to operate Azure. Moreover, for most support scenarios involving customer troubleshooting tickets, access to customer data is not needed.
+Insider threat is characterized as potential for providing back-door connections and cloud service provider (CSP) privileged administrator access to your systems and data. Microsoft provides strong [customer commitments](https://www.microsoft.com/trust-center/privacy/data-access) regarding who can access your data and on what terms. Access to your data by Microsoft operations and support personnel is **denied by default**. Access to your data is not needed to operate Azure. Moreover, for most support scenarios involving customer-initiated troubleshooting tickets, access to your data is not needed.
-No default access rights and Just-in-Time (JIT) access provisions reduce greatly the risks associated with traditional on-premises administrator elevated access rights that typically persist throughout the duration of employment. Microsoft makes it considerably more difficult for malicious insiders to tamper with customer applications and data. The same access control restrictions and processes are imposed on all Microsoft engineers, including both full-time employees and subprocessors/vendors.
+No default access rights and Just-in-Time (JIT) access provisions greatly reduce the risks associated with traditional on-premises administrator elevated access rights that typically persist throughout the duration of employment. Microsoft makes it considerably more difficult for malicious insiders to tamper with your applications and data. The same access control restrictions and processes are imposed on all Microsoft engineers, including both full-time employees and subprocessors/vendors.
-For more information on how Microsoft restricts insider access to customer data, see [Restrictions on insider access](./documentation-government-plan-security.md#restrictions-on-insider-access).
+For more information on how Microsoft restricts insider access to your data, see [Restrictions on insider access](./documentation-government-plan-security.md#restrictions-on-insider-access).
-## Government requests for customer data
+## Government requests for your data
-Government requests for customer data follow a strict procedure according to [Microsoft practices for responding to government requests](https://blogs.microsoft.com/datalaw/our-practices/). Microsoft takes strong measures to help protect customer data from inappropriate access or use by unauthorized persons. These measures include restricting access by Microsoft personnel and subcontractors and carefully defining requirements for responding to government requests for customer data. Microsoft ensures that there are no back-door channels and no direct or unfettered government access to customer data. Microsoft imposes special requirements for government and law enforcement requests for customer data.
+Government requests for your data follow a strict procedure according to [Microsoft practices for responding to government requests](https://blogs.microsoft.com/datalaw/our-practices/). Microsoft takes strong measures to help protect your data from inappropriate access or use by unauthorized persons. These measures include restricting access by Microsoft personnel and subcontractors and carefully defining requirements for responding to government requests for your data. Microsoft ensures that there are no back-door channels and no direct or unfettered government access to your data. Microsoft imposes special requirements for government and law enforcement requests for your data.
-As stated in the Microsoft Online Services Terms [Data Protection Addendum](https://aka.ms/DPA) (DPA), Microsoft will not disclose customer data to law enforcement unless required by law. If law enforcement contacts Microsoft with a demand for customer data, Microsoft will attempt to redirect the law enforcement agency to request that data directly from the customer. If compelled to disclose customer data to law enforcement, Microsoft will promptly notify the customer and provide a copy of the demand unless legally prohibited from doing so.
+As stated in the Microsoft Online Services Terms [Data Protection Addendum](https://aka.ms/DPA) (DPA), Microsoft will not disclose your data to law enforcement unless required by law. If law enforcement contacts Microsoft with a demand for your data, Microsoft will attempt to redirect the law enforcement agency to request that data directly from you. If compelled to disclose your data to law enforcement, Microsoft will promptly notify you and provide a copy of the demand unless legally prohibited from doing so.
-Government requests for customer data must comply with applicable laws.
+Government requests for your data must comply with applicable laws.
- A subpoena or its local equivalent is required to request non-content data.
- A warrant, court order, or its local equivalent is required for content data.
-Every year, Microsoft rejects many law enforcement requests for customer data. Challenges to government requests can take many forms. In many of these cases, Microsoft simply informs the requesting government that it is unable to disclose the requested information and explains the reason for rejecting the request. Where appropriate, Microsoft challenges requests in court.
+Every year, Microsoft rejects many law enforcement requests for your data. Challenges to government requests can take many forms. In many of these cases, Microsoft simply informs the requesting government that it is unable to disclose the requested information and explains the reason for rejecting the request. Where appropriate, Microsoft challenges requests in court.
Our [Law Enforcement Request Report](https://www.microsoft.com/corporate-responsibility/law-enforcement-requests-report?rtc=1) and [US National Security Order Report](https://www.microsoft.com/corporate-responsibility/us-national-security-orders-report) are updated every six months and show that most of our customers are never impacted by government requests for data.
The [CLOUD Act](https://www.congress.gov/bill/115th-congress/house-bill/4943) is a United States law that was enacted in March 2018. For more information, see Microsoft's [blog post](https://blogs.microsoft.com/on-the-issues/2018/04/03/the-cloud-act-is-an-important-step-forward-but-now-more-steps-need-to-follow/) and the [follow-up blog post](https://blogs.microsoft.com/on-the-issues/2018/09/11/a-call-for-principle-based-international-agreements-to-govern-law-enforcement-access-to-data/) that describes Microsoft's call for principle-based international agreements governing law enforcement access to data. Key points of interest to government customers procuring Azure services are captured below.
- The CLOUD Act enables governments to negotiate new government-to-government agreements that will result in greater transparency and certainty for how information is disclosed to law enforcement agencies across international borders.
-- The CLOUD Act is not a mechanism for greater government surveillance; it is a mechanism toward ensuring that customer data is ultimately protected by the laws of each customer's home country/region while continuing to facilitate lawful access to evidence for legitimate criminal investigations. Law enforcement in the US still needs to obtain a warrant demonstrating probable cause of a crime from an independent court before seeking the contents of communications. The CLOUD Act requires similar protections for other countries seeking bilateral agreements.
+- The CLOUD Act is not a mechanism for greater government surveillance; it is a mechanism toward ensuring that your data is ultimately protected by the laws of your home country/region while continuing to facilitate lawful access to evidence for legitimate criminal investigations. Law enforcement in the US still needs to obtain a warrant demonstrating probable cause of a crime from an independent court before seeking the contents of communications. The CLOUD Act requires similar protections for other countries seeking bilateral agreements.
- While the CLOUD Act creates new rights under new international agreements, it also preserves the common law right of cloud service providers to go to court to challenge search warrants when there is a conflict of laws – even without these new treaties in place.
-- Microsoft retains the legal right to object to a law enforcement order in the United States where the order clearly conflicts with the laws of the country/region where customer data is hosted. Microsoft will continue to carefully evaluate every law enforcement request and exercise its rights to protect customers where appropriate.
-- For legitimate enterprise customers, US law enforcement will, in most instances, now go directly to the customer rather than Microsoft for information requests.
+- Microsoft retains the legal right to object to a law enforcement order in the United States where the order clearly conflicts with the laws of the country/region where your data is hosted. Microsoft will continue to carefully evaluate every law enforcement request and exercise its rights to protect customers where appropriate.
+- For legitimate enterprise customers, US law enforcement will, in most instances, now go directly to customers rather than to Microsoft for information requests.
**Microsoft does not disclose extra data as a result of the CLOUD Act**. This law does not practically change any of the legal and privacy protections that previously applied to law enforcement requests for data – and those protections continue to apply. Microsoft adheres to the same principles and customer commitments related to government demands for user data.
Most government customers have requirements in place for handling security incid
## Breach notifications
-**Microsoft will notify customers of any breach of customer or personal data within 72 hours of incident declaration. Customers can monitor potential threats and respond to incidents on their own using Azure Security Center.**
+**Microsoft will notify you of any breach of your data (customer or personal) within 72 hours of incident declaration. You can monitor potential threats and respond to incidents on your own using Azure Security Center.**
-Microsoft is responsible for monitoring and remediating security and availability incidents affecting the Azure platform and notifying customers of any security breaches involving customer or personal data. Microsoft Azure has a mature security and privacy incident management process that is used for this purpose. Customers are responsible for monitoring their own resources provisioned in Azure, as described in the next section.
+Microsoft is responsible for monitoring and remediating security and availability incidents affecting the Azure platform and notifying you of any security breaches involving your data. Azure has a mature security and privacy incident management process that is used for this purpose. You are responsible for monitoring your own resources provisioned in Azure, as described in the next section.
### Shared responsibility
-The NIST [SP 800-145](https://csrc.nist.gov/publications/detail/sp/800-145/final) standard defines the following cloud service models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). The [shared responsibility](../security/fundamentals/shared-responsibility.md) model for cloud computing is depicted in Figure 2. With on-premises deployment in their own datacenter, customers assume the responsibility for all layers in the stack. As workloads get migrated to the cloud, Microsoft assumes progressively more responsibility depending on the cloud service model. For example, with the IaaS model, Microsoft's responsibility ends at the Hypervisor layer, and customers are responsible for all layers above the virtualization layer, including maintaining the base operating system in guest Virtual Machines.
+The NIST [SP 800-145](https://csrc.nist.gov/publications/detail/sp/800-145/final) standard defines the following cloud service models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). The [shared responsibility](../security/fundamentals/shared-responsibility.md) model for cloud computing is depicted in Figure 2. With on-premises deployment in your own datacenter, you assume the responsibility for all layers in the stack. As workloads get migrated to the cloud, Microsoft assumes progressively more responsibility depending on the cloud service model. For example, with the IaaS model, Microsoft's responsibility ends at the Hypervisor layer, and you are responsible for all layers above the virtualization layer, including maintaining the base operating system in guest Virtual Machines.
:::image type="content" source="./media/wwps-shared-responsibility.png" alt-text="Shared responsibility model in cloud computing" border="false"::: **Figure 2.** Shared responsibility model in cloud computing
-In line with the shared responsibility model, Microsoft does not inspect, approve, or monitor individual customer applications deployed on Azure. For example, Microsoft does not know what firewall ports need to be open for customer's application to function correctly, what the back-end database schema looks like, what constitutes normal network traffic for the application, and so on. Microsoft has extensive monitoring infrastructure in place for the cloud platform; however, customers are responsible for provisioning and monitoring their own resources in Azure. Customers can deploy a range of Azure services to monitor and safeguard their applications and data, as described in the next section.
+In line with the shared responsibility model, Microsoft does not inspect, approve, or monitor your individual applications deployed on Azure. For example, Microsoft does not know what firewall ports need to be open for your application to function correctly, what the back-end database schema looks like, what constitutes normal network traffic for the application, and so on. Microsoft has extensive monitoring infrastructure in place for the cloud platform; however, you are responsible for provisioning and monitoring your own resources in Azure. You can deploy a range of Azure services to monitor and safeguard your applications and data, as described in the next section.
### Essential Azure services for extra protection
-Azure provides essential services that customers can use to gain in-depth insight into their provisioned Azure resources and get alerted about suspicious activity, including outside attacks aimed at their applications and data. The [Azure Security Benchmark](../security/benchmarks/index.yml) provides security recommendations and implementation details to help customers improve the security posture with respect to Azure resources.
+Azure provides essential services that you can use to gain in-depth insight into your provisioned Azure resources and get alerted about suspicious activity, including outside attacks aimed at your applications and data. The [Azure Security Benchmark](../security/benchmarks/index.yml) provides security recommendations and implementation details to help you improve the security posture of your provisioned Azure resources.
For more information about essential Azure services for extra protection, see [Customer monitoring of Azure resources](./documentation-government-plan-security.md#customer-monitoring-of-azure-resources).
### Breach notification process
-Security incident response, including breach notification, is a subset of Microsoft's overall incident management plan for Azure. All Microsoft employees are trained to identify and escalate potential security incidents. A dedicated team of security engineers within the Microsoft Security Response Center (MSRC) is responsible for always managing the security incident response for Azure. Microsoft follows a five-step incident response process when managing both security and availability incidents for Azure services. The process includes the following stages:
+Security incident response, including breach notification, is a subset of Microsoft's overall incident management plan for Azure. All Microsoft employees are trained to identify and escalate potential security incidents. A dedicated team of security engineers within the Microsoft Security Response Center (MSRC) is always responsible for managing the security incident response for Azure. Microsoft follows a five-step incident response process when managing both security and availability incidents for Azure services. The process includes the following stages:
1. Detect
2. Assess
3. Diagnose
4. Stabilize and recover
5. Close
-The goal of this process is to restore normal service operations and security as quickly as possible after an issue is detected, and an investigation started. Moreover, Microsoft enables customers to investigate, manage, and respond to security incidents in their Azure subscriptions. For more information, see [Incident management implementation guidance: Azure and Office 365](https://servicetrust.microsoft.com/ViewPage/TrustDocumentsV3?command=Download&downloadType=Document&downloadId=a8a7cb87-9710-4d09-8748-0835b6754e95&tab=7f51cb60-3d6c-11e9-b2af-7bb9f5d2d913&docTab=7f51cb60-3d6c-11e9-b2af-7bb9f5d2d913_FAQ_and_White_Papers).
+The goal of this process is to restore normal service operations and security as quickly as possible after an issue is detected, and an investigation started. Moreover, Microsoft enables you to investigate, manage, and respond to security incidents in your Azure subscriptions. For more information, see [Incident management implementation guidance: Azure and Office 365](https://servicetrust.microsoft.com/ViewPage/TrustDocumentsV3?command=Download&downloadType=Document&downloadId=a8a7cb87-9710-4d09-8748-0835b6754e95&tab=7f51cb60-3d6c-11e9-b2af-7bb9f5d2d913&docTab=7f51cb60-3d6c-11e9-b2af-7bb9f5d2d913_FAQ_and_White_Papers).
-If during the investigation of a security or privacy event, Microsoft becomes aware that customer or personal data has been exposed or accessed by an unauthorized party, the security incident manager is required to trigger the incident notification subprocess in consultation with Microsoft legal affairs division. This subprocess is designed to fulfill incident notification requirements stipulated in Azure customer contracts (see *Security Incident Notification* in the Microsoft Online Services Terms [Data Protection Addendum](https://aka.ms/DPA)). Customer notification and external reporting obligations (if any) are triggered by a security incident being declared. The customer notification subprocess begins in parallel with security incident investigation and mitigation phases to help minimize any impact resulting from the security incident.
+If during the investigation of a security or privacy event, Microsoft becomes aware that customer or personal data has been exposed or accessed by an unauthorized party, the security incident manager is required to trigger the incident notification subprocess in consultation with the Microsoft legal affairs division. This subprocess is designed to fulfill incident notification requirements stipulated in Azure customer contracts (see *Security Incident Notification* in the Microsoft Online Services Terms [Data Protection Addendum](https://aka.ms/DPA)). Customer notification and external reporting obligations (if any) are triggered by a security incident being declared. The customer notification subprocess begins in parallel with security incident investigation and mitigation phases to help minimize any impact resulting from the security incident.
-Microsoft will notify customers, Data Protection Authorities, and data subjects (each as applicable) of any breach of customer or personal data within 72 hours of incident declaration. **The notification process upon a declared security or privacy incident will occur as expeditiously as possible while still considering the security risks of proceeding quickly**. In practice, this approach means that most notifications will take place well before the 72-hr deadline to which Microsoft commits contractually. Notification of a security or privacy incident will be delivered to one or more of customer's administrators by any means Microsoft selects, including via email. Customers should [provide security contact details](../security-center/security-center-provide-security-contact-details.md) for their Azure subscription – this information will be used by Microsoft to contact the customer if the MSRC discovers that customer data has been exposed or accessed by an unlawful or unauthorized party. To ensure that notification can be delivered successfully, it is the customer's responsibility to maintain correct administrative contact information for each applicable subscription.
+Microsoft will notify you, Data Protection Authorities, and data subjects (each as applicable) of any breach of customer or personal data within 72 hours of incident declaration. **The notification process upon a declared security or privacy incident will occur as expeditiously as possible while still considering the security risks of proceeding quickly**. In practice, this approach means that most notifications will take place well before the 72-hr deadline to which Microsoft commits contractually. Notification of a security or privacy incident will be delivered to one or more of your administrators by any means Microsoft selects, including via email. You should [provide security contact details](../security-center/security-center-provide-security-contact-details.md) for your Azure subscription – this information will be used by Microsoft to contact you if the MSRC discovers that your data has been exposed or accessed by an unlawful or unauthorized party. To ensure that notification can be delivered successfully, it is your responsibility to maintain correct administrative contact information for each applicable subscription.
-Most Azure security and privacy investigations do not result in declared security incidents. Most external threats do not lead to breaches of customer or personal data because of extensive platform security measures that Microsoft has in place. Microsoft has deployed extensive monitoring and diagnostics infrastructure throughout Azure that relies on big-data analytics and machine learning to get insight into the platform health, including real-time threat intelligence. While Microsoft takes all platform attacks seriously, it would be impractical to notify customers of potential attacks at the platform level.
+Most Azure security and privacy investigations do not result in declared security incidents. Most external threats do not lead to breaches of your data because of extensive platform security measures that Microsoft has in place. Microsoft has deployed extensive monitoring and diagnostics infrastructure throughout Azure that relies on big-data analytics and machine learning to get insight into the platform health, including real-time threat intelligence. While Microsoft takes all platform attacks seriously, it would be impractical to notify you of *potential* attacks at the platform level.
Aside from controls implemented by Microsoft to safeguard customer data, government customers deployed on Azure derive considerable benefits from security research that Microsoft conducts to protect the cloud platform. Microsoft global threat intelligence is one of the largest in the industry, and it is derived from one of the most diverse sets of threat telemetry sources. It is both the volume and diversity of threat telemetry that makes Microsoft machine learning algorithms applied to that telemetry so powerful. All Azure customers benefit directly from these investments as described in the next section.
The Microsoft [Graph Security API](https://www.microsoft.com/security/business/g
:::image type="content" source="./media/wwps-graph.png" alt-text="Microsoft global threat intelligence is one of the largest in the industry" border="false"::: **Figure 3.** Microsoft global threat intelligence is one of the largest in the industry
-The Microsoft Graph Security API provides an unparalleled view into the evolving threat landscape and enables rapid innovation to detect and respond to threats. Machine learning models and artificial intelligence reason over vast security signals to identify vulnerabilities and threats. The Microsoft Graph Security API provides a common gateway to [share and act on security insights](/graph/security-concept-overview) across the Microsoft platform and partner solutions. Azure customers benefit directly from the Microsoft Graph Security API as Microsoft makes the vast threat telemetry and advanced analytics [available in Microsoft online services](/graph/api/resources/security-api-overview), including Azure Security Center. These services can help customers address their own security requirements in the cloud.
+The Microsoft Graph Security API provides an unparalleled view into the evolving threat landscape and enables rapid innovation to detect and respond to threats. Machine learning models and artificial intelligence reason over vast security signals to identify vulnerabilities and threats. The Microsoft Graph Security API provides a common gateway to [share and act on security insights](/graph/security-concept-overview) across the Microsoft platform and partner solutions. You benefit directly from the Microsoft Graph Security API as Microsoft makes the vast threat telemetry and advanced analytics [available in Microsoft online services](/graph/api/resources/security-api-overview), including Azure Security Center. These services can help you address your own security requirements in the cloud.
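As a minimal sketch, the following Python example reads recent alerts through the Microsoft Graph Security API. It assumes an Azure AD app registration that has been granted the SecurityEvents.Read.All application permission with admin consent; the tenant ID, client ID, and secret are placeholders.

```python
# Minimal sketch: list recent security alerts through the Microsoft Graph Security API.
# Assumes an Azure AD app registration with admin-consented SecurityEvents.Read.All
# application permission; the tenant ID, client ID, and secret below are placeholders.
# Requires: pip install azure-identity requests
import requests
from azure.identity import ClientSecretCredential

credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<app-client-id>",
    client_secret="<app-client-secret>",
)
token = credential.get_token("https://graph.microsoft.com/.default").token

response = requests.get(
    "https://graph.microsoft.com/v1.0/security/alerts?$top=5",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
response.raise_for_status()
for alert in response.json().get("value", []):
    print(alert.get("severity"), "-", alert.get("title"))
```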
-Microsoft has implemented extensive protections for the Azure cloud platform and made available a wide range of Azure services to help customers monitor and protect their provisioned cloud resources from attacks. Nonetheless, for certain types of workloads and data classifications, government customers expect to have full operational control over their environment and even operate in a fully disconnected mode. The Azure Stack portfolio of products enables customers to provision private and hybrid cloud deployment models that can accommodate highly sensitive data, as described in the next section.
+Microsoft has implemented extensive protections for the Azure cloud platform and made available a wide range of Azure services to help you monitor and protect your provisioned cloud resources from attacks. Nonetheless, for certain types of workloads and data classifications, government customers expect to have full operational control over their environment and even operate in a fully disconnected mode. The Azure Stack portfolio of products enables you to provision private and hybrid cloud deployment models that can accommodate highly sensitive data, as described in the next section.
## Private and hybrid cloud with Azure Stack
-[Azure Stack](https://azure.microsoft.com/overview/azure-stack/) portfolio is an extension of Azure that enables customers to build and run hybrid applications across on-premises, edge locations, and cloud. As shown in Figure 4, Azure Stack includes Azure Stack Hyperconverged Infrastructure (HCI), Azure Stack Hub (previously Azure Stack), and Azure Stack Edge (previously Azure Data Box Edge). The last two components (Azure Stack Hub and Azure Stack Edge) are discussed in this section. For more information, see [Differences between global Azure, Azure Stack Hub, and Azure Stack HCI](/azure-stack/operator/compare-azure-azure-stack).
+[Azure Stack](https://azure.microsoft.com/overview/azure-stack/) portfolio is an extension of Azure that enables you to build and run hybrid applications across on-premises, edge locations, and cloud. As shown in Figure 4, Azure Stack includes Azure Stack Hyperconverged Infrastructure (HCI), Azure Stack Hub (previously Azure Stack), and Azure Stack Edge (previously Azure Data Box Edge). The last two components (Azure Stack Hub and Azure Stack Edge) are discussed in this section. For more information, see [Differences between global Azure, Azure Stack Hub, and Azure Stack HCI](/azure-stack/operator/compare-azure-azure-stack).
:::image type="content" source="./media/wwps-azure-stack-portfolio.jpg" alt-text="Azure Stack portfolio" border="false"::: **Figure 4.** Azure Stack portfolio
-Azure Stack Hub and Azure Stack Edge represent key enabling technologies that allow customers to process highly sensitive data using a private or hybrid cloud and pursue digital transformation using Microsoft [intelligent cloud and intelligent edge](https://azure.microsoft.com/overview/future-of-cloud/) approach. For many government customers, enforcing data sovereignty, addressing custom compliance requirements, and applying maximum available protection to highly sensitive data are the primary driving factors behind these efforts.
+Azure Stack Hub and Azure Stack Edge represent key enabling technologies that allow you to process highly sensitive data using a private or hybrid cloud and pursue digital transformation using the Microsoft [intelligent cloud and intelligent edge](https://azure.microsoft.com/overview/future-of-cloud/) approach. For many government customers, enforcing data sovereignty, addressing custom compliance requirements, and applying maximum available protection to highly sensitive data are the primary driving factors behind these efforts.
### Azure Stack Hub
-[Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/) (formerly Azure Stack) is an integrated system of software and validated hardware that customers can purchase from Microsoft hardware partners, deploy in their own data center, and then operate entirely on their own or with the help from a managed service provider. With Azure Stack Hub, the customer is always fully in control of access to their data. Azure Stack Hub can accommodate up to [16 physical servers per Azure Stack Hub scale unit](/azure-stack/operator/azure-stack-overview). It represents an extension of Azure, enabling customers to provision various IaaS and PaaS services and effectively bring multi-tenant cloud technology to on-premises and edge environments. Customers can run many types of VM instances, App Services, Containers (including Cognitive Services containers), Functions, Azure Monitor, Key Vault, Event Hubs, and other services while using the same development tools, APIs, and management processes they use in Azure. Azure Stack Hub is not dependent on connectivity to Azure to run deployed applications and enable operations via local connectivity.
+[Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/) (formerly Azure Stack) is an integrated system of software and validated hardware that you can purchase from Microsoft hardware partners, deploy in your own data center, and then operate entirely on your own or with help from a managed service provider. With Azure Stack Hub, you are always fully in control of access to your data. Azure Stack Hub can accommodate up to [16 physical servers per Azure Stack Hub scale unit](/azure-stack/operator/azure-stack-overview). It represents an extension of Azure, enabling you to provision various IaaS and PaaS services and effectively bring multi-tenant cloud technology to on-premises and edge environments. You can run many types of VM instances, App Services, Containers (including Cognitive Services containers), Functions, Azure Monitor, Key Vault, Event Hubs, and other services while using the same development tools, APIs, and management processes you use in Azure. Azure Stack Hub is not dependent on connectivity to Azure to run deployed applications and enable operations via local connectivity.
+
+In addition to Azure Stack Hub, which is intended for on-premises deployment (for example, in a data center), a ruggedized and field-deployable version called [Tactical Azure Stack Hub](https://www.delltechnologies.com/en-us/collaterals/unauth/data-sheets/products/converged-infrastructure/dell-emc-integrated-system-for-azure-stack-hub-tactical-spec-sheet.pdf) is also available to address tactical edge deployments for limited or no connectivity, fully mobile requirements, and harsh conditions requiring military specification solutions.
-In addition to Azure Stack Hub, which is intended for on-premises deployment (for example, in a data center), a ruggedized and field-deployable version called [Tactical Azure Stack Hub](https://www.delltechnologies.com/en-us/collaterals/unauth/data-sheets/products/converged-infrastructure/dell-emc-integrated-system-for-azure-stack-hub-tactical-spec-sheet.pdf) is also available to address tactical edge deployments for limited or no connectivity, fully mobile requirements, and harsh conditions requiring military specification solutions. Azure Stack Hub can be [operated disconnected](/azure-stack/operator/azure-stack-disconnected-deployment) from Azure or the Internet. Customers can run the next generation of AI-enabled hybrid applications where their data lives. For example, government agencies can rely on Azure Stack Hub to bring a trained AI model to the edge and integrate it with their applications for low-latency intelligence, with no tool or process changes for local applications.
+Azure Stack Hub can be [operated disconnected](/azure-stack/operator/azure-stack-disconnected-deployment) from Azure or the Internet. You can run the next generation of AI-enabled hybrid applications where your data lives. For example, you can rely on Azure Stack Hub to bring a trained AI model to the edge and integrate it with your applications for low-latency intelligence, with no tool or process changes for local applications.
-Azure and Azure Stack Hub can help government customers unlock new hybrid use cases for customer-facing and internal line-of-business application, including edge and disconnected scenarios, cloud applications intended to meet data sovereignty and compliance requirements, and cloud applications deployed on-premises in customer data center. These use cases may include mobile scenarios or fixed deployments within highly secure data center facilities. Figure 5 shows Azure Stack Hub capabilities and key usage scenarios.
+Azure and Azure Stack Hub can help you unlock new hybrid use cases for externally facing or internally deployed line-of-business applications, including edge and disconnected scenarios, cloud applications intended to meet data sovereignty and compliance requirements, and cloud applications deployed on-premises in your data center. These use cases may include mobile scenarios or fixed deployments within highly secure data center facilities. Figure 5 shows Azure Stack Hub capabilities and key usage scenarios.
:::image type="content" source="./media/wwps-azure-stack-hub.png" alt-text="Azure Stack Hub capabilities" border="false"::: **Figure 5.** Azure Stack Hub capabilities
Azure Stack Hub brings the following [value proposition for key scenarios](/azure-stack/operator/azure-stack-overview) shown in Figure 5:
-- **Edge and disconnected solutions:** Address latency and connectivity requirements by processing data locally in Azure Stack Hub and then aggregating in Azure for further analytics, with common application logic across both, connected or disconnected. Aircraft, ship, or truck-delivered, Azure Stack Hub meets the tough demands of exploration, construction, agriculture, oil and gas, manufacturing, disaster response, government, and military efforts in the most extreme conditions and remote locations. Government customers can use Azure Stack Hub architecture for [edge and disconnected solutions](/azure/architecture/solution-ideas/articles/ai-at-the-edge-disconnected), for example, bring the next generation of AI-enabled hybrid applications to the edge where the data lives and integrate it with existing applications for low-latency intelligence.
-- **Cloud applications to meet data sovereignty:** Deploy a single application differently depending on the country or region. Customers can develop and deploy applications in Azure, with full flexibility to deploy on-premises with Azure Stack Hub based on the need to meet data sovereignty or custom compliance requirements. Customers can use Azure Stack Hub architecture for [data sovereignty](/azure/architecture/solution-ideas/articles/data-sovereignty-and-gravity), for example, transmit data from Azure VNet to Azure Stack Hub VNet over private connection and ultimately store data in SQL Server database running in a VM on Azure Stack Hub. Government customers can use Azure Stack Hub to accommodate even more restrictive requirements such as the need to deploy solutions in a disconnected environment managed by security-cleared, in-country personnel. These disconnected environments may not be permitted to connect to the Internet for any purpose because of the security classification they operate at.
-- **Cloud application model on-premises:** Use Azure Stack Hub to update and extend legacy applications and make them cloud ready. With App Service on Azure Stack Hub, customers can create a web front end to consume modern APIs with modern clients while taking advantage of consistent programming models and skills. Customers can use Azure Stack Hub architecture for [legacy system modernization](/azure/architecture/solution-ideas/articles/unlock-legacy-data), for example, apply a consistent DevOps process, Azure Web Apps, containers, serverless computing, and microservices architectures to modernize legacy applications while integrating and preserving legacy data in mainframe and core line-of-business systems.
+- **Edge and disconnected solutions:** Address latency and connectivity requirements by processing data locally in Azure Stack Hub and then aggregating in Azure for further analytics, with common application logic across both, connected or disconnected. Aircraft, ship, or truck-delivered, Azure Stack Hub meets the tough demands of exploration, construction, agriculture, oil and gas, manufacturing, disaster response, government, and military efforts in the most extreme conditions and remote locations. For example, with Azure Stack Hub architecture for [edge and disconnected solutions](/azure/architecture/solution-ideas/articles/ai-at-the-edge-disconnected), you can bring the next generation of AI-enabled hybrid applications to the edge where the data lives and integrate it with existing applications for low-latency intelligence.
+- **Cloud applications to meet data sovereignty:** Deploy a single application differently depending on the country or region. You can develop and deploy applications in Azure, with full flexibility to deploy on-premises with Azure Stack Hub based on the need to meet data sovereignty or custom compliance requirements. For example, with Azure Stack Hub architecture for [data sovereignty](/azure/architecture/solution-ideas/articles/data-sovereignty-and-gravity), you can transmit data from an Azure VNet to an Azure Stack Hub VNet over a private connection and ultimately store data in a SQL Server database running in a VM on Azure Stack Hub. You can use Azure Stack Hub to accommodate even more restrictive requirements such as the need to deploy solutions in a disconnected environment managed by security-cleared, in-country personnel. These disconnected environments may not be permitted to connect to the Internet for any purpose because of the security classification they operate at.
+- **Cloud application model on-premises:** Use Azure Stack Hub to update and extend legacy applications and make them cloud ready. With App Service on Azure Stack Hub, you can create a web front end to consume modern APIs with modern clients while taking advantage of consistent programming models and skills. For example, with Azure Stack Hub architecture for [legacy system modernization](/azure/architecture/solution-ideas/articles/unlock-legacy-data), you can apply a consistent DevOps process, Azure Web Apps, containers, serverless computing, and microservices architectures to modernize legacy applications while integrating and preserving legacy data in mainframe and core line-of-business systems.
-Azure Stack Hub requires Azure Active Directory (Azure AD) or Active Directory Federation Services, backed by Active Directory as an [identity provider](/azure-stack/operator/azure-stack-identity-overview). Customers can use [role-based access control](/azure-stack/user/azure-stack-manage-permissions) (RBAC) to grant system access to authorized users, groups, and services by assigning them roles at a subscription, resource group, or individual resource level. Each role defines the access level a user, group, or service has over Azure Stack Hub resources.
+Azure Stack Hub requires Azure Active Directory (Azure AD) or Active Directory Federation Services (ADFS), backed by Active Directory as an [identity provider](/azure-stack/operator/azure-stack-identity-overview). You can use [role-based access control](/azure-stack/user/azure-stack-manage-permissions) (RBAC) to grant system access to authorized users, groups, and services by assigning them roles at a subscription, resource group, or individual resource level. Each role defines the access level a user, group, or service has over Azure Stack Hub resources.
-Azure Stack Hub protects customer data at the storage subsystem level using [encryption at rest](/azure-stack/operator/azure-stack-security-bitlocker). By default, Azure Stack Hub's storage subsystem is encrypted using BitLocker with 128-bit AES encryption. BitLocker keys are persisted in an internal secret store. At deployment time, it is also possible to configure BitLocker to use 256-bit AES encryption. Customers can store and manage their secrets including cryptographic keys using [Key Vault in Azure Stack Hub](/azure-stack/user/azure-stack-key-vault-intro).
+Azure Stack Hub protects your data at the storage subsystem level using [encryption at rest](/azure-stack/operator/azure-stack-security-bitlocker). By default, Azure Stack Hub's storage subsystem is encrypted using BitLocker with 128-bit AES encryption. BitLocker keys are persisted in an internal secret store. At deployment time, it is also possible to configure BitLocker to use 256-bit AES encryption. You can store and manage your secrets including cryptographic keys using [Key Vault in Azure Stack Hub](/azure-stack/user/azure-stack-key-vault-intro).
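Because Key Vault in Azure Stack Hub exposes the same data-plane API as Key Vault in Azure, the standard SDK pattern applies. The following minimal sketch stores and retrieves a secret; the vault URI is a placeholder, and on Azure Stack Hub the vault DNS suffix and authentication endpoint depend on your deployment.

```python
# Minimal sketch: store and retrieve a secret with the Key Vault secrets SDK.
# The vault URI is a placeholder; on Azure Stack Hub, substitute the vault URI and
# authentication endpoint shown for your deployment.
# Requires: pip install azure-identity azure-keyvault-secrets
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(
    vault_url="https://contoso-vault.vault.azure.net",  # placeholder vault URI
    credential=DefaultAzureCredential(),
)

client.set_secret("sql-connection-string", "Server=tcp:contoso.database.windows.net;")  # store
retrieved = client.get_secret("sql-connection-string")                                   # read back
print(retrieved.name, "retrieved; value length:", len(retrieved.value))
```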
### Azure Stack Edge
-[Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/) (formerly Azure Data Box Edge) is an AI-enabled edge computing device with network data transfer capabilities. It enables customers to pre-process data at the edge and move data to Azure efficiently. Azure Stack Edge uses advanced Field-Programmable Gate Array (FPGA) hardware natively integrated into the appliance to run machine learning algorithms at the edge efficiently. The size and portability allow customers to run Azure Stack Edge as close to users, apps, and data as needed. Figure 6 shows Azure Stack Edge capabilities and key use cases.
+[Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/) (formerly Azure Data Box Edge) is an AI-enabled edge computing device with network data transfer capabilities. The latest generation of these devices relies on a built-in Graphics Processing Unit (GPU) to enable accelerated AI inferencing. Azure Stack Edge uses GPU hardware natively integrated into the appliance to run machine learning algorithms at the edge efficiently. The size and portability allow you to run Azure Stack Edge as close to your users, apps, and data as needed. Figure 6 shows Azure Stack Edge capabilities and key use cases.
:::image type="content" source="./media/wwps-azure-stack-edge.png" alt-text="Azure Stack Edge capabilities" border="false"::: **Figure 6.** Azure Stack Edge capabilities
-Azure Stack Edge brings the following [value proposition for key use cases](../databox-online/azure-stack-edge-overview.md#use-cases) shown in Figure 6:
+Azure Stack Edge brings the following [value proposition for key use cases](../databox-online/azure-stack-edge-gpu-overview.md#use-cases) shown in Figure 6:
+- **Inference with Azure Machine Learning:** Inference is a part of deep learning that takes place after model training, such as the prediction stage resulting from applying learned capability to new data. For example, it's the part that recognizes a vehicle in a target image after the model has been trained by processing many tagged vehicle images, often augmented by computer synthesized images (also known as synthetics). With Azure Stack Edge, you can run Machine Learning (ML) models to get results quickly and act on them before the data is sent to the cloud. The necessary subset of data (in case of bandwidth constraints) or the full data set is transferred to the cloud to continue to retrain and improve your ML models (see the inference sketch after this list).
- **Preprocess data:** Analyze data from on-premises or IoT devices to quickly obtain results while staying close to where data is generated. Azure Stack Edge transfers the full data set (or just the necessary subset of data when bandwidth is an issue) to the cloud to perform more advanced processing or deeper analytics. Preprocessing can be used to aggregate data, modify data (for example, remove personally identifiable information or other sensitive data), transfer data needed for deeper analytics in the cloud, and analyze and react to IoT events.
-- **Inference Azure Machine Learning:** Inference is a part of deep learning that takes place after model training, such as the prediction stage resulting from applying learned capability to new data. For example, it's the part that recognizes a vehicle in a target image after the model has been trained by processing many tagged vehicle images, often augmented by computer synthesized images (also known as synthetics). With Azure Stack Edge, customers can run Machine Learning (ML) models to get results quickly and act on them before the data is sent to the cloud. The necessary subset of data (in case of bandwidth constraints) or the full data set is transferred to the cloud to continue to retrain and improve customer's ML models.
- **Transfer data over network to Azure:** Use Azure Stack Edge to transfer data to Azure to enable further compute and analytics or for archival purposes. Being able to gather, discern, and distribute mission data is essential for making critical decisions. Tools that help process and transfer data directly at the edge make this capability possible. For example, Azure Stack Edge, with its light footprint and built-in hardware acceleration for ML inferencing, is useful to further the intelligence of forward-operating units or similar mission needs with AI solutions designed for the tactical edge. Data transfer from the field, which is traditionally complex and slow, is made seamless with the [Azure Data Box](https://azure.microsoft.com/services/databox/) family of products.
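To make the inference use case concrete, the following generic Python sketch runs a model locally with ONNX Runtime. The model file, input shape, and frame source are placeholders, and on Azure Stack Edge this logic would typically run inside a container or module deployed to the device.

```python
# Generic sketch of local ML inferencing at the edge with ONNX Runtime. The model
# file and input shape are placeholders; on Azure Stack Edge this logic would
# typically run inside a container or module deployed to the device.
# Requires: pip install onnxruntime numpy
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("vehicle_classifier.onnx")  # hypothetical model file
input_name = session.get_inputs()[0].name

# Placeholder input; in practice this would be a preprocessed frame from a local
# camera or sensor feed, shaped to match the model (here assumed 1 x 3 x 224 x 224).
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)

outputs = session.run(None, {input_name: frame})           # inference runs locally
print("Predicted class index:", int(np.argmax(outputs[0])))
# Only the results, or a selected subset of the raw data, need to be sent to Azure.
```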
-These products unite the best of edge and cloud computing to unlock never-before-possible capabilities like synthetic mapping and ML model inferencing. From submarines to aircraft to remote bases, Azure Stack Hub and Azure Stack Edge allow customers to harness the power of cloud at the edge.
+These products unite the best of edge and cloud computing to unlock never-before-possible capabilities like synthetic mapping and ML model inferencing. From submarines to aircraft to remote bases, Azure Stack Hub and Azure Stack Edge allow you to harness the power of cloud at the edge.
-Using Azure in combination with Azure Stack Hub and Azure Stack Edge, government customers can process confidential and sensitive data in a secure isolated infrastructure within the Azure public multi-tenant cloud or highly sensitive data at the edge under the customer's full operational control. The next section describes a conceptual architecture for classified workloads.
+Using Azure in combination with Azure Stack Hub and Azure Stack Edge, you can process confidential and sensitive data in a secure isolated infrastructure within the Azure public multi-tenant cloud or highly sensitive data at the edge under your full operational control. The next section describes a conceptual architecture for classified workloads.
## Conceptual architecture
-Figure 7 shows a conceptual architecture using products and services that support various data classifications. Azure public multi-tenant cloud is the underlying cloud platform that makes this solution possible. Customers can augment Azure with on-premises and edge products such as Azure Stack Hub and Azure Stack Edge to accommodate critical workloads over which customers seek increased or exclusive operational control. For example, Azure Stack Hub is intended for on-premises deployment in a customer-owned data center where the customer has full control over service connectivity. Moreover, Azure Stack Hub can be deployed to address tactical edge deployments for limited or no connectivity, including fully mobile scenarios.
+Figure 7 shows a conceptual architecture using products and services that support various data classifications. Azure public multi-tenant cloud is the underlying cloud platform that makes this architecture possible. You can augment Azure with on-premises and edge products such as Azure Stack Hub and Azure Stack Edge to accommodate critical workloads over which you seek increased or exclusive operational control. For example, Azure Stack Hub is intended for on-premises deployment in your data center where you have full control over service connectivity. Moreover, Azure Stack Hub can be deployed to address tactical edge deployments for limited or no connectivity, including fully mobile scenarios.
:::image type="content" source="./media/wwps-architecture.png" alt-text="Conceptual architecture for classified workloads" border="false"::: **Figure 7.** Conceptual architecture for classified workloads
-For classified workloads, customers can provision key enabling Azure services to secure target workloads while mitigating identified risks. Azure, in combination with [Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/) and [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/), can accommodate private and hybrid cloud deployment models, making them suitable for many government workloads involving both unclassified and classified data. The following data classification taxonomy is used in this article:
+For classified workloads, you can provision key enabling Azure services to secure target workloads while mitigating identified risks. Azure, in combination with [Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/) and [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/), can accommodate private and hybrid cloud deployment models, making them suitable for many government workloads involving both unclassified and classified data. The following data classification taxonomy is used in this article:
- Confidential
- Secret
Similar data classification schemes exist in many countries.
-For top secret data, customers can deploy Azure Stack Hub, which can operate fully disconnected from Azure and the Internet.
-[Tactical Azure Stack Hub](https://www.delltechnologies.com/en-us/collaterals/unauth/data-sheets/products/converged-infrastructure/dell-emc-integrated-system-for-azure-stack-hub-tactical-spec-sheet.pdf) is also available to address tactical edge deployments for limited or no connectivity, fully mobile requirements, and harsh conditions requiring military specification solutions. Figure 8 depicts key enabling services that customers can provision to accommodate various workloads on Azure.
+For top secret data, you can deploy Azure Stack Hub, which can operate disconnected from Azure and the Internet.
+[Tactical Azure Stack Hub](https://www.delltechnologies.com/en-us/collaterals/unauth/data-sheets/products/converged-infrastructure/dell-emc-integrated-system-for-azure-stack-hub-tactical-spec-sheet.pdf) is also available to address tactical edge deployments for limited or no connectivity, fully mobile requirements, and harsh conditions requiring military specification solutions. Figure 8 depicts key enabling services that you can provision to accommodate various workloads on Azure.
:::image type="content" source="./media/wwps-data-classifications.png" alt-text="Azure support for various data classifications" border="false"::: **Figure 8.** Azure support for various data classifications ### Confidential data
-Listed below are key enabling technologies and services that customers may find helpful when deploying confidential data and workloads on Azure:
+Listed below are key enabling technologies and services that you may find helpful when deploying confidential data and workloads on Azure:
- All recommended technologies used for Unclassified data, especially services such as [Virtual Network](../virtual-network/virtual-networks-overview.md) (VNet), [Azure Security Center](../security-center/index.yml), and [Azure Monitor](../azure-monitor/index.yml).
- Public IP addresses are disabled allowing only traffic through private connections, including [ExpressRoute](../expressroute/index.yml) and [Virtual Private Network](../vpn-gateway/index.yml) (VPN) gateway.
-- Data encryption is recommended with customer-managed keys (CMK) in [Azure Key Vault](../key-vault/index.yml) backed by multi-tenant hardware security modules (HSMs) that have FIPS 140-2 Level 2 validation.
-- Only services that support [VNet integration](../virtual-network/virtual-network-for-azure-services.md) options are enabled. Azure VNet enables customers to place Azure resources in a non-internet routable network, which can then be connected to customer's on-premises network using VPN technologies. VNet integration gives web apps access to resources in the virtual network.
-- Customers can use [Azure Private Link](../private-link/index.yml) to access Azure PaaS services over a private endpoint in their VNet, ensuring that traffic between their VNet and the service travels across the Microsoft global backbone network, which eliminates the need to expose the service to the public Internet.
-- [Customer Lockbox](../security/fundamentals/customer-lockbox-overview.md) for Azure enables customers to approve/deny elevated access requests for customer data in support scenarios. It's an extension of the Just-in-Time (JIT) workflow that comes with full audit logging enabled.
+- Data encryption is recommended with customer-managed keys (CMK) in [Azure Key Vault](../key-vault/index.yml) backed by multi-tenant hardware security modules (HSMs) that have FIPS 140 Level 2 validation. A minimal key-creation sketch follows this list.
+- Only services that support [VNet integration](../virtual-network/virtual-network-for-azure-services.md) options are enabled. Azure VNet enables you to place Azure resources in a non-internet routable network, which can then be connected to your on-premises network using VPN technologies. VNet integration gives web apps access to resources in the virtual network.
+- You can use [Azure Private Link](../private-link/index.yml) to access Azure PaaS services over a private endpoint in your VNet, ensuring that traffic between your VNet and the service travels across the Microsoft global backbone network, which eliminates the need to expose the service to the public Internet.
+- [Customer Lockbox](../security/fundamentals/customer-lockbox-overview.md) for Azure enables you to approve/deny elevated access requests for your data in support scenarios. It's an extension of the Just-in-Time (JIT) workflow that comes with full audit logging enabled.
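To make the customer-managed key (CMK) bullet above concrete, here is a hedged sketch that creates an RSA key in Azure Key Vault with the Azure SDK for Python. The vault name and key name are placeholders, and it assumes the signed-in identity already has key-management permissions on the vault.

```python
# Hedged sketch: create a customer-managed key (CMK) in Azure Key Vault.
# <your-vault-name> and "storage-cmk" are placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

credential = DefaultAzureCredential()
client = KeyClient(vault_url="https://<your-vault-name>.vault.azure.net", credential=credential)

cmk = client.create_rsa_key("storage-cmk", size=3072)  # RSA key to configure as the CMK for a service such as Azure Storage
print(cmk.id)  # reference this key identifier when enabling CMK encryption on the target service
```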
-Using Azure public multi-tenant cloud capabilities, customers can achieve the level of [isolation and security](./azure-secure-isolation-guidance.md) required to store confidential data. Customers should use Azure Security Center and Azure Monitor to gain visibility into their Azure environments including the security posture of their subscriptions.
+Using Azure public multi-tenant cloud capabilities, you can achieve the level of [isolation and security](./azure-secure-isolation-guidance.md) required to store confidential data. You should use Azure Security Center and Azure Monitor to gain visibility into your Azure environments including the security posture of your subscriptions.
### Secret data
-Listed below are key enabling technologies and services that customers may find helpful when deploying secret data and workloads on Azure:
+Listed below are key enabling technologies and services that you may find helpful when deploying secret data and workloads on Azure:
- All recommended technologies used for confidential data.
-- Use Azure Key Vault [Managed HSM](../key-vault/managed-hsm/overview.md), which provides a fully managed, highly available, single-tenant HSM as a service that uses FIPS 140-2 Level 3 validated HSMs. Each Managed HSM instance is bound to a separate security domain controlled by the customer and isolated cryptographically from instances belonging to other customers.
-- [Azure Dedicated Host](https://azure.microsoft.com/services/virtual-machines/dedicated-host/) provides physical servers that can host one or more Azure VMs and are dedicated to one Azure subscription. Customers can provision dedicated hosts within a region, availability zone, and fault domain. They can then place VMs directly into provisioned hosts using whatever configuration best meets their needs. Dedicated Host provides hardware isolation at the physical server level, enabling customers to place their Azure VMs on an isolated and dedicated physical server that runs only their organization's workloads to meet corporate compliance requirements.
-- Accelerated FPGA networking based on [Azure SmartNICs](https://www.microsoft.com/research/publication/azure-accelerated-networking-smartnics-public-cloud/) enables customers to offload host networking to dedicated hardware, enabling tunneling for VNets, security, and load balancing. Offloading network traffic to a dedicated chip guards against side-channel attacks on the main CPU.
-- [Azure confidential computing](../confidential-computing/index.yml) offers encryption of data while in use, ensuring that data is always under customer control. Data is protected inside a hardware-based trusted execution environment (TEE, also known as enclave) and there is no way to view data or operations from outside the enclave.
-- [Just-in-time (JIT) virtual machine (VM) access](../security-center/security-center-just-in-time.md) can be used to lock down inbound traffic to Azure VMs by creating network security group (NSG) rules. Customers select ports on the VM to which inbound traffic will be locked down and when a user requests access to a VM, Azure Security Center checks that the user has proper role-based access control (RBAC) permissions.
+- Use Azure Key Vault [Managed HSM](../key-vault/managed-hsm/overview.md), which provides a fully managed, highly available, single-tenant HSM as a service that uses FIPS 140 Level 3 validated HSMs. Each Managed HSM instance is bound to a separate security domain controlled by you and isolated cryptographically from instances belonging to other customers. A minimal Managed HSM key-creation sketch follows this list.
+- [Azure Dedicated Host](https://azure.microsoft.com/services/virtual-machines/dedicated-host/) provides physical servers that can host one or more Azure VMs and are dedicated to one Azure subscription. You can provision dedicated hosts within a region, availability zone, and fault domain. You can then place VMs directly into provisioned hosts using whatever configuration best meets your needs. Dedicated Host provides hardware isolation at the physical server level, enabling you to place your Azure VMs on an isolated and dedicated physical server that runs only your organization's workloads to meet corporate compliance requirements.
+- Accelerated FPGA networking based on [Azure SmartNICs](https://www.microsoft.com/research/publication/azure-accelerated-networking-smartnics-public-cloud/) enables you to offload host networking to dedicated hardware, enabling tunneling for VNets, security, and load balancing. Offloading network traffic to a dedicated chip guards against side-channel attacks on the main CPU.
+- [Azure confidential computing](../confidential-computing/index.yml) offers encryption of data while in use, ensuring that data is always under your control. Data is protected inside a hardware-based trusted execution environment (TEE, also known as enclave) and there is no way to view data or operations from outside the enclave.
+- [Just-in-time (JIT) virtual machine (VM) access](../security-center/security-center-just-in-time.md) can be used to lock down inbound traffic to Azure VMs by creating network security group (NSG) rules. You select ports on the VM to which inbound traffic will be locked down and when a user requests access to a VM, Azure Security Center checks that the user has proper role-based access control (RBAC) permissions.
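For the Managed HSM bullet above, the following hedged sketch shows the analogous key-creation flow pointed at a Managed HSM data-plane endpoint, where keys are generated and kept inside the single-tenant, FIPS 140 Level 3 validated HSM boundary. The HSM name and key name are placeholders, and it assumes your identity holds an appropriate Managed HSM data-plane role.

```python
# Hedged sketch: create a key in Azure Key Vault Managed HSM (single-tenant HSM).
# <your-hsm-name> and "workload-key" are placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

credential = DefaultAzureCredential()
hsm_client = KeyClient(vault_url="https://<your-hsm-name>.managedhsm.azure.net", credential=credential)

key = hsm_client.create_rsa_key("workload-key", size=3072)  # key material is generated and stays inside the HSM boundary
print(key.id)
```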
-To accommodate secret data in the Azure public multi-tenant cloud, customers can deploy extra technologies and services on top of those technologies used for confidential data and limit provisioned services to those services that provide sufficient isolation. These services offer various isolation options at run time. They also support data encryption at rest using customer-managed keys in single-tenant HSMs controlled by the customer and isolated cryptographically from HSM instances belonging to other customers.
+To accommodate secret data in the Azure public multi-tenant cloud, you can deploy extra technologies and services on top of those technologies used for confidential data and limit provisioned services to those services that provide sufficient isolation. These services offer various isolation options at run time. They also support data encryption at rest using customer-managed keys in single-tenant HSMs controlled by you and isolated cryptographically from HSM instances belonging to other customers.
### Top secret data
-Listed below are key enabling products that customers may find helpful when deploying top secret data and workloads on Azure:
+Listed below are key enabling products that you may find helpful when deploying top secret data and workloads on Azure:
- All recommended technologies used for secret data.
-- [Azure Stack Hub](/azure-stack/operator/azure-stack-overview) (formerly Azure Stack) enables customers to run workloads using the same architecture and APIs as in Azure while having a physically isolated network for their highest classification data.
-- [Azure Stack Edge](../databox-online/azure-stack-edge-overview.md) (formerly Azure Data Box Edge) allows the storage and processing of highest classification data but also enables customers to upload resulting information or models directly to Azure. This approach creates a path for information sharing between domains that makes it easier and more secure.
+- [Azure Stack Hub](/azure-stack/operator/azure-stack-overview) (formerly Azure Stack) enables you to run workloads using the same architecture and APIs as in Azure while having a physically isolated network for your highest classification data.
+- [Azure Stack Edge](../databox-online/azure-stack-edge-gpu-overview.md) (formerly Azure Data Box Edge) allows the storage and processing of highest classification data but also enables you to upload resulting information or models directly to Azure. This approach creates a path for information sharing between domains that makes it easier and more secure.
- In addition to Azure Stack Hub, which is intended for on-premises deployment (for example, in a data center), a ruggedized and field-deployable version called [Tactical Azure Stack Hub](https://www.delltechnologies.com/en-us/collaterals/unauth/data-sheets/products/converged-infrastructure/dell-emc-integrated-system-for-azure-stack-hub-tactical-spec-sheet.pdf) is also available to address tactical edge deployments for limited or no connectivity, fully mobile requirements, and harsh conditions requiring military specification solutions.
-- User-provided hardware security modules (HSMs) allow customers to store their encryption keys and other secrets in HSMs deployed on-premises and controlled solely by customers.
+- User-provided hardware security modules (HSMs) allow you to store your encryption keys in HSMs deployed on-premises and controlled solely by you.
Accommodating top secret data will likely require a disconnected environment, which is what Azure Stack Hub provides. Azure Stack Hub can be [operated disconnected](/azure-stack/operator/azure-stack-disconnected-deployment) from Azure or the Internet. Even though "air-gapped" networks do not necessarily increase security, many governments may be reluctant to store data with this classification in an Internet connected environment.
-Azure offers an unmatched variety of public, private, and hybrid cloud deployment models to address each customer's concerns regarding the control of their data. The following section covers select use cases that might be of interest to worldwide government customers.
+Azure offers an unmatched variety of public, private, and hybrid cloud deployment models to address your concerns regarding the safeguarding of your data. The following section covers select use cases that might be of interest to worldwide government customers.
## Select workloads and use cases
-This section provides an overview of select use cases that showcase Azure capabilities for workloads that might be of interest to worldwide governments. In terms of capabilities, Azure is presented via a combination of public multi-tenant cloud and on-premises + edge capabilities provided by [Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/) and [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/).
+This section provides an overview of select use cases that showcase Azure capabilities for workloads that might be of interest to worldwide governments. In terms of capabilities, Azure is presented via a combination of public multi-tenant cloud and on-premises + edge capabilities provided by [Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/) and [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/).
### Processing highly sensitive or regulated data on Azure Stack Hub
-Microsoft provides Azure Stack Hub as an on-premises, cloud-consistent experience for customers who do not have the ability to directly connect to the Internet, or where certain workload types are required to be hosted in-country due to law, compliance, or sentiment. Azure Stack Hub offers IaaS and PaaS services and shares the same APIs as the public Azure cloud. Azure Stack Hub is available in scale units of 4, 8, and 16 servers in a single-server rack, and 4 servers in a military-specification, ruggedized set of transit cases, or multiple racks in a modular data center configuration.
+Microsoft provides Azure Stack Hub as an on-premises, cloud-consistent experience for customers who do not have the ability to connect directly to the Internet, or where certain workload types are required to be hosted in-country due to law, compliance, or sentiment. Azure Stack Hub offers IaaS and PaaS services and shares the same APIs as the public Azure cloud. Azure Stack Hub is available in scale units of 4, 8, and 16 servers in a single-server rack, and 4 servers in a military-specification, ruggedized set of transit cases, or multiple racks in a modular data center configuration.
-Azure Stack Hub is a solution for customers who operate in scenarios where:
+Azure Stack Hub is a solution if you operate in scenarios where:
-- Microsoft does not have an in-country cloud presence and therefore cannot meet data sovereignty requirements.
-- For compliance reasons, the customer cannot connect their network to the public Internet.
+- For compliance reasons, you cannot connect your network to the public Internet.
- For geo-political or security reasons, Microsoft cannot offer connectivity to other Microsoft clouds.
- For geo-political or security reasons, the host organization may require cloud management by non-Microsoft entities, or in-country by security-cleared personnel.
+- Microsoft does not have an in-country cloud presence and therefore cannot meet data sovereignty requirements.
- Cloud management would pose significant risk to the physical well-being of Microsoft resources operating the environment.
-For most of these customers, Microsoft and its partners offer a customer-managed, Azure Stack Hub-based private cloud appliance on field-deployable hardware from [major vendors](https://azure.microsoft.com/products/azure-stack/hub/#partners) such as Avanade, Cisco, Dell EMC, Hewlett Packard Enterprise, and Lenovo. Azure Stack Hub is manufactured, configured, and deployed by the hardware vendor, and can be ruggedized and security-hardened to meet a broad range of environmental and compliance standards, including the ability to withstand transport by aircraft, ship, or truck, and deployment into colocation, mobile, or modular data centers. Azure Stack Hub can be used in exploration, construction, agriculture, oil and gas, manufacturing, disaster response, government, and military efforts in hospitable or the most extreme conditions and remote locations. Azure Stack Hub allows customers the full autonomy to monitor, manage, and provision their own private cloud resources while meeting their connectivity, compliance, and ruggedization requirements.
+For most of these scenarios, Microsoft and its partners offer a customer-managed, Azure Stack Hub-based private cloud appliance on field-deployable hardware from [major vendors](https://azure.microsoft.com/products/azure-stack/hub/#partners) such as Avanade, Cisco, Dell EMC, Hewlett Packard Enterprise, and Lenovo. Azure Stack Hub is manufactured, configured, and deployed by the hardware vendor, and can be ruggedized and security-hardened to meet a broad range of environmental and compliance standards, including the ability to withstand transport by aircraft, ship, or truck, and deployment into colocation, mobile, or modular data centers. Azure Stack Hub can be used in exploration, construction, agriculture, oil and gas, manufacturing, disaster response, government, and military efforts in hospitable or the most extreme conditions and remote locations. Azure Stack Hub allows you the full autonomy to monitor, manage, and provision your own private cloud resources while meeting your connectivity, compliance, and ruggedization requirements.
### Machine learning model training
- Transparency of outcome
- Deploying closer to where data lives
-In the following sections, we expand on areas that can help government agencies with some of the above vectors.
+In the following sections, we expand on areas that can help you with some of the above vectors.
### IoT analytics
In recent years, we have been witnessing massive proliferation of Internet of Th
Governments are increasingly employing IoT devices for their missions, which could include maintenance predictions, border monitoring, weather stations, smart meters, and field operations. In many cases, the data is often analyzed and inferred from where it's gathered. The main challenges of IoT analytics are: (1) large amounts of data from independent sources, (2) analytics at the edge and often in disconnected scenarios, and (3) data and analysis aggregation.
-With innovative solutions such as [IoT Hub](https://azure.microsoft.com/services/iot-hub/) and [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/), Azure services are well positioned to help governments with these challenges.
+With innovative solutions such as [IoT Hub](https://azure.microsoft.com/services/iot-hub/) and [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/), Azure services are well positioned to help you with these challenges.
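As a minimal illustration of the device-to-cloud half of an IoT analytics pipeline, the sketch below sends a single telemetry message to IoT Hub with the azure-iot-device SDK. The device connection string and payload fields are placeholder assumptions, and a real field deployment would typically route messages through IoT Edge or Azure Stack Edge for local processing first.

```python
# Hedged sketch: send one telemetry message from a device to Azure IoT Hub.
# The connection string and payload are placeholders; the device is assumed to be registered already.
import json
from azure.iot.device import IoTHubDeviceClient, Message

CONNECTION_STRING = "<device connection string from your IoT hub>"

def send_reading(temperature_c: float) -> None:
    client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
    try:
        client.connect()
        client.send_message(Message(json.dumps({"temperature_c": temperature_c})))  # lands in IoT Hub for routing and analytics
    finally:
        client.shutdown()

if __name__ == "__main__":
    send_reading(21.5)
```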
### Precision Agriculture with Farm Beats
-Agriculture plays a vital role in most economies worldwide. In the US, over 70% of the rural households depend on agriculture as it contributes about 17% to the total GDP and provides employment to over 60% of the population. In project [Farm Beats](https://www.microsoft.com/research/project/farmbeats-iot-agriculture/), we gather numerous data from farms that we couldn't get before, and then by applying AI and ML algorithms we are able to turn this data into actionable insights for farmers. We call this technique data-driven farming. What we mean by data-driven farming is the ability to map every farm and overlay it with data. For example, what is the soil moisture level 6 inches below soil, what is the soil temperature 6 inches below soil, etc. These maps can then enable techniques, such as Precision Agriculture, which has been shown to improve yield, reduce costs, and benefit the environment. Despite the fact the Precision Agriculture as a technique was proposed more than 30 years ago, it hasn't taken off. The biggest reason is the inability to capture numerous data from farms to accurately represent the conditions in the farm. Our goal as part of the Farm Beats project is to be able to accurately construct precision maps at a fraction of the cost.
+Agriculture plays a vital role in most economies worldwide. In some countries, over 70% of rural households depend on agriculture, which contributes about 17% of GDP and employs over 60% of the population. In project [Farm Beats](https://www.microsoft.com/research/project/farmbeats-iot-agriculture/), we gather large amounts of data from farms that we couldn't get before, and then by applying AI and ML algorithms we are able to turn this data into actionable insights for farmers. We call this technique data-driven farming. What we mean by data-driven farming is the ability to map every farm and overlay it with data. For example, what is the soil moisture level 15 cm below the surface, what is the soil temperature 15 cm below the surface, and so on. These maps can then enable techniques, such as Precision Agriculture, which has been shown to improve yield, reduce costs, and benefit the environment. Although Precision Agriculture as a technique was proposed more than 30 years ago, it hasn't taken off. The biggest reason is the inability to capture enough data from farms to accurately represent the conditions in the farm. Our goal as part of the Farm Beats project is to be able to accurately construct precision maps at a fraction of the cost.
### Unleashing the power of analytics with synthetic data
For instance, captured data from the field often includes documents, pamphlets,
Security is a key driver accelerating the adoption of cloud computing, but it's also a major concern when customers are moving sensitive IP and data to the cloud.
-Microsoft Azure provides broad capabilities to secure data at rest and in transit, but sometimes the requirement is also to protect data from threats as it's being processed. Microsoft [Azure confidential computing](../confidential-computing/index.yml) is designed to address this scenario by performing computations in a hardware-based trusted execution environment (TEE, also known as enclave) based on [Intel Software Guard Extensions](https://software.intel.com/sgx) (SGX) technology. The hardware provides a protected container by securing a portion of the processor and memory. Only authorized code is permitted to run and to access data, so code and data are protected against viewing and modification from outside of TEE.
+Microsoft Azure provides broad capabilities to secure data at rest and in transit, but sometimes the requirement is also to protect data from threats as it's being processed. [Azure confidential computing](../confidential-computing/index.yml) supports two different technologies for data encryption while in use:
+
+- VMs that provide a hardware-based trusted execution environment (TEE, also known as enclave) based on [Intel Software Guard Extensions](https://software.intel.com/sgx) (SGX) technology. The hardware provides a protected container by securing a portion of the processor and memory. Only authorized code is permitted to run and to access data, so code and data are protected against viewing and modification from outside of TEE.
+- VMs based on AMD EPYC 3rd Generation CPUs for lift and shift scenarios without requiring any application code changes. These AMD EPYC CPUs make it possible to encrypt your entire virtual machine at runtime. The encryption keys used for VM encryption are generated and safeguarded by a dedicated secure processor on the EPYC CPU and cannot be extracted by any external means.
-TEEs can directly address scenarios involving data protection while in use. For example, consider the scenario where data coming from a public or unclassified source needs to be matched with data from a highly sensitive source. Azure confidential computing can enable that matching to occur in the public cloud while protecting the highly sensitive data from disclosure. This circumstance is common in highly sensitive national security and law enforcement scenarios.
+Azure confidential computing can directly address scenarios involving data protection while in use. For example, consider the scenario where data coming from a public or unclassified source needs to be matched with data from a highly sensitive source. Azure confidential computing can enable that matching to occur in the public cloud while protecting the highly sensitive data from disclosure. This circumstance is common in highly sensitive national security and law enforcement scenarios.
-A second scenario involves data coming from multiple sources that needs to be analyzed together, even though none of the sources have the authority to see the data. Each individual provider encrypts the data they provide and only within the TEE is that data decrypted. As such, no external party and even none of the providers can see the combined data set. This capability is valuable capability for secondary use of healthcare data.
+A second scenario involves data coming from multiple sources that needs to be analyzed together, even though none of the sources have the authority to see the data. Each individual provider encrypts the data they provide and only within the TEE is that data decrypted. As such, no external party and even none of the providers can see the combined data set. This capability is valuable for secondary use of healthcare data.
-Customers deploying the types of workloads discussed in this section typically seek assurances from Microsoft that the underlying cloud platform security controls for which Microsoft is responsible are operating effectively. To address the needs of customers across regulated markets worldwide, Azure maintains a comprehensive compliance portfolio based on formal third-party certifications and other types of assurances to help customers meet their own compliance obligations.
+Customers deploying the types of workloads discussed in this section typically seek assurances from Microsoft that the underlying cloud platform security controls for which Microsoft is responsible are operating effectively. To address the needs of customers across regulated markets worldwide, Azure maintains a comprehensive compliance portfolio based on formal third-party certifications and other types of assurances to help you meet your own compliance obligations.
## Compliance and certifications
-**Azure** has the broadest [compliance coverage](../compliance/index.yml) in the industry, including key independent certifications and attestations such as ISO 27001, ISO 27017, ISO 27018, ISO 22301, ISO 9001, ISO 20000-1, SOC 1/2/3, PCI DSS Level 1, PCI 3DS, HITRUST, CSA STAR Certification, CSA STAR Attestation, US FedRAMP High, Australia IRAP, Germany C5, Japan CS Gold Mark, Singapore MTCS Level 3, Spain ENS High, UK G-Cloud and Cyber Essentials Plus, and many more. Azure compliance portfolio includes more than 90 compliance offerings spanning globally applicable certifications, US Government-specific programs, industry assurances, and regional / country-specific offerings. Government customers can use these offerings when addressing their own compliance obligations across regulated industries and markets worldwide.
+**Azure** has the broadest [compliance coverage](../compliance/index.yml) in the industry, including key independent certifications and attestations such as ISO 27001, ISO 27017, ISO 27018, ISO 22301, ISO 9001, ISO 20000-1, SOC 1/2/3, PCI DSS Level 1, PCI 3DS, HITRUST, CSA STAR Certification, CSA STAR Attestation, US FedRAMP High, Australia IRAP, Germany C5, Japan ISMAP, Korea K-ISMS, Singapore MTCS Level 3, Spain ENS High, UK G-Cloud and Cyber Essentials Plus, and many more. Azure compliance portfolio includes more than 100 compliance offerings spanning globally applicable certifications, US Government-specific programs, industry assurances, and regional/country-specific offerings. You can use these offerings when addressing your own compliance obligations across regulated industries and markets worldwide.
-When deploying applications that are subject to regulatory compliance obligations on Azure, customers seek assurances that all cloud services comprising the solution are included in the cloud service provider's audit scope. Azure offers industry-leading depth of compliance coverage judged by the number of cloud services in audit scope for each Azure certification. Customers can build and deploy realistic applications and benefit from extensive compliance coverage provided by Azure independent third-party audits.
+When deploying applications that are subject to regulatory compliance obligations on Azure, customers often seek assurances that all cloud services comprising the solution are included in the cloud service provider's audit scope. Azure offers industry-leading depth of compliance coverage judged by the number of cloud services in audit scope for each Azure certification. You can build and deploy realistic applications and benefit from extensive compliance coverage provided by Azure independent third-party audits.
-**Azure Stack Hub** also provides [compliance documentation](https://aka.ms/azurestackcompliance) to help customers integrate Azure Stack Hub into solutions that address regulated workloads. Customers can download the following Azure Stack Hub compliance documents:
+**Azure Stack Hub** also provides [compliance documentation](https://aka.ms/azurestackcompliance) to help you integrate Azure Stack Hub into solutions that address regulated workloads. You can download the following Azure Stack Hub compliance documents:
- PCI DSS assessment report produced by a third-party Qualified Security Assessor (QSA).
- Cloud Security Alliance (CSA) Cloud Controls Matrix (CCM) assessment report, including Azure Stack Hub control mapping to CCM domains and controls.
-- FedRAMP High System Security Plan (SSP) precompiled template to demonstrate how Azure Stack Hub addresses applicable controls, Customer Responsibility Matrix for the FedRAMP High baseline, and FedRAMP assessment report produced by an independent Third-Party Assessor Organization (3PAO).
+- FedRAMP High System Security Plan (SSP) precompiled template to demonstrate how Azure Stack Hub addresses applicable controls, Customer Responsibility Matrix for the FedRAMP High baseline, and FedRAMP assessment report produced by an accredited third-party assessment organization (3PAO).
**[Azure Blueprints](https://azure.microsoft.com/services/blueprints/)** is a service that helps automate compliance and cybersecurity risk management in cloud environments. For more information on Azure Blueprints, including production-ready blueprint solutions for ISO 27001, NIST SP 800-53, PCI DSS, HITRUST, and other standards, see the [Azure Blueprint guidance](../governance/blueprints/overview.md).
-Azure compliance and certification resources are intended to help customers address their own compliance obligations with various regulations. Some governments across the world have already established cloud adoption mandates and the corresponding regulation to facilitate cloud onboarding. However, there are many government customers that still operate traditional on-premises datacenters and are in the process of formulating their cloud adoption strategy. Azure's extensive compliance portfolio can be of assistance to customers irrespective of their cloud adoption maturity level.
+Azure compliance and certification resources are intended to help you address your own compliance obligations with various standards and regulations. You may have an established cloud adoption mandate in your country and the corresponding regulation to facilitate cloud onboarding. Or you may still operate traditional on-premises datacenters and are in the process of formulating your cloud adoption strategy. Azure's extensive compliance portfolio can help you irrespective of your cloud adoption maturity level.
## Frequently asked questions
This section addresses common customer questions related to Azure public, privat
### Data residency and data sovereignty
- **Data location:** How does Microsoft keep data within a specific country's boundaries? In what cases does data leave? What data attributes leave? **Answer:** Microsoft provides [strong customer commitments](https://azure.microsoft.com/global-infrastructure/data-residency/) regarding cloud services data residency and transfer policies:
- - **Data storage for regional
- - **Data storage for non-regional
-- **Sovereign cloud deployment:** Why doesn't Microsoft deploy a sovereign, physically isolated cloud instance in every country that requests it? **Answer:** Microsoft is actively pursuing sovereign cloud deployments where a business case can be made with governments across the world. However, physical isolation or "air gapping", as a strategy, is diametrically opposed to the strategy of hyperscale cloud. The value proposition of the cloud, rapid feature growth, resiliency, and cost-effective operation, break down when the cloud is fragmented and physically isolated. These strategic challenges compound with each extra sovereign cloud or fragmentation within a sovereign cloud. Whereas a sovereign cloud might prove to be the right solution for certain customers, it is not the only option available to worldwide public sector customers.
-- **Sovereign cloud customer options:** How can Microsoft support governments who need to operate cloud services completely in-country by local security-cleared personnel? What options does Microsoft have for cloud services operated entirely on-premises within customer owned datacenter where government employees exercise sole operational and data access control? **Answer:** Government customers can use [Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/) to deploy a private cloud on-premises managed by the customer's own security-cleared, in-country personnel. Customers can run many types of VM instances, App Services, Containers (including Cognitive Services containers), Functions, Azure Monitor, Key Vault, Event Hubs, and other services while using the same development tools, APIs, and management processes they use in Azure. With Azure Stack Hub, customers have sole control of their data, including storage, processing, transmission, and remote access.
+ - **Data storage for regional
+ - **Data storage for non-regional
+- **Air-gapped (sovereign) cloud deployment:** Why doesn't Microsoft deploy an air-gapped, sovereign, physically isolated cloud instance in every country? **Answer:** Microsoft is actively pursuing air-gapped cloud deployments where a business case can be made with governments across the world. However, physical isolation or "air gapping", as a strategy, is diametrically opposed to the strategy of hyperscale cloud. The value proposition of the cloud (rapid feature growth, resiliency, and cost-effective operation) is diminished when the cloud is fragmented and physically isolated. These strategic challenges compound with each extra air-gapped cloud or fragmentation within an air-gapped cloud. Whereas an air-gapped cloud might prove to be the right solution for certain customers, it is not the only available option.
+- **Air-gapped (sovereign) cloud customer options:** How can Microsoft support governments who need to operate cloud services completely in-country by local security-cleared personnel? What options does Microsoft have for cloud services operated entirely on-premises within customer owned datacenter where government employees exercise sole operational and data access control? **Answer:** You can use [Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/) to deploy a private cloud on-premises managed by your own security-cleared, in-country personnel. You can run many types of VM instances, App Services, Containers (including Cognitive Services containers), Functions, Azure Monitor, Key Vault, Event Hubs, and other services while using the same development tools, APIs, and management processes that you use in Azure. With Azure Stack Hub, you have sole control of your data, including storage, processing, transmission, and remote access.
- **Local jurisdiction:** Is Microsoft subject to local country jurisdiction based on the availability of Azure public cloud service? **Answer:** Yes, Microsoft must comply with all applicable local laws; however, government requests for customer data must also comply with applicable laws. A subpoena or its local equivalent is required to request non-content data. A warrant, court order, or its local equivalent is required for content data. Government requests for customer data follow a strict procedure according to [Microsoft practices for responding to government requests](https://blogs.microsoft.com/datalaw/our-practices/). Every year, Microsoft rejects many law enforcement requests for customer data. Challenges to government requests can take many forms. In many of these cases, Microsoft simply informs the requesting government that it is unable to disclose the requested information and explains the reason for rejecting the request. Where appropriate, Microsoft challenges requests in court. Our [Law Enforcement Request Report](https://www.microsoft.com/corporate-responsibility/law-enforcement-requests-report?rtc=1) and [US National Security Order Report](https://www.microsoft.com/corporate-responsibility/us-national-security-orders-report) are updated every six months and show that most of our customers are never impacted by government requests for data. For example, in the second half of 2019, Microsoft received 39 requests from law enforcement for accounts associated with enterprise cloud customers. Of those requests, only one warrant resulted in disclosure of customer content related to a non-US enterprise customer whose data was stored outside the United States.
-- **Autarky:** Can Microsoft cloud operations be separated from the Internet or the rest of Microsoft cloud and connected solely to local government network? Are operations possible without external connections to a third party? **Answer:** Yes, depending on the cloud deployment model.
- - **Public Cloud:** Azure regional datacenters can be connected to local government network through dedicated private connections such as ExpressRoute. Independent operation without any connectivity to a third party such as Microsoft is not possible in public cloud.
- - **Private Cloud:** With Azure Stack Hub, customers have full control over network connectivity and can operate Azure Stack Hub in [fully disconnected mode](/azure-stack/operator/azure-stack-disconnected-deployment).
+- **Autarky:** Can Microsoft cloud operations be separated from the rest of Microsoft cloud and connected solely to local government network? Are operations possible without external connections to a third party? **Answer:** Yes, depending on the cloud deployment model.
+ - **Public Cloud:** Azure regional datacenters can be connected to your local government network through dedicated private connections such as ExpressRoute. Independent operation without any connectivity to a third party such as Microsoft is not possible in the public cloud.
+ - **Private Cloud:** With Azure Stack Hub, you have full control over network connectivity and can operate Azure Stack Hub in [disconnected mode](/azure-stack/operator/azure-stack-disconnected-deployment).
- **Data flow restrictions:** What provisions exist for approval and documentation of all data exchange between customer and Microsoft for local, in-country deployed cloud services? **Answer:** Options vary based on the cloud deployment model.
- - **Private cloud:** For private cloud deployment using Azure Stack Hub, customers can control which data is exchanged with third parties. Azure Stack Hub telemetry can be turned off based on customer preference and Azure Stack Hub can be operated fully disconnected. Moreover, Azure Stack Hub offers the [capacity-based billing model](https://azure.microsoft.com/pricing/details/azure-stack/hub/) in which no billing or consumption data leaves the customer's premises.
- - **Public cloud:** In Azure public cloud, customers can use [Network Watcher](https://azure.microsoft.com/services/network-watcher/) to monitor network traffic associated with their workloads. For public cloud workloads, all billing data is generated through telemetry used exclusively for billing purposes and sent to Microsoft billing systems. Customers can [download and view](../cost-management-billing/manage/download-azure-invoice-daily-usage-date.md) their billing and usage data; however, they cannot prevent this information from being sent to Microsoft. Microsoft engineers [do not have default access](https://www.microsoft.com/trust-center/privacy/data-access) to customer data. For customer-initiated support requests, [Customer Lockbox](../security/fundamentals/customer-lockbox-overview.md) for Azure can be used to enable customers to approve/deny elevated requests for customer data access. Moreover, customers have control over data encryption at rest using customer-managed encryption keys.
-- **Patching and maintenance for private cloud:** How can Microsoft support patching and other maintenance for Azure Stack Hub private cloud deployment? **Answer:** Microsoft has a regular cadence in place for releasing [update packages for Azure Stack Hub](/azure-stack/operator/azure-stack-updates). Government customers are sole operators of Azure Stack Hub and they can download and install these update packages. An update alert for Microsoft software updates and hotfixes will appear in the Update blade for Azure Stack Hub instances that are connected to the Internet. If your instance isn't connected and you would like to be notified about each update release, subscribe to the RSS or ATOM feed, as explained in our online documentation.
+ - **Private cloud:** For private cloud deployment using Azure Stack Hub, you can control which data is exchanged with third parties. Azure Stack Hub telemetry can be turned off based on your preference and Azure Stack Hub can be operated disconnected. Moreover, Azure Stack Hub offers the [capacity-based billing model](https://azure.microsoft.com/pricing/details/azure-stack/hub/) in which no billing or consumption data leaves your on-premises infrastructure.
+ - **Public cloud:** In Azure public cloud, you can use [Network Watcher](https://azure.microsoft.com/services/network-watcher/) to monitor network traffic associated with your workloads. For public cloud workloads, all billing data is generated through telemetry used exclusively for billing purposes and sent to Microsoft billing systems. You can [download and view](../cost-management-billing/manage/download-azure-invoice-daily-usage-date.md) your billing and usage data; however, you cannot prevent this information from being sent to Microsoft.
+- **Patching and maintenance for private cloud:** How can Microsoft support patching and other maintenance for Azure Stack Hub private cloud deployment? **Answer:** Microsoft has a regular cadence in place for releasing [update packages for Azure Stack Hub](/azure-stack/operator/azure-stack-updates). You are the sole operator of Azure Stack Hub and you can download and install these update packages. An update alert for Microsoft software updates and hotfixes will appear in the Update blade for Azure Stack Hub instances that are connected to the Internet. If your instance isn't connected and you would like to be notified about each update release, subscribe to the RSS or ATOM feed, as explained in our online documentation.
### Safeguarding of customer data
-- **Microsoft network security:** What network controls and security does Microsoft use? Can customer requirements be considered? **Answer:** For insight into Azure infrastructure protection, customers should review Azure [network architecture](../security/fundamentals/infrastructure-network.md), Azure [production network](../security/fundamentals/production-network.md), and Azure [infrastructure monitoring](../security/fundamentals/infrastructure-monitoring.md). Customers deploying Azure applications should review Azure [network security overview](../security/fundamentals/network-overview.md) and [network security best practices](../security/fundamentals/network-best-practices.md). To provide feedback or requirements, contact your Microsoft account representative.
-- **Customer separation:** How does Microsoft logically or physically separate customers within its cloud environment? Is there an option for select customers to ensure complete physical separation? **Answer:** Azure uses [logical isolation](./azure-secure-isolation-guidance.md) to segregate each customer's applications and data. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously enforcing controls designed to keep customers from accessing one another's data or applications. There is also an option to enforce physical compute isolation via [Azure Dedicated Host](https://azure.microsoft.com/services/virtual-machines/dedicated-host/), which provides physical servers that can host one or more Azure VMs and are dedicated to one Azure subscription. Customers can provision dedicated hosts within a region, availability zone, and fault domain. They can then place VMs directly into provisioned hosts using whatever configuration best meets their needs. Dedicated Host provides hardware isolation at the physical server level, enabling customers to place their Azure VMs on an isolated and dedicated physical server that runs only their organization's workloads to meet corporate compliance requirements.
-- **Data encryption at rest and in transit:** Does Microsoft enforce data encryption by default? Does Microsoft support customer-managed encryption keys? **Answer:** Yes, many Azure services, including Azure Storage and Azure SQL Database, encrypt data by default and support customer-managed keys. Azure [Storage encryption for data at rest](../storage/common/storage-service-encryption.md) ensures that data is automatically encrypted before persisting it to Azure Storage and decrypted before retrieval. Customers can [use their own encryption keys](../storage/common/customer-managed-keys-configure-key-vault.md) for Azure Storage encryption at rest and manage their keys in Azure Key Vault. Storage encryption is enabled by default for all new and existing storage accounts and it cannot be disabled. When provisioning storage accounts, customers can enforce "[secure transfer required](../storage/common/storage-require-secure-transfer.md)" option, which allows access only from secure connections. This option is enabled by default when creating a storage account in the Azure portal.
Azure SQL Database enforces [data encryption in transit](../azure-sql/database/security-overview.md#information-protection-and-encryption) by default and provides [transparent data encryption](../azure-sql/database/transparent-data-encryption-tde-overview.md) (TDE) at rest [by default](https://azure.microsoft.com/updates/newly-created-azure-sql-databases-encrypted-by-default/) allowing customers to use Azure Key Vault and *[bring your own key](../azure-sql/database/transparent-data-encryption-byok-overview.md)* (BYOK) functionality to control key management tasks including key permissions, rotation, deletion, and so on.
-- **Data encryption during processing:** Can Microsoft protect customer data while it is being processed in memory? **Answer:** Yes, Microsoft [Azure confidential computing](https://azure.microsoft.com/solutions/confidential-compute/) is designed to address this scenario by performing computations in a hardware-based trusted execution environment (TEE, also known as enclave) based on [Intel Software Guard Extensions](https://software.intel.com/sgx) (SGX) technology. The hardware provides a protected container by securing a portion of the processor and memory. Only authorized code is permitted to run and to access data, so code and data are protected against viewing and modification from outside of TEE.
-- **FIPS 140-2 validation:** Does Microsoft offer FIPS 140-2 Level 3 validated hardware security modules (HSMs) in Azure? **Answer:** Yes, Azure Key Vault [Managed HSM](../key-vault/managed-hsm/overview.md) provides a fully managed, highly available, single-tenant HSM as a service that uses FIPS 140-2 Level 3 validated HSMs (certificate [#3718](https://csrc.nist.gov/projects/cryptographic-module-validation-program/Certificate/3718)). Each Managed HSM instance is bound to a separate security domain controlled by the customer and isolated cryptographically from instances belonging to other customers.
-- **Customer provided crypto:** Can customers bring their own cryptography or encryption hardware? **Answer:** Yes, customers can use their own HSMs deployed on-premises with their own crypto algorithms. However, if customers expect to use customer-managed keys for services integrated with [Azure Key Vault](https://azure.microsoft.com/services/key-vault/) (for example, Azure Storage, SQL Database, Disk encryption, and others), then they need to use hardware security modules (HSMs) and [cryptography supported by Azure Key Vault](../key-vault/keys/about-keys.md).
-- **Access to customer data by Microsoft personnel:** How does Microsoft restrict access to customer data by Microsoft engineers? **Answer:** Microsoft engineers [do not have default access](https://www.microsoft.com/trust-center/privacy/data-access) to customer data in the cloud. Instead, they are granted access, under management oversight, only when necessary using the [restricted access workflow](https://www.youtube.com/watch?v=lwjPGtGGe84&feature=youtu.be&t=25m). For customer-initiated support requests, [Customer Lockbox](../security/fundamentals/customer-lockbox-overview.md) for Azure provides customers with the capability to control how a Microsoft engineer accesses their data. As part of the support workflow, a Microsoft engineer may require elevated access to customer data. Customer Lockbox for Azure puts the customer in charge of that decision by enabling the customer to approve/deny such elevated requests.
+- **Microsoft network security:** What network controls and security does Microsoft use? Can my requirements be considered? **Answer:** For insight into Azure infrastructure protection, you should review Azure [network architecture](../security/fundamentals/infrastructure-network.md), Azure [production network](../security/fundamentals/production-network.md), and Azure [infrastructure monitoring](../security/fundamentals/infrastructure-monitoring.md). If you are deploying Azure applications, you should review Azure [network security overview](../security/fundamentals/network-overview.md) and [network security best practices](../security/fundamentals/network-best-practices.md). To provide feedback or requirements, contact your Microsoft account representative.
+- **Customer separation:** How does Microsoft logically or physically separate customers within its cloud environment? Is there an option for my organization to ensure complete physical separation? **Answer:** Azure uses [logical isolation](./azure-secure-isolation-guidance.md) to separate your applications and data from other customers. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously enforcing controls designed to keep your data and applications off limits to other customers. There is also an option to enforce physical compute isolation via [Azure Dedicated Host](https://azure.microsoft.com/services/virtual-machines/dedicated-host/), which provides physical servers that can host one or more Azure VMs and are dedicated to one Azure subscription. You can provision dedicated hosts within a region, availability zone, and fault domain. You can then place VMs directly into provisioned hosts using whatever configuration best meets your needs. Dedicated Host provides hardware isolation at the physical server level, enabling you to place your Azure VMs on an isolated and dedicated physical server that runs only your organization's workloads to meet corporate compliance requirements.
+- **Data encryption at rest and in transit:** Does Microsoft enforce data encryption by default? Does Microsoft support customer-managed encryption keys? **Answer:** Yes, many Azure services, including Azure Storage and Azure SQL Database, encrypt data by default and support customer-managed keys. Azure [Storage encryption for data at rest](../storage/common/storage-service-encryption.md) ensures that data is automatically encrypted before persisting it to Azure Storage and decrypted before retrieval. You can use [your own encryption keys](../storage/common/customer-managed-keys-configure-key-vault.md) for Azure Storage encryption at rest and manage your keys in Azure Key Vault. Storage encryption is enabled by default for all new and existing storage accounts and it cannot be disabled. When provisioning storage accounts, you can enforce the "[secure transfer required](../storage/common/storage-require-secure-transfer.md)" option, which allows access only from secure connections. This option is enabled by default when creating a storage account in the Azure portal, and a brief command-line example for enforcing it is shown after this list. Azure SQL Database enforces [data encryption in transit](../azure-sql/database/security-overview.md#information-protection-and-encryption) by default and provides [transparent data encryption](../azure-sql/database/transparent-data-encryption-tde-overview.md) (TDE) at rest [by default](https://azure.microsoft.com/updates/newly-created-azure-sql-databases-encrypted-by-default/) allowing you to use Azure Key Vault and *[bring your own key](../azure-sql/database/transparent-data-encryption-byok-overview.md)* (BYOK) functionality to control key management tasks including key permissions, rotation, deletion, and so on.
+- **Data encryption during processing:** Can Microsoft protect my data while it is being processed in memory? **Answer:** Yes, [Azure confidential computing](https://azure.microsoft.com/solutions/confidential-compute/) supports two different technologies for data encryption while in use. First, you can use VMs based on Intel Xeon processors with [Intel Software Guard Extensions](https://software.intel.com/sgx) (SGX) technology. With this approach, data is protected inside a hardware-based trusted execution environment (TEE, also known as enclave), which is created by securing a portion of the processor and memory. Only authorized code is permitted to run and to access data, so application code and data are protected against viewing and modification from outside of TEE. Second, you can use VMs based on AMD EPYC 3rd Generation CPUs for lift and shift scenarios without requiring any application code changes. These AMD EPYC CPUs make it possible to encrypt your entire virtual machine at runtime. The encryption keys used for VM encryption are generated and safeguarded by a dedicated secure processor on the EPYC CPU and cannot be extracted by any external means.
+- **FIPS 140 validation:** Does Microsoft offer FIPS 140 Level 3 validated hardware security modules (HSMs) in Azure? If so, can I store AES-256 symmetric encryption keys in these HSMs? **Answer:** Azure Key Vault [Managed HSM](../key-vault/managed-hsm/overview.md) provides a fully managed, highly available, single-tenant HSM as a service that uses FIPS 140 Level 3 validated HSMs (certificate [#3718](https://csrc.nist.gov/projects/cryptographic-module-validation-program/Certificate/3718)). Each Managed HSM instance is bound to a separate security domain controlled by you and isolated cryptographically from instances belonging to other customers. With Managed HSMs, support is available for AES 128-bit and 256-bit symmetric keys.
+- **Customer provided cryptography:** Can I use my own cryptography or encryption hardware? **Answer:** Yes, you can use your own HSMs deployed on-premises with your own crypto algorithms. However, if you expect to use customer-managed keys for services integrated with [Azure Key Vault](https://azure.microsoft.com/services/key-vault/) (for example, Azure Storage, SQL Database, Disk encryption, and others), then you must use hardware security modules (HSMs) and [cryptography supported by Azure Key Vault](../key-vault/keys/about-keys.md).
+- **Access to customer data by Microsoft personnel:** How does Microsoft restrict access to my data by Microsoft engineers? **Answer:** Microsoft engineers [do not have default access](https://www.microsoft.com/trust-center/privacy/data-access) to your data in the cloud. Instead, they can be granted access, under management oversight, only when necessary using a [restricted access workflow](https://www.youtube.com/watch?v=lwjPGtGGe84&feature=youtu.be&t=25m). Most customer support requests can be resolved without accessing your data as Microsoft engineers rely heavily on logs for troubleshooting and support. If a Microsoft engineer requires elevated access to your data as part of the support workflow, you can use [Customer Lockbox](../security/fundamentals/customer-lockbox-overview.md) for Azure to control how a Microsoft engineer accesses your data. Customer Lockbox for Azure puts you in charge of that decision by enabling you to approve/deny such elevated access requests. For more information on how Microsoft restricts insider access to your data, see [Restrictions on insider access](./documentation-government-plan-security.md#restrictions-on-insider-access).
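+As a brief illustration of the "secure transfer required" option mentioned above, the following Azure CLI sketch enforces HTTPS-only access on an existing storage account. The resource group and account names are placeholders, not values from this article.
+
+```bash
+# Hypothetical names; enforce the secure transfer required (HTTPS-only) setting.
+az storage account update \
+  --resource-group myResourceGroup \
+  --name mystorageaccount \
+  --https-only true
+
+# Verify the setting; enableHttpsTrafficOnly should report true.
+az storage account show \
+  --resource-group myResourceGroup \
+  --name mystorageaccount \
+  --query enableHttpsTrafficOnly
+```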
### Operations -- **Code review:** What can Microsoft do to help ensure that no malicious code has been inserted into the services that customers use? Can customers review Microsoft code deployments? **Answer:** Microsoft has full control over all source code that comprises Azure services. The procedure for patching guest VMs differs greatly from traditional on-premises patching where patch verification is necessary following installation. In Azure, patches are not applied to guest VMs; instead, the VM is simply restarted and when the VM boots, it is guaranteed to boot from a known good image that Microsoft controls. There is no way to insert malicious code into the image or interfere with the boot process. PaaS VMs offer more advanced protection against persistent malware infections than traditional physical server solutions, which if compromised by an attacker can be difficult to clean, even after the vulnerability is corrected. With PaaS VMs, reimaging is a routine part of operations, and it can help clean out intrusions that have not even been detected. This approach makes it more difficult for a compromise to persist. Customers cannot review Azure source code; however, online access to view source code is available for key products through the Microsoft [Government Security Program](https://www.microsoft.com/securityengineering/gsp) (GSP).
+- **Code review:** What can Microsoft do to prevent malicious code from being inserted into services that my organization uses? Can I review Microsoft code deployments? **Answer:** Microsoft has invested heavily in security assurance processes and practices to correctly develop logically isolated services and systems. For more information, see [Security assurance processes and practices](./azure-secure-isolation-guidance.md#security-assurance-processes-and-practices). For more information about Azure Hypervisor isolation, see [Defense-in-depth exploit mitigations](./azure-secure-isolation-guidance.md#defense-in-depth-exploit-mitigations). Microsoft has full control over all source code that comprises Azure services. For example, the procedure for patching guest VMs differs greatly from traditional on-premises patching where patch verification is necessary following installation. In Azure, patches are not applied to guest VMs; instead, the VM is simply restarted and when the VM boots, it is guaranteed to boot from a known good image that Microsoft controls. There is no way to insert malicious code into the image or interfere with the boot process. PaaS VMs offer more advanced protection against persistent malware infections than traditional physical server solutions, which if compromised by an attacker can be difficult to clean, even after the vulnerability is corrected. With PaaS VMs, reimaging is a routine part of operations, and it can help clean out intrusions that have not even been detected. This approach makes it more difficult for a compromise to persist. You cannot review Azure source code; however, online access to view source code is available for key products through the Microsoft [Government Security Program](https://www.microsoft.com/securityengineering/gsp) (GSP).
- **DevOps personnel (cleared nationals):** What controls or clearance levels does Microsoft have for the personnel that have DevOps access to cloud environments or physical access to data centers? **Answer:** Microsoft conducts [background screening](./documentation-government-plan-security.md#screening) on operations personnel with access to production systems and physical data center infrastructure. Microsoft cloud background check includes verification of education and employment history upon hire, and extra checks conducted every two years thereafter (where permissible by law), including criminal history check, OFAC list, BIS denied persons list, and DDTC debarred parties list.-- **Data center site options:** Is Microsoft willing to deploy a data center to a specific physical location to meet more advanced security requirements? **Answer:** Customers should inquire with their Microsoft account team regarding options for data center locations.-- **Service availability guarantee:** How do we ensure that Microsoft (or particular government or other entity) can't turn off our cloud services? **Answer:** Customers should review the Microsoft [Online Services Terms](http://www.microsoftvolumelicensing.com/Downloader.aspx?documenttype=OST&lang=English) (OST) and the OST [Data Protection Addendum](https://aka.ms/DPA) (DPA) for contractual commitments Microsoft makes regarding service availability and use of online services.-- **Non-traditional cloud service needs:** What is the recommended approach for managing scenarios where Azure services are required in periodically internet free/disconnected environments? **Answer:** In addition to [Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/) which is intended for on-premises deployment and disconnected scenarios, a ruggedized and field-deployable version called [Tactical Azure Stack Hub](https://www.delltechnologies.com/en-us/collaterals/unauth/data-sheets/products/converged-infrastructure/dell-emc-integrated-system-for-azure-stack-hub-tactical-spec-sheet.pdf) is also available to address tactical edge deployments for limited or no connectivity, fully mobile requirements, and harsh conditions requiring military specification solutions.
+- **Data center site options:** Is Microsoft willing to deploy a data center to a specific physical location to meet more advanced security requirements? **Answer:** You should inquire with your Microsoft account team regarding options for data center locations.
+- **Service availability guarantee:** How can my organization ensure that Microsoft (or a particular government or other entity) can't turn off our cloud services? **Answer:** You should review the Microsoft [Online Services Terms](https://www.microsoft.com/licensing/terms/productoffering) (OST) and the OST [Data Protection Addendum](https://aka.ms/DPA) (DPA) for contractual commitments Microsoft makes regarding service availability and use of online services.
+- **Non-traditional cloud service needs:** What options does Microsoft provide for periodically internet-free or disconnected environments? **Answer:** In addition to [Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/), which is intended for on-premises deployment and disconnected scenarios, a ruggedized and field-deployable version called [Tactical Azure Stack Hub](https://www.delltechnologies.com/en-us/collaterals/unauth/data-sheets/products/converged-infrastructure/dell-emc-integrated-system-for-azure-stack-hub-tactical-spec-sheet.pdf) is also available to address tactical edge deployments for limited or no connectivity, fully mobile requirements, and harsh conditions requiring military specification solutions.
### Transparency and audit -- **Audit documentation:** Does Microsoft make all audit documentation readily available to customers to download and examine? **Answer:** Yes, Microsoft makes all independent third-party audit reports and other related documentation available to customers under a non-disclosure agreement from the Azure portal. You will need an existing Azure subscription or [free trial subscription](https://azure.microsoft.com/free/) to access Azure Security Center [audit reports blade](https://ms.portal.azure.com/#blade/Microsoft_Azure_Security/AuditReportsBlade).-- **Process auditability:** Does Microsoft make its processes, data flow, and documentation available to customers or regulators for audit? **Answer:** Yes, Microsoft offers a Regulator Right to Examine, which is a program Microsoft implemented to provide regulators with direct right to examine Azure, including the ability to conduct an on-site examination, to meet with Microsoft personnel and Microsoft external auditors, and to access any related information, records, reports, and documents.-- **Service documentation:** Can Microsoft provide in-depth documentation covering service architecture, software and hardware components, and data protocols? **Answer:** Yes, Microsoft provides extensive and in-depth Azure online documentation covering all these topics. For example, customers can review documentation on Azure [products](../index.yml), [global infrastructure](https://azure.microsoft.com/global-infrastructure/), and [API reference](/rest/api/azure/).
+- **Audit documentation:** Does Microsoft make all audit documentation readily available to customers to download and examine? **Answer:** Yes, Microsoft makes independent third-party audit reports and other related documentation available for download under a non-disclosure agreement from the Azure portal. You will need an existing Azure subscription or [free trial subscription](https://azure.microsoft.com/free/) to access the Azure Security Center [audit reports blade](https://ms.portal.azure.com/#blade/Microsoft_Azure_Security/AuditReportsBlade). Additional compliance documentation is available from the Service Trust Portal (STP) [Audit Reports](https://servicetrust.microsoft.com/ViewPage/MSComplianceGuideV3) section. You must log in to access audit reports on the STP. For more information, see [Get started with the Microsoft Service Trust Portal](https://aka.ms/stphelp).
+- **Process auditability:** Does Microsoft make its processes, data flow, and documentation available to customers or regulators for audit? **Answer:** Microsoft offers a Regulator Right to Examine, a program Microsoft implemented to provide regulators with a direct right to examine Azure, including the ability to conduct an on-site examination, to meet with Microsoft personnel and Microsoft external auditors, and to access any related information, records, reports, and documents.
+- **Service documentation:** Can Microsoft provide in-depth documentation covering service architecture, software and hardware components, and data protocols? **Answer:** Yes, Microsoft provides extensive and in-depth Azure online documentation covering all these topics. For example, you can review documentation on Azure [products](../index.yml), [global infrastructure](https://azure.microsoft.com/global-infrastructure/), and [API reference](/rest/api/azure/).
## Next steps
azure-maps Drawing Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/drawing-requirements.md
Title: Drawing package requirements in Microsoft Azure Maps Creator
description: Learn about the Drawing package requirements to convert your facility design files to map data Previously updated : 5/27/2021 Last updated : 07/02/2021
The Drawing package must be zipped into a single archive file, with the .zip extension.
## DWG file conversion process
-The [Azure Maps Conversion service](/rest/api/maps/v2/conversion) performs the following on each DWG file:
+The [Azure Maps Conversion service](/rest/api/maps/v2/conversion) does the following on each DWG file:
- Extracts feature classes: - Levels
A single DWG file is required for each level of the facility. All data of a sing
Each DWG file must adhere to the following requirements: - The DWG file must define the _Exterior_ and _Unit_ layers. It can optionally define the following layers: _Wall_, _Door_, _UnitLabel_, _Zone_, and _ZoneLabel_.-- The DWG file cannot contain features from multiple levels.-- The DWG file cannot contain features from multiple facilities.
+- The DWG file can't contain features from multiple levels.
+- The DWG file can't contain features from multiple facilities.
- The DWG must reference the same measurement system and unit of measurement as other DWG files in the Drawing package. ## DWG layer requirements
Each DWG layer must adhere to the following rules:
- A layer must exclusively contain features of a single class. For example, units and walls can't be in the same layer. - A single class of features can be represented by multiple layers.-- Self-intersecting polygons are permitted, but are automatically repaired. When this occurs, the [Azure Maps Conversion service](/rest/api/maps/v2/conversion) raises a warning. It's advisable to manually inspect the repaired results, because they might not match the expected results.-- Each layer has a supported list of entity types. Any other entity types in a layer will be ignored. For example, text entities are not supported on the wall layer.
+- Self-intersecting polygons are permitted, but are automatically repaired. When they are repaired, the [Azure Maps Conversion service](/rest/api/maps/v2/conversion) raises a warning. It's advisable to manually inspect the repaired results, because they might not match the expected results.
+- Each layer has a supported list of entity types. Any other entity types in a layer will be ignored. For example, text entities aren't supported on the wall layer.
The table below outlines the supported entity types and converted map features for each layer. If a layer contains unsupported entity types, then the [Azure Maps Conversion service](/rest/api/maps/v2/conversion) ignores those entities. | Layer | Entity types | Converted Features | | :-- | :-| :- | [Exterior](#exterior-layer) | Polygon, PolyLine (closed), Circle, Ellipse (closed) | Levels
-| [Unit](#unit-layer) | Polygon, PolyLine (closed), Circle, Ellipse (closed) | Unit and Vertical penetrations
-| [Wall](#wall-layer) | Polygon, PolyLine (closed), Circle, Ellipse (closed) |
+| [Unit](#unit-layer) | Polygon, PolyLine (closed), Circle, Ellipse (closed) | Units and Vertical penetrations
+| [Wall](#wall-layer) | Polygon, PolyLine (closed), Circle, Ellipse (closed), Structures |
| [Door](#door-layer) | Polygon, PolyLine, Line, CircularArc, Circle | Openings
-| [Zone](#zone-layer) | Polygon, PolyLine (closed), Circle, Ellipse (closed) | Zone
+| [Zone](#zone-layer) | Polygon, PolyLine (closed), Circle, Ellipse (closed) | Zones
| [UnitLabel](#unitlabel-layer) | Text (single line) | Not applicable. This layer can only add properties to the unit features from the Units layer. For more information, see the [UnitLabel layer](#unitlabel-layer). | [ZoneLabel](#zonelabel-layer) | Text (single line) | Not applicable. This layer can only add properties to zone features from the ZonesLayer. For more information, see the [ZoneLabel layer](#zonelabel-layer).
No matter how many entity drawings are in the exterior layer, the [resulting fac
- Resulting level feature must be at least 4 square meters. - Resulting level feature must not be greater than 400,000 square meters.
-If the layer contains multiple overlapping PolyLines, the PolyLines are dissolved into a single Level feature. Alternatively, if the layer contains multiple non-overlapping PolyLines, the resulting Level feature has a multi-polygonal representation.
+If the layer contains multiple overlapping PolyLines, the PolyLines are dissolved into a single Level feature. Conversely, if the layer contains multiple non-overlapping PolyLines, the resulting Level feature has a multi-polygonal representation.
You can see an example of the Exterior layer as the outline layer in the [sample Drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples).
The next sections detail the requirements for each object.
| `name` | string | true | Name of building. | | `streetAddress`| string | false | Address of building. | |`unit` | string | false | Unit in building. |
-| `locality` | string | false | Name of an city, town, area, neighborhood, or region.|
+| `locality` | string | false | Name of a city, town, area, neighborhood, or region.|
| `adminDivisions` | JSON array of strings | false | An array containing address designations. For example: (Country, State) Use ISO 3166 country codes and ISO 3166-2 state/territory codes. | | `postalCode` | string | false | The mail sorting code. | | `hoursOfOperation` | string | false | Adheres to the [OSM Opening Hours](https://wiki.openstreetmap.org/wiki/Key:opening_hours/specification) format. |
The `unitProperties` object contains a JSON array of unit properties.
|`nameSubtitle`| string |false| Subtitle of the unit. | |`addressRoomNumber`| string| false| Room, unit, apartment, or suite number of the unit.| |`verticalPenetrationCategory`| string| false| When this property is defined, the resulting feature is a vertical penetration (VRT) rather than a unit. You can use vertical penetrations to go to other vertical penetration features in the levels above or below it. Vertical penetration is a [Category](https://aka.ms/pa-indoor-spacecategories) name. If this property is defined, the `categoryName` property is overridden with `verticalPenetrationCategory`. |
-|`verticalPenetrationDirection`| string| false |If `verticalPenetrationCategory` is defined, optionally define the valid direction of travel. The permitted values are: `lowToHigh`, `highToLow`, `both`, and `closed`. The default value is `both`.|
+|`verticalPenetrationDirection`| string| false |If `verticalPenetrationCategory` is defined, optionally define the valid direction of travel. The permitted values are: `lowToHigh`, `highToLow`, `both`, and `closed`. The default value is `both`. The value is case-sensitive.|
| `nonPublic` | bool | false | Indicates if the unit is open to the public. | | `isRoutable` | bool | false | When this property is set to `false`, you can't go to or through the unit. The default value is `true`. | | `isOpenArea` | bool | false | Allows the navigating agent to enter the unit without the need for an opening attached to the unit. By default, this value is set to `true` for units with no openings, and `false` for units with openings. Manually setting `isOpenArea` to `false` on a unit with no openings results in a warning, because the resulting unit won't be reachable by a navigating agent.|
The `zoneProperties` object contains a JSON array of zone properties.
| Property | Type | Required | Description | |--||-|-| |zoneName |string |true |Name of zone to associate with `zoneProperty` record. This record is only valid when a label matching `zoneName` is found in the `zoneLabel` layer of the zone. |
-|categoryName| string| false |Purpose of the unit. A list of values that the provided rendering styles can make use of is available [here](https://atlas.microsoft.com/sdk/javascript/indoor/0.1/categories.json).|
+|categoryName| string| false |Purpose of the zone. A list of values that the provided rendering styles can make use of is available [here](https://atlas.microsoft.com/sdk/javascript/indoor/0.1/categories.json).|
|zoneNameAlt| string| false |Alternate name of the zone. | |zoneNameSubtitle| string | false |Subtitle of the zone. | |zoneSetId| string | false | Set ID to establish a relationship among multiple zones so that they can be queried or selected as a group. For example, zones that span multiple levels. |
azure-maps How To Render Custom Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-render-custom-data.md
Title: Render custom data on a raster map in Microsoft Azure Maps
description: Learn how to add pushpins, labels, and geometric shapes to a raster map. See how to use the static image service in Azure Maps for this purpose. Previously updated : 06/22/2021 Last updated : 07/02/2021
To check the status of the data upload and retrieve its unique ID (`udid`):
https://us.atlas.microsoft.com/mapData/operations/{statusUrl}?api-version=2.0&subscription-key={subscription-key} ```
-6. Using Postman, make a GET request with the above URL. In the response header retrieve the operations metadata URL from the `Resource-Location` property. This URI will be of the following format.
-
- ```HTTP
- https://us.atlas.microsoft.com/mapData/metadata/{uid}?api-version=2.0
- ```
-
-7. Copy the operations metadata URI and append the subscription-key parameter to it with the value of your Azure Maps account subscription key. Use the same account subscription key that you used to upload the data. The status URI format should look like the one below:
-
- ```HTTP
- https://us.atlas.microsoft.com/mapData/metadata/{uid}?api-version=2.0?api-version=1.0&subscription-key={Subscription-key}
- ```
-
-8. To get the udId, open a new tab in the Postman app. Select GET HTTP method on the builder tab. Make a GET request at the status URI. If your data upload was successful, you'll receive a udId in the response body. Copy the udId.
-
-9. Select **Send**.
+6. Select **Send**.
-10. In the response window, select the **Headers** tab.
+7. In the response window, select the **Headers** tab.
-11. Copy the value of the **Resource-Location** key, which is the `resource location URL`. The `resource location URL` contains the unique identifier (`udid`) of the drawing package resource.
+8. Copy the value of the **Resource-Location** key, which is the `resource location URL`. The `resource location URL` contains the unique identifier (`udid`) of the drawing package resource. A curl-based sketch of the same status check is shown after the screenshot below.
:::image type="content" source="./media/how-to-render-custom-data/resource-location-url.png" alt-text="Copy the resource location URL.":::
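As a command-line alternative to the Postman steps above, the same status request can be made with curl. The placeholders below are assumptions you replace with the operation URL returned by your upload call and your own Azure Maps subscription key.

```bash
# Placeholders: substitute your own values.
SUBSCRIPTION_KEY="<subscription-key>"
STATUS_URL="https://us.atlas.microsoft.com/mapData/operations/<operationId>"

# -i prints the response headers so the Resource-Location value (which contains the udid) is visible.
curl -i "${STATUS_URL}?api-version=2.0&subscription-key=${SUBSCRIPTION_KEY}"
```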
azure-monitor Azure Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/azure-web-apps.md
Enabling monitoring on your ASP.NET, ASP.NET Core, Java, and Node.js based web applications running on [Azure App Services](../../app-service/index.yml) is now easier than ever. Whereas previously you needed to manually instrument your app, the latest extension/agent is now built into the App Service image by default. This article will walk you through enabling Azure Monitor Application Insights monitoring as well as provide preliminary guidance for automating the process for large-scale deployments. > [!NOTE]
-> For .Net on Windows only: manually adding an Application Insights site extension via **Development Tools** > **Extensions** is deprecated. This method of extension installation was dependent on manual updates for each new version. The latest stable release of the extension is now [preinstalled](https://github.com/projectkudu/kudu/wiki/Azure-Site-Extensions) as part of the App Service image. The files are located in `d:\Program Files (x86)\SiteExtensions\ApplicationInsightsAgent` and are automatically updated with each stable release. If you follow the agent based instructions to enable monitoring below, it will automatically remove the deprecated extension for you.
+> For .NET on Windows only: manually adding an Application Insights site extension via **Development Tools** > **Extensions** is deprecated. This method of extension installation was dependent on manual updates for each new version. The latest stable release of the extension is now [preinstalled](https://github.com/projectkudu/kudu/wiki/Azure-Site-Extensions) as part of the App Service image. The files are located in `d:\Program Files (x86)\SiteExtensions\ApplicationInsightsAgent` and are automatically updated with each stable release. If you follow the agent-based instructions to enable monitoring below, it will automatically remove the deprecated extension for you.
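+For large-scale or automated onboarding, agent-based monitoring is typically driven by App Service application settings rather than portal clicks. The following Azure CLI sketch assumes a Windows web app and placeholder resource names; adjust the extension version and connection string for your environment.
+
+```bash
+# Placeholder names; APPLICATIONINSIGHTS_CONNECTION_STRING comes from your Application Insights resource.
+az webapp config appsettings set \
+  --resource-group myResourceGroup \
+  --name mywebapp \
+  --settings \
+    APPLICATIONINSIGHTS_CONNECTION_STRING="<connection-string>" \
+    ApplicationInsightsAgent_EXTENSION_VERSION="~2"   # "~2" for Windows plans; "~3" is commonly used for Linux
+```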
## Enable Application Insights
Targeting the full framework from ASP.NET Core, self-contained deployment, and L
# [Node.js](#tab/nodejs)
-You can monitor your Node.js apps running in Azure App Service without any code change, just with a couple of simple steps. Application insights for Node.js applications is integrated with App Service on Linux - both code-based and custom containers, and with App Service on Windows for code-based apps. The integration is in public preview. The integration adds Node.js SDK, which is in GA.
+You can monitor your Node.js apps running in Azure App Service without any code change, just with a couple of simple steps. Application Insights for Node.js applications is integrated with App Service on Linux - both code-based and custom containers, and with App Service on Windows for code-based apps.
1. **Select Application Insights** in the Azure control panel for your app service.
You can monitor your Node.js apps running in Azure App Service without any code
# [Java](#tab/java)
-You can turn on monitoring for your Java apps running in Azure App Service just with one click, no code change required. Application Insights for Java is integrated with App Service on Linux - both code-based and custom containers, and with App Service on Windows - code-based apps. The integration is in public preview. It is important to know how your application will be monitored. The integration adds [Application Insights Java 3.0](./java-in-process-agent.md), which is in GA. You will get all the telemetry that it auto-collects.
+You can turn on monitoring for your Java apps running in Azure App Service just with one click, no code change required. Application Insights for Java is integrated with App Service on Linux - both code-based and custom containers, and with App Service on Windows - code-based apps. It is important to know how your application will be monitored. The integration adds [Application Insights Java 3.x](./java-in-process-agent.md) and you will get all the telemetry that it auto-collects.
1. **Select Application Insights** in the Azure control panel for your app service.
You can turn on monitoring for your Java apps running in Azure App Service just
![Instrument your web app.](./media/azure-web-apps/create-resource-01.png)
-2. After specifying which resource to use, you can configure the Java agent. The full [set of configurations](./java-standalone-config.md) is available, you just need to paste a valid json file without specifying the connection string. You have already picked an application insights resource to connect to, remember?
+2. After specifying which resource to use, you can configure the Java agent. The full [set of configurations](./java-standalone-config.md) is available; you just need to paste a valid JSON file. Exclude the connection string and any configurations that are in preview - you will be able to add those as they become generally available.
> [!div class="mx-imgBorder"] > ![Choose options per platform.](./media/azure-web-apps/create-app-service-ai.png)
For the latest updates and bug fixes [consult the release notes](./web-app-exten
* [Monitor service health metrics](../data-platform.md) to make sure your service is available and responsive. * [Receive alert notifications](../alerts/alerts-overview.md) whenever operational events happen or metrics cross a threshold. * Use [Application Insights for JavaScript apps and web pages](javascript.md) to get client telemetry from the browsers that visit a web page.
-* [Set up Availability web tests](monitor-web-app-availability.md) to be alerted if your site is down.
+* [Set up Availability web tests](monitor-web-app-availability.md) to be alerted if your site is down.
azure-monitor Get Started Queries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/get-started-queries.md
SecurityEvent
The query shown above returns 10 results from the *SecurityEvent* table, in no specific order. This is a very common way to take a glance at a table and understand its structure and content. Let's examine how it's built: * The query starts with the table name *SecurityEvent* - this part defines the scope of the query.
-* The pipe (|) character separates commands, so the output of the first one in the input of the following command. You can add any number of piped elements.
+* The pipe (|) character separates commands, so the output of the first one is the input of the following command. You can add any number of piped elements.
* Following the pipe is the **take** command, which returns a specific number of arbitrary records from the table. We could actually run the query even without adding `| take 10` - that would still be valid, but it could return up to 10,000 results.
SecurityEvent
| top 10 by TimeGenerated ```
-Descending is the default sorting order, so we typically omit the **desc** argument.The output will look like this:
+Descending is the default sorting order, so we typically omit the **desc** argument. The output will look like this:
![Top 10](media/get-started-queries/top10.png)
To make the output clearer, you select to display it as a time-chart, showing th
- Learn more about using string data in a log query with [Work with strings in Azure Monitor log queries](/azure/data-explorer/kusto/query/samples?&pivots=azuremonitor#string-operations). - Learn more about aggregating data in a log query with [Advanced aggregations in Azure Monitor log queries](/azure/data-explorer/write-queries#advanced-aggregations). - Learn how to join data from multiple tables with [Joins in Azure Monitor log queries](/azure/data-explorer/kusto/query/samples?&pivots=azuremonitor#joins).-- Get documentation on the entire Kusto query language in the [KQL language reference](/azure/kusto/query/).
+- Get documentation on the entire Kusto query language in the [KQL language reference](/azure/kusto/query/).
azure-netapp-files Azacsnap Cmd Ref Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azacsnap-cmd-ref-backup.md
the boot volumes.
azacsnap -c backup --volume other --prefix boot_TEST --retention 9 --configfile bootVol.json ```
+> [!IMPORTANT]
+> For Azure Large Instance, the configuration file volume parameter for the boot volume might not be visible at the host operating system level.
+> This value can be provided by Microsoft Operations.
+ The command does not write output to the console; it writes only to a log file. It does _not_ write to a result file or `/var/log/messages`. The *log file* name in this example is `azacsnap-backup-bootVol.log`.
-> [!NOTE]
-> The log file name is made up of the "(command name-(the `-c` option)-(the config filename)". For example, if using the `-c backup` option with a log file name of `h80.json`, then the log file will be called `azacsnap-backup-h80.log`. Or if using the `-c test` option with the same configuration file then the log file will be called `azacsnap-test-h80.log`.
+## Log files
-- HANA Large Instance Type: There are two valid values with `TYPEI` or `TYPEII` dependent on
- the HANA Large Instance Unit.
-- See [Available SKUs for HANA Large Instances](../virtual-machines/workloads/sap/hana-available-skus.md) to confirm the available SKUs.
+The log file name is constructed as follows: "(command name)-(the `-c` option)-(the config filename)". For example, if running the command `azacsnap -c backup --configfile h80.json --retention 5 --prefix one-off`, then the log file will be called `azacsnap-backup-h80.log`. Or, if using the `-c test` option with the same configuration file (for example, `azacsnap -c test --configfile h80.json`), then the log file will be called `azacsnap-test-h80.log`.
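+Restating that naming rule as a short illustration (the commands and file names come directly from the examples above):
+
+```bash
+# azacsnap -c <command> --configfile <file>.json  ->  azacsnap-<command>-<file>.log
+azacsnap -c backup --configfile h80.json --retention 5 --prefix one-off   # writes azacsnap-backup-h80.log
+azacsnap -c test --configfile h80.json                                    # writes azacsnap-test-h80.log
+```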
## Next steps
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
na ms.devlang: na Previously updated : 06/30/2021 Last updated : 07/01/2021 # Solution architectures using Azure NetApp Files
This section provides references for solutions for Linux OSS applications and da
### Oracle
-* [Oracle on Azure deployment best practice guide using Azure NetApp Files](https://www.netapp.com/us/media/tr-4780.pdf)
+* [Oracle Databases on Microsoft Azure Using Azure NetApp Files](https://www.netapp.com/media/17105-tr4780.pdf)
* [Oracle VM images and their deployment on Microsoft Azure: Shared storage configuration options](../virtual-machines/workloads/oracle/oracle-vm-solutions.md#shared-storage-configuration-options) * [Oracle database performance on Azure NetApp Files single volumes](performance-oracle-single-volumes.md) * [Benefits of using Azure NetApp Files with Oracle Database](solutions-benefits-azure-netapp-files-oracle-database.md)
azure-netapp-files Performance Linux Concurrency Session Slots https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/performance-linux-concurrency-session-slots.md
The following example shows Packet 14 (server maximum requests):
## Next steps
+* [Linux direct I/O best practices for Azure NetApp Files](performance-linux-direct-io.md)
+* [Linux filesystem cache best practices for Azure NetApp Files](performance-linux-filesystem-cache.md)
* [Linux NFS mount options best practices for Azure NetApp Files](performance-linux-mount-options.md)
+* [Linux NFS read-ahead best practices](performance-linux-nfs-read-ahead.md)
+* [Azure virtual machine SKUs best practices](performance-virtual-machine-sku.md)
* [Performance benchmarks for Linux](performance-benchmarks-linux.md)
azure-netapp-files Performance Linux Direct Io https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/performance-linux-direct-io.md
+
+ Title: Linux direct I/O best practices for Azure NetApp Files | Microsoft Docs
+description: Describes Linux direct I/O and the best practices to follow for Azure NetApp Files.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ms.devlang: na
+ Last updated : 07/02/2021++
+# Linux direct I/O best practices for Azure NetApp Files
+
+This article helps you understand direct I/O best practices for Azure NetApp Files.
+
+## Direct I/O
+
+The most common parameter used in storage performance benchmarking is direct I/O. It is supported by FIO and Vdbench, while DISKSPD offers support for the similar construct of memory-mapped I/O. With direct I/O, the filesystem cache is bypassed, direct memory access (DMA) copy operations are avoided, and storage tests are made fast and simple.
+
+Using the direct I/O parameter makes storage testing easy. No data is read from the filesystem cache on the client. As such, the test is truly stressing the storage protocol and service itself, rather than memory access speeds. Also, without the DMA memory copies, read and write operations are efficient from a processing perspective.
+
+Take the Linux `dd` command as an example workload. Without the optional `odirect` flag, all I/O generated by `dd` is served from the Linux buffer cache. Reads with the blocks already in memory are not retrieved from storage. Reads resulting in a buffer cache miss end up being read from storage using NFS read-ahead, with varying results depending on factors such as mount `rsize` and client read-ahead tunables. When writes are sent through the buffer cache, they use a write-behind mechanism, which is untuned and uses a significant amount of parallelism to send the data to the storage device. You might attempt to run two independent streams of I/O, one `dd` for reads and one `dd` for writes. But in fact, the untuned operating system favors writes over reads and applies more parallelism to them.
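+
+The contrast can be illustrated with a pair of `dd` invocations. This is only a sketch: the mount path, block size, and count below are placeholders, and `oflag=direct`/`iflag=direct` are the `dd` options that request direct I/O.
+
+```bash
+# Buffered write: data lands in the Linux buffer cache and is written back asynchronously.
+dd if=/dev/zero of=/mnt/anf/testfile bs=1M count=1024
+
+# Direct write: bypasses the buffer cache, so every I/O is sent to storage.
+dd if=/dev/zero of=/mnt/anf/testfile bs=1M count=1024 oflag=direct
+
+# Direct read: avoids serving reads from the client cache.
+dd if=/mnt/anf/testfile of=/dev/null bs=1M iflag=direct
+```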
+
+Aside from databases, few applications use direct I/O. Instead, they choose to leverage the advantages of a large memory cache for repeated reads and a write-behind cache for asynchronous writes. In short, using direct I/O turns the test into a micro benchmark *if* the application being synthesized uses the filesystem cache.
+
+The following are some databases that support direct I/O:
+
+* Oracle
+* SAP HANA
+* MySQL (InnoDB storage engine)
+* RocksDB
+* PostgreSQL
+* Teradata
+
+## Best practices
+
+Testing with `directio` is an excellent way to understand the limits of the storage service and client. To get a better understanding of how the application itself will behave (if the application doesn't use `directio`), you should also run tests through the filesystem cache.
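+
+As a sketch of that comparison, the same FIO job can be run once through the filesystem cache and once bypassing it. The mount point, sizes, and runtimes below are assumptions to adapt to your environment.
+
+```bash
+# Sequential read through the filesystem cache (direct=0), then bypassing it (direct=1).
+fio --name=cached-read --directory=/mnt/anf --rw=read --bs=256k --size=8G \
+    --ioengine=libaio --iodepth=16 --direct=0 --runtime=60 --time_based
+
+fio --name=direct-read --directory=/mnt/anf --rw=read --bs=256k --size=8G \
+    --ioengine=libaio --iodepth=16 --direct=1 --runtime=60 --time_based
+```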
+
+## Next steps
+
+* [Linux filesystem cache best practices for Azure NetApp Files](performance-linux-filesystem-cache.md)
+* [Linux NFS mount options best practices for Azure NetApp Files](performance-linux-mount-options.md)
+* [Linux concurrency best practices for Azure NetApp Files](performance-linux-concurrency-session-slots.md)
+* [Linux NFS read-ahead best practices](performance-linux-nfs-read-ahead.md)
+* [Azure virtual machine SKUs best practices](performance-virtual-machine-sku.md)
+* [Performance benchmarks for Linux](performance-benchmarks-linux.md)
azure-netapp-files Performance Linux Filesystem Cache https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/performance-linux-filesystem-cache.md
+
+ Title: Linux filesystem cache best practices for Azure NetApp Files | Microsoft Docs
+description: Describes Linux filesystem cache best practices to follow for Azure NetApp Files.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ms.devlang: na
+ Last updated : 07/02/2021++
+# Linux filesystem cache best practices for Azure NetApp Files
+
+This article helps you understand filesystem cache best practices for Azure NetApp Files.
+
+## Filesystem cache tunables
+
+You need to understand the following factors about filesystem cache tunables:
+
+* Flushing a dirty buffer leaves the data in a clean state usable for future reads until memory pressure leads to eviction.
+* There are three triggers for an asynchronous flush operation:
+ * Time based: When a buffer reaches the age defined by this tunable, it must be marked for cleaning (that is, flushing, or writing to storage).
+ * Memory pressure: See [`vm.dirty_ratio | vm.dirty_bytes`](#vmdirty_ratio--vmdirty_bytes) for details.
+ * Close: When a file handle is closed, all dirty buffers are asynchronously flushed to storage.
+
+These factors are controlled by four tunables. Each tunable can be adjusted dynamically with `sysctl` and persistently through `tuned` or an entry in the `/etc/sysctl.conf` file. Tuning these variables improves performance for applications.
+
+> [!NOTE]
+> Information discussed in this article was uncovered during SAS GRID and SAS Viya validation exercises. As such, the tunables are based on lessons learned from the validation exercises. Many applications will similarly benefit from tuning these parameters.
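+
+A minimal sketch of the dynamic and persistent mechanisms follows; the value shown is only an illustration, not a recommendation.
+
+```bash
+# Dynamic: takes effect immediately, lost at reboot.
+sudo sysctl -w vm.dirty_expire_centisecs=300
+
+# Persistent: add the setting to /etc/sysctl.conf, then reload.
+echo "vm.dirty_expire_centisecs = 300" | sudo tee -a /etc/sysctl.conf
+sudo sysctl -p
+```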
+
+### `vm.dirty_ratio | vm.dirty_bytes`
+
+These two tunables define the amount of RAM made usable for data modified but not yet written to stable storage. Whichever tunable is set automatically sets the other tunable to zero; Red Hat advises against manually setting either of the two tunables to zero. The option `vm.dirty_ratio` (the default of the two) is set by Red Hat to either 20% or 30% of physical memory depending on the OS, which is a significant amount considering the memory footprint of modern systems. Consideration should be given to setting `vm.dirty_bytes` instead of `vm.dirty_ratio` for a more consistent experience regardless of memory size. For example, ongoing work with SAS GRID determined 30 MiB to be an appropriate setting for best overall mixed workload performance.
+
+### `vm.dirty_background_ratio | vm.dirty_background_bytes`
+
+These tunables define the starting point where the Linux write-back mechanism begins flushing dirty blocks to stable storage. Red Hat defaults to 10% of physical memory, which, on a large memory system, is a significant amount of data to start flushing. Taking SAS GRID for example, historically the recommendation has been to set `vm.dirty_background` to 1/5 the size of `vm.dirty_ratio` or `vm.dirty_bytes`. Considering how aggressively the `vm.dirty_bytes` setting is set for SAS GRID, no specific value is being set here.
+
+### `vm.dirty_expire_centisecs`
+
+This tunable defines how old a dirty buffer can be before it must be tagged for asynchronous write-out. Take SAS Viya's CAS workload for example. For this ephemeral, write-dominant workload, setting this value to 300 centiseconds (3 seconds) proved optimal, with 3000 centiseconds (30 seconds) being the default.
+
+SAS Viya shards CAS data into multiple small chunks of a few megabytes each. Rather than closing these file handles after writing data to each shard, the handles are left open and the buffers within are memory-mapped by the application. Without a close, there will be no flush until either memory pressure or 30 seconds has passed. Waiting for memory pressure proved suboptimal, as did waiting for a long timer to expire. Unlike SAS GRID, which looked for the best overall throughput, SAS Viya looked to optimize write bandwidth.
+
+### `vm.dirty_writeback_centisecs`
+
+The kernel flusher thread is responsible for asynchronously flushing dirty buffers; between flushes, the flusher thread sleeps. This tunable defines the amount of time spent sleeping between flushes. Considering the 3-second `vm.dirty_expire_centisecs` value used by SAS Viya, SAS set this tunable to 100 centiseconds (1 second) rather than the 500 centiseconds (5 seconds) default to find the best overall performance.
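+
+Pulling the SAS-derived values from the preceding sections together, a persistent configuration might look like the following sketch. Treat the numbers as starting points observed with those specific workloads rather than universal recommendations, and derive `vm.dirty_background_bytes` (omitted here) from your own testing.
+
+```bash
+# Example /etc/sysctl.conf entries; apply with: sudo sysctl -p
+vm.dirty_bytes = 31457280              # 30 MiB, the SAS GRID mixed-workload value noted above
+vm.dirty_expire_centisecs = 300        # 3 seconds, from the SAS Viya CAS example
+vm.dirty_writeback_centisecs = 100     # 1-second flusher wake-up, from the SAS Viya example
+```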
+
+## Impact of an untuned filesystem cache
+
+Considering the default virtual memory tunables and the amount of RAM in modern systems, write-back potentially slows down other storage-bound operations from the perspective of the specific client driving this mixed workload. The following symptoms may be expected from an untuned, write-heavy, cache-laden Linux machine.
+
+* Directory listings (`ls`) take long enough to appear hung.
+* Read throughput against the filesystem decreases significantly in comparison to write throughput.
+* `nfsiostat` reports write latencies **in seconds or higher**.
+
+You might experience this behavior only on *the Linux machine* performing the mixed write-heavy workload. Further, the experience is degraded against all NFS volumes mounted against a single storage endpoint. If the mounts come from two or more endpoints, only the volumes sharing an endpoint exhibit this behavior.
+
+Setting the filesystem cache parameters as described in this section has been shown to address the issues.
+
+## Monitoring virtual memory
+
+To understand what is going on with virtual memory and write-back, consider the following code snippet and output. *Dirty* represents the amount of dirty memory in the system, and *writeback* represents the amount of memory actively being written to storage.
+
+`# while true; do echo "###" ;date ; egrep "^Cached:|^Dirty:|^Writeback:|file" /proc/meminfo; sleep 5; done`
+
+The following output comes from an experiment where `vm.dirty_ratio` and `vm.dirty_background_ratio` were set to 2% and 1% of physical memory, respectively. In this case, flushing began at 3.8 GiB, 1% of the 384-GiB memory system. Writeback closely resembled the write throughput to NFS.
+
+```
+###
+Dirty: 1174836 kB
+Writeback: 4 kB
+###
+Dirty: 3319540 kB
+Writeback: 4 kB
+###
+Dirty: 3902916 kB <-- Writes to stable storage begins here
+Writeback: 72232 kB
+###
+Dirty: 3131480 kB
+Writeback: 1298772 kB
+```
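+
+For reference, the ratios used in this experiment can be set dynamically as follows (2% and 1% of physical memory, per the description above):
+
+```bash
+sudo sysctl -w vm.dirty_ratio=2
+sudo sysctl -w vm.dirty_background_ratio=1
+```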
+
+## Next steps
+
+* [Linux direct I/O best practices for Azure NetApp Files](performance-linux-direct-io.md)
+* [Linux NFS mount options best practices for Azure NetApp Files](performance-linux-mount-options.md)
+* [Linux concurrency best practices for Azure NetApp Files](performance-linux-concurrency-session-slots.md)
+* [Linux NFS read-ahead best practices](performance-linux-nfs-read-ahead.md)
+* [Azure virtual machine SKUs best practices](performance-virtual-machine-sku.md)
+* [Performance benchmarks for Linux](performance-benchmarks-linux.md)
azure-netapp-files Performance Linux Mount Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/performance-linux-mount-options.md
When preparing a multi-node SAS GRID environment for production, you might notic
| No `nconnect` | 8 hours | | `nconnect=8` | 5.5 hours |
-Both sets of tests used the same E32-8_v4 virtual machine and RHEL8.3, with readahead set to 15 MiB.
+Both sets of tests used the same E32-8_v4 virtual machine and RHEL8.3, with read-ahead set to 15 MiB.
When you use `nconnect`, keep the following rules in mind:
sudo vi /etc/fstab
10.23.1.4:/HN1-shared/shared /hana/shared nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0 ```
-Also for example, SAS Viya recommends a 256-KiB read and write sizes, and [SAS GRID](https://communities.sas.com/t5/Administration-and-Deployment/Azure-NetApp-Files-A-shared-file-system-to-use-with-SAS-Grid-on/m-p/606973/highlight/true#M17740) limits the `r/wsize` to 64 KiB while augmenting read performance with increased readahead for the NFS mounts. <!-- For more information on readahead, see the article ΓÇ£NFS ReadaheadΓÇ¥. -->
+As another example, SAS Viya recommends 256-KiB read and write sizes, and [SAS GRID](https://communities.sas.com/t5/Administration-and-Deployment/Azure-NetApp-Files-A-shared-file-system-to-use-with-SAS-Grid-on/m-p/606973/highlight/true#M17740) limits the `r/wsize` to 64 KiB while augmenting read performance with increased read-ahead for the NFS mounts. See [NFS read-ahead best practices for Azure NetApp Files](performance-linux-nfs-read-ahead.md) for details.
The following considerations apply to the use of `rsize` and `wsize`:
When no close-to-open consistency (`nocto`) is used, the client will trust the f
## Next steps
+* [Linux direct I/O best practices for Azure NetApp Files](performance-linux-direct-io.md)
+* [Linux filesystem cache best practices for Azure NetApp Files](performance-linux-filesystem-cache.md)
* [Linux concurrency best practices for Azure NetApp Files](performance-linux-concurrency-session-slots.md)
+* [Linux NFS read-ahead best practices](performance-linux-nfs-read-ahead.md)
+* [Azure virtual machine SKUs best practices](performance-virtual-machine-sku.md)
* [Performance benchmarks for Linux](performance-benchmarks-linux.md)
azure-netapp-files Performance Linux Nfs Read Ahead https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/performance-linux-nfs-read-ahead.md
+
+ Title: Linux NFS read-ahead best practices for Azure NetApp Files | Microsoft Docs
+description: Describes filesystem cache and Linux NFS read-ahead best practices for Azure NetApp Files.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ms.devlang: na
+ Last updated : 07/02/2021++
+# Linux NFS read-ahead best practices for Azure NetApp Files
+
+This article helps you understand filesystem cache best practices for Azure NetApp Files.
+
+NFS read-ahead predictively requests blocks from a file in advance of I/O requests by the application. It is designed to improve client sequential read throughput. Until recently, all modern Linux distributions set the read-ahead value to be equivalent to 15 times the mounted filesystem's `rsize`.
+
+The following table shows the default read-ahead values for each given `rsize` mount option.
+
+| Mounted filesystem `rsize` | Blocks read-ahead |
+|-|-|
+| 64 KiB | 960 KiB |
+| 256 KiB | 3,840 KiB |
+| 1024 KiB | 15,360 KiB |
+
+RHEL 8.3 and Ubuntu 18.04 introduced changes that might negatively impact client sequential read performance. Unlike earlier releases, these distributions set read-ahead to a default of 128 KiB regardless of the `rsize` mount option used. Workloads upgraded from releases with the larger read-ahead value to releases with the 128-KiB default have experienced decreases in sequential read performance. However, read-ahead values may be tuned upward both dynamically and persistently. For example, testing with SAS GRID found the 15,360-KiB read-ahead value optimal compared to 3,840 KiB, 960 KiB, and 128 KiB. Not enough tests have been run beyond 15,360 KiB to determine positive or negative impact.
+
+The following table shows the default read-ahead values for each currently available distribution.
+
+| Distribution | Release | Blocks read-ahead |
+|-|-|-|
+| RHEL | 8.3 | 128 KiB |
+| RHEL | 7.X, 8.0, 8.1, 8.2 | 15 X `rsize` |
+| SLES | 12.X – at least 15SP2 | 15 X `rsize` |
+| Ubuntu | 18.04 – at least 20.04 | 128 KiB |
+| Ubuntu | 16.04 | 15 X `rsize` |
+| Debian | Up to at least 10 | 15 x `rsize` |
++
+## How to work with per-NFS filesystem read-ahead
+
+NFS read-ahead is defined at the mount point for an NFS filesystem. The default setting can be viewed and set both dynamically and persistently. For convenience, the following bash script written by Red Hat has been provided for viewing or dynamically setting read-ahead for a mounted NFS filesystem.
+
+Read-ahead can be defined either dynamically per NFS mount using the following script or persistently using `udev` rules as shown in this section. To display or set read-ahead for a mounted NFS filesystem, you can save the following script as a bash file, modify the file's permissions to make it executable (`chmod 544 readahead.sh`), and run it as shown.
+
+## How to show or set read-ahead values
+
+To show the current read-ahead value (the returned value is in KiB), run the following command:
+
+`$ ./readahead.sh show <mount-point>`
+
+To set a new value for read-ahead, run the following command:
+
+`$ ./readahead.sh set <mount-point> [read-ahead-kb]`
+
+### Example
+
+```
+#!/bin/bash
+# set | show readahead for a specific mount point
+# Useful for things like NFS and if you do not know / care about the backing device
+#
+# To the extent possible under law, Red Hat, Inc. has dedicated all copyright
+# to this software to the public domain worldwide, pursuant to the
+# CC0 Public Domain Dedication. This software is distributed without any warranty.
+# See <http://creativecommons.org/publicdomain/zero/1.0/>.
+#
+
+E_BADARGS=22
+function myusage() {
+echo "Usage: `basename $0` set|show <mount-point> [read-ahead-kb]"
+}
+
+if [ $# -gt 3 -o $# -lt 2 ]; then
+ myusage
+ exit $E_BADARGS
+fi
+
+MNT=${2%/}
+BDEV=$(grep $MNT /proc/self/mountinfo | awk '{ print $3 }')
+
+if [ $# -eq 3 -a $1 == "set" ]; then
+ echo $3 > /sys/class/bdi/$BDEV/read_ahead_kb
+elif [ $# -eq 2 -a $1 == "show" ]; then
+ echo "$MNT $BDEV /sys/class/bdi/$BDEV/read_ahead_kb = "$(cat /sys/class/bdi/$BDEV/read_ahead_kb)
+else
+ myusage
+ exit $E_BADARGS
+fi
+```
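+
+A hypothetical usage example, assuming the script above is saved as `readahead.sh` and an NFS volume is mounted at `/mnt/anf`:
+
+```bash
+# Show the current read-ahead value (in KiB) for the mount point.
+sudo ./readahead.sh show /mnt/anf
+
+# Set read-ahead to 15,360 KiB for the mount point.
+sudo ./readahead.sh set /mnt/anf 15360
+```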
+
+## How to persistently set read-ahead for NFS mounts
+
+To persistently set read-ahead for NFS mounts, `udev` rules can be written as follows:
+
+1. Create and test `/etc/udev/rules.d/99-nfs.rules`:
+
+ `SUBSYSTEM=="bdi", ACTION=="add", PROGRAM="/bin/awk -v bdi=$kernel 'BEGIN{ret=1} {if ($4 == bdi) {ret=0}} END{exit ret}' /proc/fs/nfsfs/volumes", ATTR{read_ahead_kb}="15380"`
+
+2. Apply the `udev` rule:
+
+ `$ udevadm control --reload`
+
+## Next steps
+
+* [Linux direct I/O best practices for Azure NetApp Files](performance-linux-direct-io.md)
+* [Linux filesystem cache best practices for Azure NetApp Files](performance-linux-filesystem-cache.md)
+* [Linux NFS mount options best practices for Azure NetApp Files](performance-linux-mount-options.md)
+* [Linux concurrency best practices](performance-linux-concurrency-session-slots.md)
+* [Azure virtual machine SKUs best practices](performance-virtual-machine-sku.md)
+* [Performance benchmarks for Linux](performance-benchmarks-linux.md)
azure-netapp-files Performance Virtual Machine Sku https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/performance-virtual-machine-sku.md
+
+ Title: Azure virtual machine SKUs best practices for Azure NetApp Files | Microsoft Docs
+description: Describes Azure NetApp Files best practices about Azure virtual machine SKUs, including differences within and between SKUs.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ms.devlang: na
+ Last updated : 07/02/2021++
+# Azure virtual machine SKUs best practices for Azure NetApp Files
+
+This article describes Azure NetApp Files best practices about Azure virtual machine SKUs, including differences within and between SKUs.
+
+## SKU selection considerations
+
+Storage performance involves more than the speed of the storage itself. The processor speed and architecture have a lot to do with the overall experience from any particular compute node. As part of the selection process for a given SKU, you should consider the following factors:
+
+* AMD or Intel: For example, SAS uses a math kernel library designed specifically for Intel processors. In this case, Intel SKUs are preferred over AMD SKUs.
+* The F2, E_v3, and D_v3 machine types are each based on more than one chipset. When using Azure Dedicated Hosts, you might select specific models (for example, Broadwell, Cascade Lake, or Skylake when selecting the E type). Otherwise, the chipset selection is non-deterministic. If you're deploying an HPC cluster and a consistent experience across the inventory is important, consider Azure Dedicated Hosts or single-chipset SKUs such as the E_v4 or D_v4.
+* Performance variability with network-attached storage (NAS) has been observed in testing with both the Intel Broadwell-based SKUs and the AMD EPYC™ 7551-based SKUs. Two issues have been observed:
+ * When the accelerated network interface is inappropriately mapped to a suboptimal NUMA node, read performance decreases significantly. Although mapping the accelerated networking interface to a specific NUMA node is beneficial on newer SKUs, it must be considered a requirement on SKUs with these chipsets (Lv2|E_v3|D_v3).
+ * Virtual machines running on the Lv2, or on the E_v3 or D_v3 with a Broadwell chipset, are more susceptible to resource contention than when running on other SKUs. When testing with multiple virtual machines running within a single Azure Dedicated Host, running a network-based storage workload from one virtual machine has been seen to decrease the performance of network-based storage workloads running from a second virtual machine. The decrease is more pronounced when any of the virtual machines on the node haven't had their accelerated network interface/NUMA node optimally mapped. Keep in mind that the E_v3 and D_v3 may land on Haswell, Broadwell, Cascade Lake, or Skylake hardware.
+
+For the most consistent performance when selecting virtual machines, select from SKUs with a single type of chipset; newer SKUs are preferred over the older models where available. Keep in mind that, aside from using a dedicated host, correctly predicting which type of hardware the E_v3 or D_v3 virtual machines land on is unlikely (a quick way to check the chipset at runtime is shown after the following list). When using the E_v3 or D_v3 SKU:
+
+* When a virtual machine is turned off, deallocated, and then turned on again, the virtual machine is likely to change hosts and, as such, hardware models.
+* When applications are deployed across multiple virtual machines, expect the virtual machines to run on heterogeneous hardware.
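+
+If you need to confirm which chipset a particular virtual machine actually landed on, a quick check such as the following sketch can help. The output is illustrative; the exact model string varies by host:
+
+```bash
+# Print the CPU model of the host the VM landed on; the model name reveals the chipset
+# (for example, "Intel(R) Xeon(R) Platinum 8272CL" indicates Cascade Lake).
+lscpu | grep "Model name"
+```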
+
+## Differences within and between SKUs
+
+The following table highlights the differences within and between SKUs. Note, for example, that the chipset of the underlying E_v3 and D_v3 varies among Broadwell, Cascade Lake, and Skylake, and in the case of the D_v3, Haswell as well.
+
+| Family | Version | Description | Frequency (GHz) |
+|-|-|-|-|
+| E | V3 | Intel® Xeon® E5-2673 v4 (Broadwell) | 2.3 (3.6) |
+| E | V3 | Intel® Xeon® Platinum 8272CL (Cascade Lake) | 2.6 (3.7) |
+| E | V3 | Intel® Xeon® Platinum 8171M (Skylake) | 2.1 (3.8) |
+| E | V4 | Intel® Xeon® Platinum 8272CL (Cascade Lake) | 2.6 (3.7) |
+| Ea | V4 | AMD EPYC™ 7452 | 2.35 (3.35) |
+| D | V3 | Intel® Xeon® E5-2673 v4 (Broadwell) | 2.3 (3.6) |
+| D | V3 | Intel® Xeon® E5-2673 v3 (Haswell) | 2.3 (2.3) |
+| D | V3 | Intel® Xeon® Platinum 8272CL (Cascade Lake) | 2.6 (3.7) |
+| D | V3 | Intel® Xeon® Platinum 8171M (Skylake) | 2.1 (3.8) |
+| D | V4 | Intel® Xeon® Platinum 8272CL (Cascade Lake) | 2.6 (3.7) |
+| Da | V4 | AMD EPYC™ 7452 | 2.35 (3.35) |
+| L | V2 | AMD EPYC™ 7551 | 2.0 (3.2) |
+| F | 1 | Intel® Xeon® E5-2673 v3 (Haswell) | 2.3 (2.3) |
+| F | 2 | Intel® Xeon® Platinum 8168M (Cascade Lake) | 2.7 (3.7) |
+| F | 2 | Gen 2 Intel® Xeon® Platinum 8272CL (Skylake) | 2.1 (3.8) |
+
+When preparing a multi-node SAS GRID environment for production, you might notice a repeatable one-hour-and-fifteen-minute variance between analytics runs with no difference other than the underlying hardware.
+
+| SKU and hardware platform | Job run times |
+|-|-|
+| E32-8_v3 (Broadwell) | 5.5 hours |
+| E32-8_v3 (Cascade Lake) | 4.25 hours |
+
+In both sets of tests, an E32-8_v3 SKU was selected, and RHEL 8.3 was used along with the `nconnect=8` mount option.
+
+## Best practices
+
+* Whenever possible, select the E_v4, D_v4, or newer rather than the E_v3 or D_v3 SKUs.
+* Whenever possible, select the Ed_v4, Dd_v4, or newer rather than the Lv2 SKU.
+
+## Next steps
+
+* [Linux direct I/O best practices for Azure NetApp Files](performance-linux-direct-io.md)
+* [Linux filesystem cache best practices for Azure NetApp Files](performance-linux-filesystem-cache.md)
+* [Linux NFS mount options best practices for Azure NetApp Files](performance-linux-mount-options.md)
+* [Linux concurrency best practices](performance-linux-concurrency-session-slots.md)
+* [Linux NFS read-ahead best practices](performance-linux-nfs-read-ahead.md)
+* [Performance benchmarks for Linux](performance-benchmarks-linux.md)
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/overview.md
Title: Bicep language for deploying Azure resources description: Describes the Bicep language for deploying infrastructure to Azure. It provides an improved authoring experience over using JSON to develop templates. Previously updated : 06/03/2021 Last updated : 07/02/2021 # What is Bicep?
They continue to function exactly as they always have. You don't need to make an
When you're ready, you can [decompile the JSON files to Bicep](./decompile.md).
+## Known limitations
+
+- No support for single-line objects and arrays. For example, `['a', 'b', 'c']` is not supported. For more information, see [Arrays](/data-types#arrays) and [Objects](/data-types#objects).
+- No support for breaking long lines into multiple lines. For example:
+
+ ```bicep
+ resource sa 'Microsoft.Storage/storageAccounts@2019-06-01' = if (newOrExisting == 'new') {
+ ...
+ }
+ ```
+
+ Can't be written as:
+
+ ```bicep
+ resource sa 'Microsoft.Storage/storageAccounts@2019-06-01' =
+ if (newOrExisting == 'new') {
+ ...
+ }
+ ```
+
+- No support for the concept of apiProfile, which is used to map a single apiProfile to a set of apiVersions for each resource type.
+- No support for user-defined functions (UDFs).
+ ## Next steps Get started with the [Quickstart](./quickstart-create-bicep-use-visual-studio-code.md).
azure-resource-manager Template Tutorial Deployment Script https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-tutorial-deployment-script.md
na ms.devlang: na Previously updated : 12/16/2020 Last updated : 07/02/2021
The deployment script adds a certificate to the key vault. Configure the key vau
# private key is added as a secret that can be retrieved in the Resource Manager template Add-AzKeyVaultCertificate -VaultName $vaultName -Name $certificateName -CertificatePolicy $policy -Verbose
- $newCert = Get-AzKeyVaultCertificate -VaultName $vaultName -Name $certificateName
- # it takes a few seconds for KeyVault to finish $tries = 0 do {
The deployment script adds a certificate to the key vault. Configure the key vau
} } while ($operation.Status -ne 'completed')
+ $newCert = Get-AzKeyVaultCertificate -VaultName $vaultName -Name $certificateName
$DeploymentScriptOutputs['certThumbprint'] = $newCert.Thumbprint $newCert | Out-String }
azure-video-analyzer Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/troubleshoot.md
If there are any additional issues that you may need help with, please **[collec
### Video Analyzer working with external modules
-Video Analyzer via the pipeline extension processors can extend the pipeline to send and receive data from other IoT Edge modules by using HTTP or gRPC protocols. As a [specific example](), this live pipeline can send video frames as images to an external inference module such as Yolo v3 and receive JSON-based analytics results using HTTP protocol . In such a topology, the destination for the events is mostly the IoT hub. In situations where you don't see the inference events on the hub, check for the following:
+Video Analyzer via the pipeline extension processors can extend the pipeline to send and receive data from other IoT Edge modules by using HTTP or gRPC protocols. As a [specific example](https://github.com/Azure/video-analyzer/tree/main/pipelines/live/topologies/httpExtension), this live pipeline can send video frames as images to an external inference module such as Yolo v3 and receive JSON-based analytics results using the HTTP protocol. In such a topology, the destination for the events is mostly the IoT hub. In situations where you don't see the inference events on the hub, check for the following:
- Check to see whether the hub that live pipeline is publishing to and the hub you're examining are the same. As you create multiple deployments, you might end up with multiple hubs and mistakenly check the wrong hub for events. - In Azure portal, check to see whether the external module is deployed and running. In the example image here, rtspsim, yolov3, tinyyolov3 and logAnalyticsAgent are IoT Edge modules running external to the avaedge module. [ ![Screenshot that displays the running status of modules in Azure IoT Hub.](./media/troubleshoot/iot-hub-azure.png) ](./media/troubleshoot/iot-hub-azure.png#lightbox) -- Check to see whether you're sending events to the correct URL endpoint. The external AI container exposes a URL and a port through which it receives and returns the data from POST requests. This URL is specified as an `endpoint: url` property for the HTTP extension processor. As seen in the [topology URL](), the endpoint is set to the inferencing URL parameter. Ensure that the default value for the parameter or the passed-in value is accurate. You can test to see whether it's working by using Client URL (cURL).
+- Check to see whether you're sending events to the correct URL endpoint. The external AI container exposes a URL and a port through which it receives and returns the data from POST requests. This URL is specified as an `endpoint: url` property for the HTTP extension processor. As seen in the [topology URL](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/httpExtension/topology.json), the endpoint is set to the inferencing URL parameter. Ensure that the default value for the parameter or the passed-in value is accurate. You can test to see whether it's working by using Client URL (cURL).
As an example, here is a Yolo v3 container that's running on local machine with an IP address of 172.17.0.3.
azure-vmware Concepts Private Clouds Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-private-clouds-clusters.md
As with other resources, private clouds are installed and managed from within an
The diagram shows a single Azure subscription with two private clouds that represent a development and production environment. In each of those private clouds are two clusters. ## Hosts
azure-vmware Connect Multiple Private Clouds Same Region https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/connect-multiple-private-clouds-same-region.md
+
+ Title: Connect multiple Azure VMware Solution private clouds in the same region (Preview)
+description: Learn how to create a network connection between two or more Azure VMware Solution private clouds located in the same region.
+ Last updated : 07/02/2021+
+#Customer intent: As an Azure service administrator, I want to create a network connection between two or more Azure VMware Solution private clouds located in the same region.
+++
+# Connect multiple Azure VMware Solution private clouds in the same region (Preview)
+
+The **AVS Interconnect** feature lets you create a network connection between two or more Azure VMware Solution private clouds located in the same region. It creates a routing link between the management and workload networks of the private clouds to enable network communication between the clouds.
+
+You can connect a private cloud to multiple private clouds, and the connections are non-transitive. For example, if _private cloud 1_ is connected to _private cloud 2_, and _private cloud 2_ is connected to _private cloud 3_, private clouds 1 and 3 would not communicate until they were directly connected.
+
+You can only connect to private clouds in the same region. To connect to private clouds that are in different regions, [use ExpressRoute Global Reach](tutorial-expressroute-global-reach-private-cloud.md) to connect your private clouds in the same way you connect your private cloud to your on-premises circuit.
+
+>[!IMPORTANT]
+>The AVS Interconnect (Preview) feature is currently in public preview.
+>This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+>For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Supported regions
+
+The AVS Interconnect (Preview) feature is available in all regions except for South Central US (SAT20), North Europe (DUB21), Southeast Asia (SG2), and UK West (CWL20).
+
+## Prerequisites
+
+- Write access to each private cloud you're connecting
+- Routed IP address space in each cloud is unique and doesn't overlap
+
+>[!NOTE]
+>The **AVS interconnect** feature doesn't check for overlapping IP space the way native Azure vNet peering does before creating the peering. Therefore, it's your responsibility to ensure that there isn't overlap between the private clouds.
+>
+>In Azure VMware Solution environments, it's possible to configure non-routed, overlapping IP deployments on NSX segments that aren't routed to Azure. These don't cause issues with the AVS Interconnect feature, as it only routes between the NSX T0 on each private cloud.
++
+## Add connection between private clouds
+
+1. In your Azure VMware Solution private cloud, under **Manage**, select **Connectivity**.
+
+2. Select the **AVS Interconnect** tab and then **Add**.
+
+ :::image type="content" source="media/networking/private-cloud-to-private-cloud-no-connections.png" alt-text="Screenshot showing the AVS Interconnect tab under Connectivity." border="true" lightbox="media/networking/private-cloud-to-private-cloud-no-connections.png":::
+
+3. Provide the required information and select the Azure VMware Solution private cloud for the new connection.
+
+ >[!NOTE]
+ >You can only connect to private clouds in the same region. To connect to private clouds that are in different regions, [use ExpressRoute Global Reach](tutorial-expressroute-global-reach-private-cloud.md) to connect your private clouds in the same way you connect your private cloud to your on-premises circuit.
+
+ :::image type="content" source="media/networking/add-connection-to-other-private-cloud.png" alt-text="Screenshot showing the required information to add a connection to other private cloud." border="true":::
++
+4. Select the **I confirm** checkbox acknowledging that there are no overlapping routed IP spaces in the two private clouds.
+
+5. Select **Create**. You can check the status of the connection creation.
+
+ :::image type="content" source="media/networking/add-connection-to-other-private-cloud-notification.png" alt-text="Screenshot showing the Notification information for connection in progress and an existing connection." border="true":::
+
+ You'll see all of your connections under **AVS Private Cloud**.
+
+ :::image type="content" source="media/networking/private-cloud-to-private-cloud-two-connections.png" alt-text="Screenshot showing the AVS Interconnect tab under Connectivity and two established private cloud connections." border="true" lightbox="media/networking/private-cloud-to-private-cloud-two-connections.png":::
++
+## Remove connection between private clouds
+
+1. In your Azure VMware Solution private cloud, under **Manage**, select **Connectivity**.
+
+2. For the connection you want to remove, on the right, select **Delete** (trash can) and then **Yes**.
++
+## Next steps
+
+Now that you've connected multiple private clouds in the same region, you may want to learn about:
+
+- [Move Azure VMware Solution resources to another region](move-azure-vmware-solution-across-regions.md)
+- [Move Azure VMware Solution subscription to another subscription](move-ea-csp-subscriptions.md)
azure-web-pubsub Quickstart Live Demo https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/quickstart-live-demo.md
Last updated 04/26/2021
-# Quickstart: Get started with live demo
+# Quickstart: Get started with chatroom live demo
-The Azure Web PubSub service helps you build real-time messaging web applications using WebSockets and the publish-subscribe pattern easily. In this quickstart, learn how to get started easily with a live demo.
+The Azure Web PubSub service helps you build real-time messaging web applications using WebSockets and the publish-subscribe pattern easily. The [chatroom live demo](https://azure.github.io/azure-webpubsub/demos/clientpubsub.html) demonstrates the real-time messaging capability provided by Azure Web PubSub. With this live demo, you can easily join a chat group and send real-time messages to a specific group.
++
+In this quickstart, learn how to get started easily with a live demo.
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] [!INCLUDE [create-instance-portal](includes/create-instance-portal.md)]
-## Get started with the live demo
+## Get started with the chatroom live demo
### Get client URL with a temp access token
As the first step, you need to get the Client URL from the Azure Web PubSub inst
- Set proper `Roles`: **Send To Groups** and **Join/Leave Groups** - Generate and copy the `Client Access URL`. + ### Try the live demo With this live demo, you could join or leave a group and send messages to the group members easily. -- Open [Client Pub/Sub Demo](https://azure.github.io/azure-webpubsub/demos/clientpubsub.html), paste the `Client Access URL` and Connect. -- Try different groups to join and different groups to send messages to, and see what messages are received.-- You can also try to uncheck `Roles` when generating the `Client Access URL` to see what will happen when join/leave a group or send messages to a group.
+- Open [chatroom live demo](https://azure.github.io/azure-webpubsub/demos/clientpubsub.html), paste the `Client Access URL` and Connect.
++
+> [!NOTE]
+> The **Client Access URL** is a convenience tool provided in the portal to simplify your getting-started experience. You can also use this Client Access URL to do a quick connectivity test, as shown in the sketch after the following list. To write your own application, we provide SDKs in four languages to help you generate the URL.
+
+- Try different groups to join and different groups to send messages to, and see what messages are received. For example:
+ - Have two clients join the same group. You'll see that messages are broadcast to all group members.
+ - Have two clients join different groups. You'll see that a client doesn't receive a message if it isn't a member of the group.
+- You can also try to uncheck `Roles` when generating the `Client Access URL` to see what happens when you join a group or send messages to a group. For example:
+ - Uncheck the `Send to Groups` permission. You will see that the client cannot send messages to the group.
+ - Uncheck the `Join/Leave Groups` permission. You will see that the client cannot join a group.
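+
+As a minimal sketch of that quick connectivity test from a terminal (Node.js is assumed to be installed, and `wscat` is just one of several WebSocket clients you could use):
+
+```bash
+# Install a simple WebSocket client and connect with the Client Access URL copied from the portal.
+# A successful connection confirms that the URL and its embedded access token are valid.
+npm install -g wscat
+wscat -c "<your-client-access-url>"
+```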
+
+## Next steps
+
+In this quickstart, you learned the real-time messaging capability with the chatroom live demo. Now, you could start to build your own application.
+
+> [!div class="nextstepaction"]
+> [Quick start: publish and subscribe messages in Azure Web PubSub](https://azure.github.io/azure-webpubsub/getting-started/publish-messages/js-publish-message)
+
+> [!div class="nextstepaction"]
+> [Quick start: Create a simple chatroom with Azure Web PubSub](https://azure.github.io/azure-webpubsub/getting-started/create-a-chat-app/js-handle-events)
+
+> [!div class="nextstepaction"]
+> [Quickstart: Create a serverless simple chat application with Azure Functions and Azure Web PubSub service](./quickstart-serverless.md)
+
+> [!div class="nextstepaction"]
+> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples)
backup Backup Azure Security Feature Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-security-feature-cloud.md
This flow chart shows the different steps and states of a backup item when Soft
## Enabling and disabling soft delete
-Soft delete is enabled by default on newly created vaults to protect backup data from accidental or malicious deletes. Disabling this feature isn't recommended. The only circumstance where you should consider disabling soft delete is if you're planning on moving your protected items to a new vault, and can't wait the 14 days required before deleting and reprotecting (such as in a test environment.) Only the vault owner can disable this feature. If you disable this feature, all future deletions of protected items will result in immediate removal, without the ability to restore. Backup data that exists in soft deleted state before disabling this feature, will remain in soft deleted state for the period of 14 days. If you wish to permanently delete these immediately, then you need to undelete and delete them again to get permanently deleted.
+Soft delete is enabled by default on newly created vaults to protect backup data from accidental or malicious deletes. Disabling this feature isn't recommended. The only circumstance where you should consider disabling soft delete is if you're planning on moving your protected items to a new vault, and can't wait the 14 days required before deleting and reprotecting (such as in a test environment). Only a vault owner with the Contributor role (which provides permissions to perform Microsoft.RecoveryServices/Vaults/backupconfig/write on the vault) can disable this feature. If you disable this feature, all future deletions of protected items will result in immediate removal, without the ability to restore. Backup data that exists in soft deleted state before disabling this feature will remain in soft deleted state for the period of 14 days. If you want to permanently delete this data immediately, you need to undelete it and delete it again to have it permanently deleted.
It's important to remember that once soft delete is disabled, the feature is disabled for all the types of workloads. For example, it's not possible to disable soft delete only for SQL server or SAP HANA DBs while keeping it enabled for virtual machines in the same vault. You can create separate vaults for granular control.
backup Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-matrix.md
The following table describes the features of Recovery Services vaults:
**Move vaults** | You can [move vaults](./backup-azure-move-recovery-services-vault.md) across subscriptions or between resource groups in the same subscription. However, moving vaults across regions isn't supported. **Move data between vaults** | Moving backed-up data between vaults isn't supported. **Modify vault storage type** | You can modify the storage replication type (either geo-redundant storage or locally redundant storage) for a vault before backups are stored. After backups begin in the vault, the replication type can't be modified.
-**Zone-redundant storage (ZRS)** | Available in the UK South (UKS) and South East Asia (SEA) regions.
+**Zone-redundant storage (ZRS)** | Available in the UK South, Southeast Asia, Australia East, North Europe, and Central US regions.
**Private Endpoints** | See [this section](./private-endpoints.md#before-you-start) for requirements to create private endpoints for a recovery service vault. ## On-premises backup support
Azure Backup has added the Cross Region Restore feature to strengthen data avail
[green]: ./media/backup-support-matrix/green.png [yellow]: ./media/backup-support-matrix/yellow.png
-[red]: ./media/backup-support-matrix/red.png
+[red]: ./media/backup-support-matrix/red.png
backup Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/private-endpoints.md
This article will help you understand the process of creating private endpoints
- Once a private endpoint is created for a vault, the vault will be locked down. It won't be accessible (for backups and restores) from networks apart from ones that contain a private endpoint for the vault. If all private endpoints for the vault are removed, the vault will be accessible from all networks. - A private endpoint connection for Backup uses a total of 11 private IPs in your subnet, including those used by Azure Backup for storage. This number may be higher (up to 25) for certain Azure regions. So we suggest that you have enough private IPs available when you attempt to create private endpoints for Backup. - While a Recovery Services vault is used by (both) Azure Backup and Azure Site Recovery, this article discusses use of private endpoints for Azure Backup only.-- Azure Active Directory doesn't currently support private endpoints. So IPs and FQDNs required for Azure Active Directory to work in a region will need to be allowed outbound access from the secured network when performing backup of databases in Azure VMs and backup using the MARS agent. You can also use NSG tags and Azure Firewall tags for allowing access to Azure AD, as applicable.
+- Private endpoints for Backup don't include access to Azure Active Directory (Azure AD); access to Azure AD needs to be ensured separately. So, IPs and FQDNs required for Azure AD to work in a region will need outbound access to be allowed from the secured network when performing backup of databases in Azure VMs and backup using the MARS agent. You can also use NSG tags and Azure Firewall tags for allowing access to Azure AD, as applicable.
- Virtual networks with Network Policies aren't supported for Private Endpoints. You'll need to [disable Network Polices](../private-link/disable-private-endpoint-network-policy.md) before continuing. - You need to re-register the Recovery Services resource provider with the subscription if you registered it before May 1 2020. To re-register the provider, go to your subscription in the Azure portal, navigate to **Resource provider** on the left navigation bar, then select **Microsoft.RecoveryServices** and select **Re-register**. - [Cross-region restore](backup-create-rs-vault.md#set-cross-region-restore) for SQL and SAP HANA database backups aren't supported if the vault has private endpoints enabled.
This article will help you understand the process of creating private endpoints
While private endpoints are enabled for the vault, they're used for backup and restore of SQL and SAP HANA workloads in an Azure VM and MARS agent backup only. You can use the vault for backup of other workloads as well (they won't require private endpoints though). In addition to backup of SQL and SAP HANA workloads and backup using the MARS agent, private endpoints are also used to perform file recovery for Azure VM backup. For more information, see the following table:
-| Backup of workloads in Azure VM (SQL, SAP HANA), Backup using MARS Agent | Use of private endpoints is recommended to allow backup and restore without needing to add to an allow list any IPs/FQDNs for Azure Backup or Azure Storage from your virtual networks. In that scenario, ensure that VMs that host SQL databases can reach Azure AD IPs or FQDNs. |
+| Backup of workloads in Azure VM (SQL, SAP HANA), Backup using MARS Agent | Use of private endpoints is recommended to allow backup and restore without needing to add to an allowlist any IPs/FQDNs for Azure Backup or Azure Storage from your virtual networks. In that scenario, ensure that VMs that host SQL databases can reach Azure AD IPs or FQDNs. |
| | | | **Azure VM backup** | VM backup doesn't require you to allow access to any IPs or FQDNs. So it doesn't require private endpoints for backup and restore of disks. <br><br> However, file recovery from a vault containing private endpoints would be restricted to virtual networks that contain a private endpoint for the vault. <br><br> When using ACLΓÇÖed unmanaged disks, ensure the storage account containing the disks allows access to **trusted Microsoft services** if it's ACLΓÇÖed. | | **Azure Files backup** | Azure Files backups are stored in the local storage account. So it doesn't require private endpoints for backup and restore. |
$privateEndpoint = New-AzPrivateEndpoint `
## Frequently asked questions
-Q. Can I create a private endpoint for an existing Backup vault?<br>
-A. No, private endpoints can be created for new Backup vaults only. So the vault must not have ever had any items protected to it. In fact, no attempts to protect any items to the vault can be made before creating private endpoints.
+### Can I create a private endpoint for an existing Backup vault?
-Q. I tried to protect an item to my vault, but it failed and the vault still doesn't contain any items protected to it. Can I create private endpoints for this vault?<br>
-A. No, the vault must not have had any attempts to protect any items to it in the past.
+No, private endpoints can be created for new Backup vaults only. So the vault must not have ever had any items protected to it. In fact, no attempts to protect any items to the vault can be made before creating private endpoints.
-Q. I have a vault that's using private endpoints for backup and restore. Can I later add or remove private endpoints for this vault even if I have backup items protected to it?<br>
-A. Yes. If you already created private endpoints for a vault and protected backup items to it, you can later add or remove private endpoints as required.
+### I tried to protect an item to my vault, but it failed and the vault still doesn't contain any items protected to it. Can I create private endpoints for this vault?
-Q. Can the private endpoint for Azure Backup also be used for Azure Site Recovery?<br>
-A. No, the private endpoint for Backup can only be used for Azure Backup. You'll need to create a new private endpoint for Azure Site Recovery, if it's supported by the service.
+No, the vault must not have had any attempts to protect any items to it in the past.
-Q. I missed one of the steps in this article and went on to protect my data source. Can I still use private endpoints?<br>
-A. Not following the steps in the article and continuing to protect items may lead to the vault not being able to use private endpoints. It's therefore recommended you refer to this checklist before proceeding to protect items.
+### I have a vault that's using private endpoints for backup and restore. Can I later add or remove private endpoints for this vault even if I have backup items protected to it?
-Q. Can I use my own DNS server instead of using the Azure private DNS zone or an integrated private DNS zone?<br>
-A. Yes, you can use your own DNS servers. However, make sure all required DNS records are added as suggested in this section.
+Yes. If you already created private endpoints for a vault and protected backup items to it, you can later add or remove private endpoints as required.
-Q. Do I need to perform any additional steps on my server after I've followed the process in this article?<br>
-A. After following the process detailed in this article, you don't need to do additional work to use private endpoints for backup and restore.
+### Can the private endpoint for Azure Backup also be used for Azure Site Recovery?
+
+No, the private endpoint for Backup can only be used for Azure Backup. You'll need to create a new private endpoint for Azure Site Recovery, if it's supported by the service.
+
+### I missed one of the steps in this article and went on to protect my data source. Can I still use private endpoints?
+
+Not following the steps in the article and continuing to protect items may lead to the vault not being able to use private endpoints. It's therefore recommended you refer to this checklist before proceeding to protect items.
+
+### Can I use my own DNS server instead of using the Azure private DNS zone or an integrated private DNS zone?
+
+Yes, you can use your own DNS servers. However, make sure all required DNS records are added as suggested in this section.
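+
+For example, a minimal check run from a VM inside the virtual network might look like the following sketch. The FQDN is a placeholder, so use one of the FQDNs listed on the private endpoint's DNS configuration:
+
+```bash
+# Verify that a backup service FQDN (copied from the private endpoint's DNS configuration)
+# resolves to a private IP address from your subnet rather than a public IP.
+nslookup <fqdn-from-private-endpoint-dns-configuration>
+```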
+
+### Do I need to perform any additional steps on my server after I've followed the process in this article?
+
+After following the process detailed in this article, you don't need to do additional work to use private endpoints for backup and restore.
## Next steps
baremetal-infrastructure Connect Baremetal Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/connect-baremetal-infrastructure.md
Title: Connect BareMetal Infrastructure instances in Azure description: Learn how to identify and interact with BareMetal instances in the Azure portal or Azure CLI. Previously updated : 04/06/2021 Last updated : 07/01/2021 # Connect BareMetal Infrastructure instances in Azure
az baremetalinstance update --resource-group DSM05a-T550 --instance-name orcllab
When you acquire the instances, you can go to the Properties section to view the data collected about the instances. Data collected includes the Azure connectivity, storage backend, ExpressRoute circuit ID, unique resource ID, and the subscription ID. You'll use this information in support requests or when setting up storage snapshot configuration.
-Another critical piece of information you'll see is the storage NFS IP address. It isolates your storage to your **tenant** in the BareMetal instance stack. You'll use this IP address when you edit the [configuration file for storage snapshot backups](../virtual-machines/workloads/sap/hana-backup-restore.md#set-up-storage-snapshots).
+Another critical piece of information you'll see is the storage NFS IP address. It isolates your storage to your **tenant** in the BareMetal instance stack. You'll use this IP address when you [configure the Azure Application Consistent Snapshot tool](../azure-netapp-files/azacsnap-cmd-ref-configure.md).
:::image type="content" source="media/connect-baremetal-infrastructure/baremetal-instance-properties.png" alt-text="Screenshot showing the BareMetal instance property settings." lightbox="media/connect-baremetal-infrastructure/baremetal-instance-properties.png":::
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 6/14/2021 Last updated : 7/1/2021
The following tables show the Microsoft Security Response Center (MSRC) updates
## June 2021 Guest OS
->[!NOTE]
-
->The June Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the June Guest OS. This list is subject to change.
| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | | | | | | |
-| Rel 21-06 | [5003646] | Latest Cumulative Update(LCU) | 6.32 | June 8, 2021 |
-| Rel 21-06 | [4580325] | Flash update | 3.98, 4.91, 5.56, 6.32 | Oct 13, 2020 |
-| Rel 21-06 | [5003636] | IE Cumulative Updates | 2.111, 3.98, 4.91 | June 8, 2021 |
-| Rel 21-06 | [5003638] | Latest Cumulative Update(LCU) | 5.56 | June 8, 2021 |
-| Rel 21-06 | [4578952] | .NET Framework 3.5 Security and Quality Rollup  | 2.111 | Oct 13, 2020 |
-| Rel 21-06 | [4578955] | .NET Framework 4.5.2 Security and Quality Rollup  | 2.111 | Oct 13, 2020 |
-| Rel 21-06 | [4578953] | .NET Framework 3.5 Security and Quality Rollup  | 4.91 | Oct 13, 2020 |
-| Rel 21-06 | [4578956] | .NET Framework 4.5.2 Security and Quality Rollup  | 4.91 | Oct 13, 2020 |
-| Rel 21-06 | [4578950] | .NET Framework 3.5 Security and Quality Rollup  | 3.98 | Oct 13, 2020 |
-| Rel 21-06 | [4578954] | . NET Framework 4.5.2 Security and Quality Rollup  | 3.98 | Oct 13, 2020 |
-| Rel 21-06 | [4601060] | . NET Framework 3.5 and 4.7.2 Cumulative Update  | 6.32 | Feb 9, 2021 |
-| Rel 21-06 | [5003667] | Monthly Rollup  | 2.111 | June 8, 2021 |
-| Rel 21-06 | [5003697] | Monthly Rollup  | 3.98 | June 8, 2021 |
-| Rel 21-06 | [5003671] | Monthly Rollup  | 4.91 | June 8, 2021 |
-| Rel 21-06 | [5001401] | Servicing Stack update  | 3.98 | Apr 13, 2021 |
-| Rel 21-06 | [5001403] | Servicing Stack update  | 4.91 | Apr 13, 2021 |
-| Rel 21-06 OOB | [4578013] | Standalone Security Update  | 4.91 | Aug 19, 2020 |
-| Rel 21-06 | [5001402] | Servicing Stack update  | 5.56 | Apr 13, 2021 |
-| Rel 21-06 | [4592510] | Servicing Stack update  | 2.111 | Dec 8, 2020 |
-| Rel 21-06 | [5003711] | Servicing Stack update  | 6.32 | June 8, 2021 |
-| Rel 21-06 | [4494175] | Microcode  | 5.56 | Sep 1, 2020 |
-| Rel 21-06 | [4494174] | Microcode  | 6.32 | Sep 1, 2020 |
-| Rel 21-06 | [4052623] | Update for Microsoft Defender antimalware platform | 6.32, 5.56 | May 13, 2021 |
+| Rel 21-06 | [5003646] | Latest Cumulative Update(LCU) | [6.32] | June 8, 2021 |
+| Rel 21-06 | [4580325] | Flash update | [3.98], [4.91], [5.56], [6.32] | Oct 13, 2020 |
+| Rel 21-06 | [5003636] | IE Cumulative Updates | [2.111], [3.98], [4.91] | June 8, 2021 |
+| Rel 21-06 | [5003638] | Latest Cumulative Update(LCU) | [5.56] | June 8, 2021 |
+| Rel 21-06 | [4578952] | .NET Framework 3.5 Security and Quality Rollup  | [2.111] | Oct 13, 2020 |
+| Rel 21-06 | [4578955] | .NET Framework 4.5.2 Security and Quality Rollup  | [2.111] | Oct 13, 2020 |
+| Rel 21-06 | [4578953] | .NET Framework 3.5 Security and Quality Rollup  | [4.91] | Oct 13, 2020 |
+| Rel 21-06 | [4578956] | .NET Framework 4.5.2 Security and Quality Rollup  | [4.91] | Oct 13, 2020 |
+| Rel 21-06 | [4578950] | .NET Framework 3.5 Security and Quality Rollup  | [3.98] | Oct 13, 2020 |
+| Rel 21-06 | [4578954] | .NET Framework 4.5.2 Security and Quality Rollup | [3.98] | Oct 13, 2020 |
+| Rel 21-06 | [4601060] | .NET Framework 3.5 and 4.7.2 Cumulative Update | [6.32] | Feb 9, 2021 |
+| Rel 21-06 | [5003667] | Monthly Rollup  | [2.111] | June 8, 2021 |
+| Rel 21-06 | [5003697] | Monthly Rollup  | [3.98] | June 8, 2021 |
+| Rel 21-06 | [5003671] | Monthly Rollup  | [4.91] | June 8, 2021 |
+| Rel 21-06 | [5001401] | Servicing Stack update  | [3.98] | Apr 13, 2021 |
+| Rel 21-06 | [5001403] | Servicing Stack update  | [4.91] | Apr 13, 2021 |
+| Rel 21-06 OOB | [4578013] | Standalone Security Update  | [4.91] | Aug 19, 2020 |
+| Rel 21-06 | [5001402] | Servicing Stack update  | [5.56] | Apr 13, 2021 |
+| Rel 21-06 | [4592510] | Servicing Stack update  | [2.111] | Dec 8, 2020 |
+| Rel 21-06 | [5003711] | Servicing Stack update  | [6.32] | June 8, 2021 |
+| Rel 21-06 | [4494175] | Microcode  | [5.56] | Sep 1, 2020 |
+| Rel 21-06 | [4494174] | Microcode  | [6.32] | Sep 1, 2020 |
+| Rel 21-06 | [4052623] | Update for Microsoft Defender antimalware platform | [6.32], [5.56] | May 13, 2021 |
[5003646]: https://support.microsoft.com/kb/5003646 [4580325]: https://support.microsoft.com/kb/4580325
The following tables show the Microsoft Security Response Center (MSRC) updates
[4494175]: https://support.microsoft.com/kb/4494175 [4494174]: https://support.microsoft.com/kb/4494174 [4052623]: https://support.microsoft.com/kb/4052623
+[2.111]: ./cloud-services-guestos-update-matrix.md#family-2-releases
+[3.98]: ./cloud-services-guestos-update-matrix.md#family-3-releases
+[4.91]: ./cloud-services-guestos-update-matrix.md#family-4-releases
+[5.56]: ./cloud-services-guestos-update-matrix.md#family-5-releases
+[6.32]: ./cloud-services-guestos-update-matrix.md#family-6-releases
## May 2021 Guest OS
The following tables show the Microsoft Security Response Center (MSRC) updates
| Rel 19-10  | [4516655] | SSU  | [2.91] | Sept 10, 2019 | | Rel 19-10  | [4516055] | Non-Security  | [3.78] | Sept 10, 2019 | | Rel 19-10  | [4512939] | SSU  | [3.78] | Sept 10, 2019 |
-| Rel 19-10  | [4514370] | .Net Framework 3.5  | [3.78] | Sept 10, 2019 |
-| Rel 19-10  | [4514368] | .Net Framework 4.5.2  | [3.78] | Sept 10, 2019 |
+| Rel 19-10  | [4514370] | .NET Framework 3.5  | [3.78] | Sept 10, 2019 |
+| Rel 19-10  | [4514368] | .NET Framework 4.5.2  | [3.78] | Sept 10, 2019 |
| Rel 19-10  | [4516067] | Non Security  | [4.71] | Sept 10, 2019 | | Rel 19-10  | [4512938] | SSU  | [4.71] | Sept 10, 2019 |
-| Rel 19-10  | [4514371] | .Net Framework 3.5  | [4.71] | Sept 10, 2019 |
-| Rel 19-10  | [4514367] | .Net Framework 4.5.2  | [4.71] | Sept 10, 2019 |
+| Rel 19-10  | [4514371] | .NET Framework 3.5  | [4.71] | Sept 10, 2019 |
+| Rel 19-10  | [4514367] | .NET Framework 4.5.2  | [4.71] | Sept 10, 2019 |
| Rel 19-10  | [4512574] | SSU  | [5.36] | Sept 10, 2019 | | Rel 19-10  | [4512577] | SSU  | [6.12] | Sept 10, 2019 |
cloud-services Cloud Services Guestos Update Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-update-matrix.md
na Previously updated : 6/9/2021 Last updated : 7/1/2021 # Azure Guest OS releases and SDK compatibility matrix
Unsure about how to update your Guest OS? Check [this][cloud updates] out.
## News updates
+###### **July 1, 2021**
+The June Guest OS has released.
###### **May 26, 2021** The May Guest OS has released.
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-6.32_202106-01 | July 1, 2021 | Post 6.34 |
| WA-GUEST-OS-6.31_202105-01 | May 26, 2021 | Post 6.33 |
-| WA-GUEST-OS-6.30_202104-01 | April 30, 2021 | Post 6.32 |
+|~~WA-GUEST-OS-6.30_202104-01~~| April 30, 2021 | July 1, 2021 |
|~~WA-GUEST-OS-6.29_202103-01~~| March 28, 2021 | May 26, 2021 | |~~WA-GUEST-OS-6.28_202102-01~~| February 19, 2021 | April 30, 2021 | |~~WA-GUEST-OS-6.27_202101-01~~| February 5, 2021 | March 28, 2021 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-5.56_202106-01 | July 1, 2021 | Post 5.58 |
| WA-GUEST-OS-5.55_202105-01 | May 26, 2021 | Post 5.57 |
-| WA-GUEST-OS-5.54_202104-01 | April 30, 2021 | Post 5.56 |
+|~~WA-GUEST-OS-5.54_202104-01~~| April 30, 2021 | July 1, 2021 |
|~~WA-GUEST-OS-5.53_202103-01~~| March 28, 2021 | May 26, 2021 | |~~WA-GUEST-OS-5.52_202102-01~~| February 19, 2021 | April 30, 2021 | |~~WA-GUEST-OS-5.51_202101-01~~| February 5, 2021 | March 28, 2021 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-4.91_202106-01 | July 1, 2021 | Post 4.93 |
| WA-GUEST-OS-4.90_202105-01 | May 26, 2021 | Post 4.92 |
-| WA-GUEST-OS-4.89_202104-01 | April 30, 2021 | Post 4.91 |
+|~~WA-GUEST-OS-4.89_202104-01~~| April 30, 2021 | July 1, 2021 |
|~~WA-GUEST-OS-4.88_202103-01~~| March 28, 2021 | May 26, 2021 | |~~WA-GUEST-OS-4.87_202102-01~~| February 19, 2021 | April 30, 2021 | |~~WA-GUEST-OS-4.86_202101-01~~| February 5, 2021 | March 28, 2021 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-3.98_202106-01 | July 1, 2021 | Post 3.100 |
| WA-GUEST-OS-3.97_202105-01 | May 26, 2021 | Post 3.99 |
-| WA-GUEST-OS-3.96_202104-01 | April 30, 2021 | Post 3.98 |
+|~~WA-GUEST-OS-3.96_202104-01~~| April 30, 2021 | July 1, 2021 |
|~~WA-GUEST-OS-3.95_202103-01~~| March 28, 2021 | May 26, 2021 | |~~WA-GUEST-OS-3.94_202102-01~~| February 19, 2021 | April 30, 2021 | |~~WA-GUEST-OS-3.93_202101-01~~| February 5, 2021 | March 28, 2021 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-2.111_202106-01 | July 1, 2021 | Post 2.113 |
| WA-GUEST-OS-2.110_202105-01 | May 26, 2021 | Post 2.112 |
-| WA-GUEST-OS-2.109_202104-01 | April 30, 2021 | Post 2.111 |
+|~~WA-GUEST-OS-2.109_202104-01~~| April 30, 2021 | July 1, 2021 |
|~~WA-GUEST-OS-2.108_202103-01~~| March 28, 2021 | May 26, 2021 | |~~WA-GUEST-OS-2.107_202102-01~~| February 19, 2021 | April 30, 2021 | |~~WA-GUEST-OS-2.106_202101-01~~| February 5, 2021 | March 28, 2021 |
cognitive-services Best Practices Multivariate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/concepts/best-practices-multivariate.md
keywords: anomaly detection, machine learning, algorithms
-# Multivariate time series Anomaly Detector best practices
+# Multivariate Anomaly Detector best practices
-This article will provide guidance around recommended practices to follow when using the multivariate Anomaly Detector APIs.
+This article will provide guidance around recommended practices to follow when using the multivariate Anomaly Detector (MVAD) APIs.
+In this tutorial, you'll:
-## Training data
+> [!div class="checklist"]
+> * **API usage**: Learn how to use MVAD without errors.
+> * **Data engineering**: Learn how to best cook your data so that MVAD performs with better accuracy.
+> * **Common pitfalls**: Learn how to avoid common pitfalls that customers meet.
+> * **FAQ**: Learn answers to frequently asked questions.
-### Data schema
-To use the Anomaly Detector multivariate APIs, you need to first train your own models. Training data is a set of multiple time series that meet the following requirements:
+## API usage
-Each time series should be a CSV file with two (and only two) columns, **"timestamp"** and **"value"** (all in lowercase) as the header row. The "timestamp" values should conform to ISO 8601; the "value" could be integers or decimals with any number of decimal places. For example:
+Follow the instructions in this section to avoid errors while using MVAD. If you still get errors, please refer to the [full list of error codes](./troubleshoot.md) for explanations and actions to take.
-|timestamp | value|
-|-|-|
-|2019-04-01T00:00:00Z| 5|
-|2019-04-01T00:01:00Z| 3.6|
-|2019-04-01T00:02:00Z| 4|
-|`...`| `...` |
-Each CSV file should be named after a different variable that will be used for model training. For example, "temperature.csv" and "humidity.csv". All the CSV files should be zipped into one zip file without any subfolders. The zip file can have whatever name you want. The zip file should be uploaded to Azure Blob storage. Once you generate the [blob SAS (Shared access signatures) URL](../../../storage/common/storage-sas-overview.md) for the zip file, it can be used for training. Refer to this document for how to generate SAS URLs from Azure Blob Storage.
++
+## Data engineering
+
+Now you're able to run your code with MVAD APIs without any errors. What can be done to improve your model's accuracy?
### Data quality-- As the model learns normal patterns from historical data, the training data should **represent the overall normal state of the system**. It is hard for the model to learn these types of patterns if the training data is full of anomalies. -- The model has millions of parameters and it needs a minimum number of data points to learn an optimal set of parameters. The general rule is that you need to provide **at least 15,000 data points per variable** to properly train the model. The more data, the better the model.-- In general, the **missing value ratio of training data should be under 20%**. Too much missing data may end up with automatically filled values (usually straight segments or constant values) being learnt as normal patterns. That may result in real data points being detected as anomalies.
- However, there are cases when a high ratio is acceptable. For example, if you have two time series in a group using `Outer` mode to align timestamps. One has one-minute granularity, the other one has hourly granularity. Then the hourly time series by nature has at least 59 / 60 = 98.33% missing data points. In such cases, it's fine to fill the hourly time series using the only value available if it does not fluctuate too much typically.
+* As the model learns normal patterns from historical data, the training data should represent the **overall normal** state of the system. It's hard for the model to learn these types of patterns if the training data is full of anomalies. An empirical threshold for the abnormal rate is **1%** or below for good accuracy.
+* In general, the **missing value ratio of training data should be under 20%**. Too much missing data may end up with automatically filled values (usually linear values or constant values) being learnt as normal patterns. That may result in real (not missing) data points being detected as anomalies.
+ However, there are cases when a high missing ratio is acceptable. For example, if you have two variables (time series) in a group using `Outer` mode to align their timestamps. One of them has one-minute granularity, the other one has hourly granularity. Then the hourly variable by nature has at least 59 / 60 = 98.33% missing data points. In such cases, it's fine to fill the hourly variable using the only value available (not missing) if it typically does not fluctuate too much.
+
+### Data quantity
+
+* The underlying model of MVAD has millions of parameters. It needs a minimum number of data points to learn an optimal set of parameters. The empirical rule is that you need to provide **15,000 or more data points (timestamps) per variable** to train the model for good accuracy. In general, the more training data, the better the accuracy. However, in cases when you're not able to accrue that much data, we still encourage you to experiment with less data and see if the compromised accuracy is still acceptable.
+* Every time you call the inference API, you need to ensure that the source data file contains just enough data points. That is normally `slidingWindow` + the number of data points that **really** need inference results. For example, in a streaming case where every call runs inference on **ONE** new timestamp, the data file could contain only the leading `slidingWindow` plus **ONE** data point; then you could move on and create another zip file with the same number of data points (`slidingWindow` + 1) but shifted ONE step to the "right" side, and submit it for another inference job.
+
+ Anything beyond that, or "before" the leading sliding window, won't impact the inference result at all and may only cause a performance downgrade. Anything below that may lead to a `NotEnoughInput` error. A minimal sketch of trimming the inference data this way is shown below.
++
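+
+As an illustrative sketch only (the variable file names and the `slidingWindow` value of 1,440 are assumptions), the following keeps just the header plus the trailing `slidingWindow` + 1 rows of each variable's CSV before zipping the files for inference:
+
+```bash
+# Keep only the header row plus the latest slidingWindow + 1 rows of each variable's CSV,
+# then zip the trimmed files with no subfolders inside the archive.
+SLIDING_WINDOW=1440
+mkdir -p inference
+for f in temperature.csv humidity.csv; do
+  head -n 1 "$f" > "inference/$f"                        # header row
+  tail -n $((SLIDING_WINDOW + 1)) "$f" >> "inference/$f" # latest window plus the new data point
+done
+(cd inference && zip ../inference.zip *.csv)
+```
+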
+### Timestamp round-up
+
+In a group of variables (time series), each variable may be collected from an independent source. The timestamps of different variables may be inconsistent with each other and with the known frequencies. Here is a simple example.
+
+*Variable-1*
+
+| timestamp | value |
+| | -- |
+| 12:00:01 | 1.0 |
+| 12:00:35 | 1.5 |
+| 12:01:02 | 0.9 |
+| 12:01:31 | 2.2 |
+| 12:02:08 | 1.3 |
+
+*Variable-2*
+
+| timestamp | value |
+| | -- |
+| 12:00:03 | 2.2 |
+| 12:00:37 | 2.6 |
+| 12:01:09 | 1.4 |
+| 12:01:34 | 1.7 |
+| 12:02:04 | 2.0 |
+
+We have two variables collected from two sensors which send one data point every 30 seconds. However, the sensors are not sending data points at a strict even frequency, but sometimes earlier and sometimes later. Because MVAD will take into consideration correlations between different variables, timestamps must be properly aligned so that the metrics can correctly reflect the condition of the system. In the above example, timestamps of variable 1 and variable 2 must be properly 'rounded' to their frequency before alignment.
+
+Let's see what happens if they're not pre-processed. If we set `alignMode` to be `Outer` (which means union of two sets), the merged table will be
+
+| timestamp | Variable-1 | Variable-2 |
+| | -- | -- |
+| 12:00:01 | 1.0 | `nan` |
+| 12:00:03 | `nan` | 2.2 |
+| 12:00:35 | 1.5 | `nan` |
+| 12:00:37 | `nan` | 2.6 |
+| 12:01:02 | 0.9 | `nan` |
+| 12:01:09 | `nan` | 1.4 |
+| 12:01:31 | 2.2 | `nan` |
+| 12:01:34 | `nan` | 1.7 |
+| 12:02:04 | `nan` | 2.0 |
+| 12:02:08 | 1.3 | `nan` |
+
+`nan` indicates missing values. Obviously, the merged table is not what you might have expected. Variable 1 and variable 2 interleave, and the MVAD model cannot extract information about correlations between them. If we set `alignMode` to `Inner`, the merged table will be empty as there is no common timestamp in variable 1 and variable 2.
+
+Therefore, the timestamps of variable 1 and variable 2 should be pre-processed (rounded to the nearest 30-second timestamps) and the new time series are
-## Parameters
+*Variable-1*
-### Sliding window
+| timestamp | value |
+| | -- |
+| 12:00:00 | 1.0 |
+| 12:00:30 | 1.5 |
+| 12:01:00 | 0.9 |
+| 12:01:30 | 2.2 |
+| 12:02:00 | 1.3 |
-Multivariate anomaly detection takes a segment of data points of length `slidingWindow` as input and decides if the next data point is an anomaly. The larger the sample length, the more data will be considered for a decision. You should keep two things in mind when choosing a proper value for `slidingWindow`: properties of input data, and the trade-off between training/inference time and potential performance improvement. `slidingWindow` consists of an integer between 28 and 2880. You may decide how many data points are used as inputs based on whether your data is periodic, and the sampling rate for your data.
+*Variable-2*
-When your data is periodic, you may include 1 - 3 cycles as an input and when your data is sampled at a high frequency (small granularity) like minute-level or second-level data, you may select more data as an input. Another issue is that longer inputs may cause longer training/inference time, and there is no guarantee that more input points will lead to performance gains. Whereas too few data points, may make the model difficult to converge to an optimal solution. For example, it is hard to detect anomalies when the input data only has two points.
+| timestamp | value |
+| | -- |
+| 12:00:00 | 2.2 |
+| 12:00:30 | 2.6 |
+| 12:01:00 | 1.4 |
+| 12:01:30 | 1.7 |
+| 12:02:00 | 2.0 |
-### Align mode
+Now the merged table is more reasonable.
-The parameter `alignMode` is used to indicate how you want to align multiple time series on time stamps. This is because many time series have missing values and we need to align them on the same time stamps before further processing. There are two options for this parameter, `inner join` and `outer join`. `inner join` means we will report detection results on timestamps on which **every time series** has a value, while `outer join` means we will report detection results on time stamps for **any time series** that has a value. **The `alignMode` will also affect the input sequence of the model**, so choose a suitable `alignMode` for your scenario because the results might be significantly different.
+| timestamp | Variable-1 | Variable-2 |
+| | -- | -- |
+| 12:00:00 | 1.0 | 2.2 |
+| 12:00:30 | 1.5 | 2.6 |
+| 12:01:00 | 0.9 | 1.4 |
+| 12:01:30 | 2.2 | 1.7 |
+| 12:02:00 | 1.3 | 2.0 |
-Here we show an example to explain different `alignModel` values.
+Values of different variables at close timestamps are well aligned, and the MVAD model can now extract correlation information.
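+
+If you pre-process the data programmatically, the rounding and merging could look like the minimal sketch below (assuming pandas is available; the dates and column names are illustrative only, not part of the MVAD APIs).
+
+```python
+import pandas as pd
+
+# Raw sensor readings; timestamps are roughly, but not exactly, 30 seconds apart.
+variable_1 = pd.DataFrame({
+    "timestamp": pd.to_datetime(["2021-01-01 12:00:01", "2021-01-01 12:00:35", "2021-01-01 12:01:02"]),
+    "value": [1.0, 1.5, 0.9],
+})
+variable_2 = pd.DataFrame({
+    "timestamp": pd.to_datetime(["2021-01-01 12:00:03", "2021-01-01 12:00:37", "2021-01-01 12:01:09"]),
+    "value": [2.2, 2.6, 1.4],
+})
+
+# Round every timestamp to the nearest 30-second boundary before merging.
+for df in (variable_1, variable_2):
+    df["timestamp"] = df["timestamp"].dt.round("30s")
+
+# Outer join on the aligned timestamps; readings taken at close times now share a row.
+merged = variable_1.merge(
+    variable_2, on="timestamp", how="outer", suffixes=("_variable_1", "_variable_2")
+).sort_values("timestamp")
+print(merged)
+```
+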
-#### Series1
+## Common pitfalls
-|timestamp | value|
--| --|
-|`2020-11-01`| 1
-|`2020-11-02`| 2
-|`2020-11-04`| 4
-|`2020-11-05`| 5
+Apart from the [error code table](./troubleshoot.md), we've learned from customers like you about some common pitfalls when using the MVAD APIs. The following table will help you avoid these issues.
-#### Series2
+| Pitfall | Consequence |Explanation and solution |
+| --------- | ---------- | ---------- |
+| Timestamps in training data and/or inference data were not rounded up to align with the respective data frequency of each variable. | The timestamps of the inference results are not as expected: either too few timestamps or too many timestamps. | Please refer to [Timestamp round-up](#timestamp-round-up). |
+| Too many anomalous data points in the training data | Model accuracy is impacted negatively because it treats anomalous data points as normal patterns during training. | Empirically, keeping the abnormal rate at or below **1%** helps. |
+| Too little training data | Model accuracy is compromised. | Empirically, training an MVAD model requires 15,000 or more data points (timestamps) per variable to maintain good accuracy.|
+| Taking all data points with `isAnomaly`=`true` as anomalies | Too many false positives | You should use both `isAnomaly` and `severity` (or `score`) to sift out anomalies that are not severe and (optionally) use grouping to check the duration of the anomalies to suppress random noise. Please refer to the [FAQ](#faq) section below for the difference between `severity` and `score`. |
+| Sub-folders are zipped into the data file for training or inference. | The csv data files inside sub-folders are ignored during training and/or inference. | No sub-folders are allowed in the zip file. Please refer to [Folder structure](#folder-structure) for details. |
+| Too much data in the inference data file: for example, compressing all historical data in the inference data zip file | You may not see any errors but you'll experience degraded performance when you try to upload the zip file to Azure Blob as well as when you try to run inference. | Please refer to [Data quantity](#data-quantity) for details. |
+| Creating Anomaly Detector resources on Azure regions that don't support MVAD yet and calling MVAD APIs | You will get a "resource not found" error while calling the MVAD APIs. | During the preview stage, MVAD is available in limited regions only. Please bookmark [What's new in Anomaly Detector](../whats-new.md) to keep up to date with MVAD region roll-outs. You could also file a GitHub issue or contact us at AnomalyDetector@microsoft.com to request specific regions. |
-timestamp | value
- | -
-`2020-11-01`| 1
-`2020-11-02`| 2
-`2020-11-03`| 3
-`2020-11-04`| 4
+## FAQ
-#### Inner join two series
-
-timestamp | Series1 | Series2
--| - | -
-`2020-11-01`| 1 | 1
-`2020-11-02`| 2 | 2
-`2020-11-04`| 4 | 4
+### How does MVAD sliding window work?
-#### Outer join two series
+Let's use two examples to learn how MVAD's sliding window works. Suppose you have set `slidingWindow` = 1,440, and your input data is at one-minute granularity.
-timestamp | series1 | series2
- | - | -
-`2020-11-01`| 1 | 1
-`2020-11-02`| 2 | 2
-`2020-11-03`| NA | 3
-`2020-11-04`| 4 | 4
-`2020-11-05`| 5 | NA
+* **Streaming scenario**: You want to predict whether the ONE data point at "2021-01-02T00:00:00Z" is anomalous. Your `startTime` and `endTime` will be the same value ("2021-01-02T00:00:00Z"). Your inference data source, however, must contain at least 1,440 + 1 timestamps, because MVAD uses the leading data before the target data point ("2021-01-02T00:00:00Z") to decide whether the target is an anomaly. The length of the needed leading data is `slidingWindow`, or 1,440 in this case. 1,440 = 60 * 24, so your input data must start at or before "2021-01-01T00:00:00Z".
-### Fill not available (NA)
+* **Batch scenario**: You have multiple target data points to predict. Your `endTime` will be greater than your `startTime`. Inference in such scenarios is performed in a "moving window" manner. For example, MVAD will use data from `2021-01-01T00:00:00Z` to `2021-01-01T23:59:00Z` (inclusive) to determine whether data at `2021-01-02T00:00:00Z` is anomalous. Then it moves forward and uses data from `2021-01-01T00:01:00Z` to `2021-01-02T00:00:00Z` (inclusive) to determine whether data at `2021-01-02T00:01:00Z` is anomalous. It moves on in the same manner (taking 1,440 data points to compare) until the last timestamp specified by `endTime` (or the actual latest timestamp). Therefore, your inference data source must contain data starting from `startTime` - `slidingWindow`, and ideally contains a total of `slidingWindow` + (`endTime` - `startTime`) data points; see the sketch after this list.
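+
+The following is a minimal sketch of that arithmetic, assuming one-minute granularity and `slidingWindow` = 1,440; the helper name is illustrative only.
+
+```python
+from datetime import datetime, timedelta
+
+SLIDING_WINDOW = 1440                 # leading data points required before each target point
+GRANULARITY = timedelta(minutes=1)    # input data is at one-minute granularity
+
+def required_data_start(start_time: datetime) -> datetime:
+    """Earliest timestamp the inference data source must contain for a given startTime."""
+    return start_time - SLIDING_WINDOW * GRANULARITY
+
+# Streaming scenario: startTime == endTime == 2021-01-02T00:00:00Z
+print(required_data_start(datetime(2021, 1, 2)))        # 2021-01-01 00:00:00
+
+# Batch scenario: the total size suggested above is slidingWindow + (endTime - startTime).
+start, end = datetime(2021, 1, 2), datetime(2021, 1, 3)
+total_points = SLIDING_WINDOW + int((end - start) / GRANULARITY)
+print(total_points)                                     # 2880 data points
+```
+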
-After variables are aligned on timestamp by outer join, there might be some `Not Available` (`NA`) value in some of the variables. You can specify method to fill this NA value. The options for the `fillNAMethod` are `Linear`, `Previous`, `Subsequent`, `Zero`, and `Fixed`.
+### Why are only zip files accepted for training and inference?
-| Option | Method |
-| - | -|
-| Linear | Fill NA values by linear interpolation |
-| Previous | Propagate last valid value to fill gaps. Example: `[1, 2, nan, 3, nan, 4]` -> `[1, 2, 2, 3, 3, 4]` |
-| Subsequent | Use next valid value to fill gaps. Example: `[1, 2, nan, 3, nan, 4]` -> `[1, 2, 3, 3, 4, 4]` |
-| Zero | Fill NA values with 0. |
-| Fixed | Fill NA values with a specified valid value that should be provided in `paddingValue`. |
+We use zip files because, in batch scenarios, we expect the training and inference data to be very large and unable to fit in the HTTP request body. Zip files also allow users to perform batch inference on historical data, either for model validation or for data analysis.
-## Model analysis
+However, this might be somewhat inconvenient for streaming inference and for high-frequency data. We plan to add a new API designed specifically for streaming inference that lets users pass data in the request body.
-### Training latency
+### What's the difference between `severity` and `score`?
-Multivariate Anomaly Detection training can be time-consuming. Especially when you have a large quantity of timestamps used for training. Therefore, we allow part of the training process to be asynchronous. Typically, users submit train task through Train Model API. Then get model status through the `Get Multivariate Model API`. Here we demonstrate how to extract the remaining time before training completes. In the Get Multivariate Model API response, there is an item named `diagnosticsInfo`. In this item, there is a `modelState` element. To calculate the remaining time, we need to use `epochIds` and `latenciesInSeconds`. An epoch represents one complete cycle through the training data. Every 10 epochs, we will output status information. In total, we will train for 100 epochs, the latency indicates how long an epoch takes. With this information, we know remaining time left to train the model.
+Normally we recommend using `severity` as the filter to sift out 'anomalies' that are not important to your business. Depending on your scenario and data pattern, less important anomalies often have relatively lower `severity` values, or standalone (discontinuous) high `severity` values like random spikes.
-### Model performance
+If you find you need more sophisticated rules than thresholds on `severity` or on the duration of continuously high `severity` values, you may want to use `score` to build more powerful filters. Understanding how MVAD uses `score` to determine anomalies may help:
-Multivariate Anomaly Detection, as an unsupervised model. The best way to evaluate it is to check the anomaly results manually. In the Get Multivariate Model response, we provide some basic info for us to analyze model performance. In the `modelState` element returned by the Get Multivariate Model API, we can use `trainLosses` and `validationLosses` to evaluate whether the model has been trained as expected. In most cases, the two losses will decrease gradually. Another piece of information for us to analyze model performance against is in `variableStates`. The variables state list is ranked by `filledNARatio` in descending order. The larger the worse our performance, usually we need to reduce this `NA ratio` as much as possible. `NA` could be caused by missing values or unaligned variables from a timestamp perspective.
+We consider whether a data point is anomalous from both a global and a local perspective. If `score` at a timestamp is higher than a certain threshold, the timestamp is marked as an anomaly. If `score` is lower than the threshold but relatively high within its segment, it is also marked as an anomaly.
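+
+As a hypothetical example of such a filter (the field names follow the inference response format; the thresholds are arbitrary and should be tuned to your data), you could keep only anomalies that are severe enough and persist for several consecutive timestamps:
+
+```python
+def filter_anomalies(results, min_severity=0.3, min_consecutive=3):
+    """Keep anomalies whose severity passes a threshold and which last for several consecutive timestamps."""
+    flags = [
+        bool(r.get("value"))
+        and r["value"].get("isAnomaly", False)
+        and r["value"].get("severity", 0) >= min_severity
+        for r in results
+    ]
+    kept, run = [], []
+    for result, flagged in zip(results, flags):
+        if flagged:
+            run.append(result)           # extend the current run of severe anomalies
+            continue
+        if len(run) >= min_consecutive:  # keep runs that are long enough to matter
+            kept.extend(run)
+        run = []
+    if len(run) >= min_consecutive:      # don't drop a run that reaches the end
+        kept.extend(run)
+    return kept
+```
+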
## Next steps -- [Quickstarts](../quickstarts/client-libraries-multivariate.md).-- [Learn about the underlying algorithms that power Anomaly Detector Multivariate](https://arxiv.org/abs/2009.02040)
+* [Quickstarts: Use the Anomaly Detector multivariate client library](../quickstarts/client-libraries-multivariate.md).
+* [Learn about the underlying algorithms that power Anomaly Detector Multivariate](https://arxiv.org/abs/2009.02040)
cognitive-services Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/concepts/troubleshoot.md
This article provides guidance on how to troubleshoot and remediate common error
| Error Code | HTTP Error Code | Error Message | Comment | | -- | | - | | | `SubscriptionNotInHeaders` | 400 | apim-subscription-id is not found in headers | Please add your APIM subscription ID in the header. Example header: `{"apim-subscription-id": <Your Subscription ID>}` |
-| `FileNotExist` | 400 | File <source> does not exist. | Please check the validity of your blob shared access signature (SAS). Make sure that it has not expired. |
+| `FileNotExist` | 400 | File \<source> does not exist. | Please check the validity of your blob shared access signature (SAS). Make sure that it has not expired. |
| `InvalidBlobURL` | 400 | | Your blob shared access signature (SAS) is not a valid SAS. | | `StorageWriteError` | 403 | | This error is possibly caused by permission issues. Our service is not allowed to write the data to the blob encrypted by a Customer Managed Key (CMK). Either remove CMK or grant access to our service again. Please refer to [this page](/azure/cognitive-services/encryption/cognitive-services-encryption-keys-portal) for more details. | | `StorageReadError` | 403 | | Same as `StorageWriteError`. |
This article provides guidance on how to troubleshoot and remediate common error
| `ModelNotExist` | 404 | The model does not exist. | The model used for inference does not exist. Please check the model ID in the request URL. | | `ModelFailed` | 400 | Model failed to be trained. | The model is not successfully trained. Please get detailed information by getting the model with model ID. | | `ModelNotReady` | 400 | The model is not ready yet. | The model is not ready yet. Please wait for a while until the training process completes. |
-| `InvalidFileSize` | 413 | File <file> exceeds the file size limit (<size limit> bytes). | The size of inference data exceeds the upper limit (2GB currently). Please use less data for inference. |
+| `InvalidFileSize` | 413 | File \<file> exceeds the file size limit (\<size limit> bytes). | The size of inference data exceeds the upper limit (2GB currently). Please use less data for inference. |
#### Get Detection Results
The following error codes do not have associated HTTP Error codes.
| | | | | `NoVariablesFound` | No variables found. Please check that your files are organized as per instruction. | No csv files could be found from the data source. This is typically caused by wrong organization of files. Please refer to the sample data for the desired structure. | | `DuplicatedVariables` | There are multiple variables with the same name. | There are duplicated variable names. |
-| `FileNotExist` | File <filename> does not exist. | This error usually happens during inference. The variable has appeared in the training data but is missing in the inference data. |
-| `RedundantFile` | File <filename> is redundant. | This error usually happens during inference. The variable was not in the training data but appeared in the inference data. |
-| `FileSizeTooLarge` | The size of file <filename> is too large. | The size of the single csv file <filename> exceeds the limit. Please train with less data. |
-| `ReadingFileError` | Errors occurred when reading <filename>. <error messages> | Failed to read the file <filename>. You may refer to <error messages> for more details or verify with `pd.read_csv(filename)` in a local environment. |
-| `FileColumnsNotExist` | Columns timestamp or value in file <filename> do not exist. | Each csv file must have two columns with names **timestamp** and **value** (case sensitive). |
-| `VariableParseError` | Variable <variable> parse <error message> error. | Cannot process the <variable> due to runtime errors. Please refer to the <error message> for more details or contact us with the <error message>. |
+| `FileNotExist` | File \<filename> does not exist. | This error usually happens during inference. The variable has appeared in the training data but is missing in the inference data. |
+| `RedundantFile` | File \<filename> is redundant. | This error usually happens during inference. The variable was not in the training data but appeared in the inference data. |
+| `FileSizeTooLarge` | The size of file \<filename> is too large. | The size of the single csv file \<filename> exceeds the limit. Please train with less data. |
+| `ReadingFileError` | Errors occurred when reading \<filename>. \<error messages> | Failed to read the file \<filename>. You may refer to \<error messages> for more details or verify with `pd.read_csv(filename)` in a local environment. |
+| `FileColumnsNotExist` | Columns timestamp or value in file \<filename> do not exist. | Each csv file must have two columns with names **timestamp** and **value** (case sensitive). |
+| `VariableParseError` | Variable \<variable> parse \<error message> error. | Cannot process the \<variable> due to runtime errors. Please refer to the \<error message> for more details or contact us with the \<error message>. |
| `MergeDataFailed` | Failed to merge data. Please check data format. | Data merge failed. This is possibly due to wrong data format, organization of files, etc. Please refer to the sample data for the current file structure. |
-| `ColumnNotFound` | Column <column> cannot be found in the merged data. | A column is missing after merge. Please verify the data. |
+| `ColumnNotFound` | Column \<column> cannot be found in the merged data. | A column is missing after merge. Please verify the data. |
| `NumColumnsMismatch` | Number of columns of merged data does not match the number of variables. | Please verify the data. | | `TooManyData` | Too many data points. Maximum number is 1000000 per variable. | Please reduce the size of input data. | | `NoData` | There is no effective data | There is no data to train/inference after processing. Please check the start time and end time. |
-| `DataExceedsLimit` | The length of data whose timestamp is between `startTime` and `endTime` exceeds limit(<limit>). | The size of data after processing exceeds the limit. (Currently no limit on processed data.) |
-| `NotEnoughInput` | Not enough data. The length of data is <data length>, but the minimum length should be larger than sliding window which is <sliding window size>. | The minimum number of data points for inference is the size of sliding window. Try to provide more data for inference. |
+| `DataExceedsLimit` | The length of data whose timestamp is between `startTime` and `endTime` exceeds limit(\<limit>). | The size of data after processing exceeds the limit. (Currently no limit on processed data.) |
+| `NotEnoughInput` | Not enough data. The length of data is \<data length>, but the minimum length should be larger than sliding window which is \<sliding window size>. | The minimum number of data points for inference is the size of sliding window. Try to provide more data for inference. |
cognitive-services Overview Multivariate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/overview-multivariate.md
See the following technical documents for information about the algorithms used:
## Next steps
+- [Tutorial](./tutorials/learn-multivariate-anomaly-detection.md): This article is an end-to-end tutorial of how to use the multivariate APIs.
- [Quickstarts](./quickstarts/client-libraries-multivariate.md).-- [Best Practices](./concepts/best-practices-multivariate.md): This article is about recommended patterns to use with the multivariate APIs.
+- [Best Practices](./concepts/best-practices-multivariate.md): This article is about recommended patterns to use with the multivariate APIs.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/overview.md
No customer configuration is necessary to enable zone-resiliency. Zone-resilienc
## Join the Anomaly Detector community * Join the [Anomaly Detector Advisors group on Microsoft Teams](https://aka.ms/AdAdvisorsJoin)
-* See selected [user generated content](user-generated-content.md)
## Next steps
cognitive-services Batch Anomaly Detection Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/tutorials/batch-anomaly-detection-powerbi.md
After clicking **Ok**, you will have a `Value for True` field, at the bottom of
Apply colors to your chart by clicking on the **Format** tool and **Data colors**. Your chart should look something like the following: ![An image of the final chart](../media/tutorials/final-chart.png)-
-## Next steps
-
-> [!div class="nextstepaction"]
->[Streaming anomaly detection with Azure Databricks](../overview.md)
cognitive-services Learn Multivariate Anomaly Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/tutorials/learn-multivariate-anomaly-detection.md
+
+ Title: "Tutorial: Learn Multivariate Anomaly Detection in one hour"
+
+description: An end-to-end tutorial of multivariate anomaly detection.
++++++ Last updated : 06/27/2021+++
+# Tutorial: Learn Multivariate Anomaly Detection in one hour
+
+Anomaly Detector with Multivariate Anomaly Detection (MVAD) is an advanced AI tool for detecting anomalies from a group of metrics in an **unsupervised** manner.
+
+In general, you could take these steps to use MVAD:
+
+ 1. Create an Anomaly Detector resource that supports MVAD on Azure.
+ 1. Prepare your data.
+ 1. Train an MVAD model.
+ 1. Query the status of your model.
+ 1. Detect anomalies with the trained MVAD model.
+ 1. Retrieve and interpret the inference results.
+
+In this tutorial, you'll:
+
+> [!div class="checklist"]
+> * Understand how to prepare your data in a correct format.
+> * Understand how to train and inference with MVAD.
+> * Understand the input parameters and how to interpret the output in inference results.
+
+## 1. Create an Anomaly Detector resource that supports MVAD
+
+* Create an Azure subscription if you don't have one - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
+* Once you have your Azure subscription, [create an Anomaly Detector resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesAnomalyDetector) in the Azure portal to get your API key and API endpoint.
+
+> [!NOTE]
+> During the preview stage, MVAD is available in limited regions only. Please bookmark [What's new in Anomaly Detector](../whats-new.md) to keep up to date with MVAD region roll-outs. You could also file a GitHub issue or contact us at [AnomalyDetector@microsoft.com](mailto:AnomalyDetector@microsoft.com) to request specific regions.
+
+## 2. Data preparation
+
+Then you need to prepare your training data (and inference data).
++
+### Tools for zipping and uploading data
+
+In this section, we share sample code and tools that you can copy and edit to add to your own application logic that deals with MVAD input data.
+
+#### Compressing CSV files in \*nix
+
+```bash
+zip -j series.zip series/*.csv
+```
+
+#### Compressing CSV files in Windows
+
+* Navigate *into* the folder with all the CSV files.
+* Select all the CSV files you need.
+* Right click on one of the CSV files and select `Send to`.
+* Select `Compressed (zipped) folder` from the drop-down.
+* Rename the zip file as needed.
+
+#### Python code zipping & uploading data to Azure Blob Storage
+
+You could refer to [this doc](/azure/storage/blobs/storage-quickstart-blobs-portal.md#upload-a-block-blob) to learn how to upload a file to Azure Blob.
+
+Or, you could refer to the sample code below that can do the zipping and uploading for you. You could copy and save the Python code in this section as a .py file (for example, `zipAndUpload.py`) and run it using command lines like these:
+
+* `python zipAndUpload.py -s "foo\bar" -z test123.zip -c {azure blob connection string} -n container_xxx`
+
+ This command will compress all the CSV files in `foo\bar` into a single zip file named `test123.zip`. It will upload `test123.zip` to the container `container_xxx` in your blob.
+* `python zipAndUpload.py -s "foo\bar" -z test123.zip -c {azure blob connection string} -n container_xxx -r`
+
+ This command will do the same thing as the above, but it will delete the zip file `test123.zip` after uploading successfully.
+
+Arguments:
+
+* `--source-folder`, `-s`, path to the source folder containing CSV files
+* `--zipfile-name`, `-z`, name of the zip file
+* `--connection-string`, `-c`, connection string to your blob
+* `--container-name`, `-n`, name of the container
+* `--remove-zipfile`, `-r`, if on, remove the zip file
+
+```python
+import os
+import argparse
+import shutil
+import sys
+
+from azure.storage.blob import BlobClient
+import zipfile
++
+class ZipError(Exception):
+ pass
++
+class UploadError(Exception):
+ pass
++
+def zip_file(root, name):
+ try:
+ z = zipfile.ZipFile(name, "w", zipfile.ZIP_DEFLATED)
+ for f in os.listdir(root):
+ if f.endswith("csv"):
+ z.write(os.path.join(root, f), f)
+ z.close()
+ print("Compress files success!")
+ except Exception as ex:
+ raise ZipError(repr(ex))
++
+def upload_to_blob(file, conn_str, cont_name, blob_name):
+ try:
+ blob_client = BlobClient.from_connection_string(conn_str, container_name=cont_name, blob_name=blob_name)
+ with open(file, "rb") as f:
+ blob_client.upload_blob(f, overwrite=True)
+ print("Upload Success!")
+ except Exception as ex:
+ raise UploadError(repr(ex))
++
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--source-folder", "-s", type=str, required=True, help="path to source folder")
+ parser.add_argument("--zipfile-name", "-z", type=str, required=True, help="name of the zip file")
+ parser.add_argument("--connection-string", "-c", type=str, help="connection string")
+ parser.add_argument("--container-name", "-n", type=str, help="container name")
+ parser.add_argument("--remove-zipfile", "-r", action="store_true", help="whether delete the zip file after uploading")
+ args = parser.parse_args()
+
+ try:
+ zip_file(args.source_folder, args.zipfile_name)
+ upload_to_blob(args.zipfile_name, args.connection_string, args.container_name, args.zipfile_name)
+ except ZipError as ex:
+ print(f"Failed to compress files. {repr(ex)}")
+ sys.exit(-1)
+ except UploadError as ex:
+ print(f"Failed to upload files. {repr(ex)}")
+ sys.exit(-1)
+ except Exception as ex:
+ print(f"Exception encountered. {repr(ex)}")
+
+ try:
+ if args.remove_zipfile:
+ os.remove(args.zipfile_name)
+ except Exception as ex:
+ print(f"Failed to delete the zip file. {repr(ex)}")
+```
+
+## 3. Train an MVAD Model
+
+Here is a sample request body and the sample code in Python to train an MVAD model.
+
+```json
+// Sample Request Body
+{
+ "slidingWindow": 200,
+ "alignPolicy": {
+ "alignMode": "Outer",
+ "fillNAMethod": "Linear",
+ "paddingValue": 0
+ },
+ "source": "YOUR_SAMPLE_ZIP_FILE_LOCATED_IN_AZURE_BLOB_STORAGE_WITH_SAS",
+ "startTime": "2021-01-01T00:00:00Z",
+ "endTime": "2021-01-02T12:00:00Z",
+ "displayName": "Contoso model"
+}
+```
+
+```python
+# Sample Code in Python
+########### Python 3.x #############
+import http.client, urllib.request, urllib.parse, urllib.error, base64
+
+headers = {
+ # Request headers
+ 'Content-Type': 'application/json',
+ 'Ocp-Apim-Subscription-Key': '{API key}',
+}
+
+params = urllib.parse.urlencode({})
+
+try:
+ conn = http.client.HTTPSConnection('{endpoint}')
+ conn.request("POST", "/anomalydetector/v1.1-preview/multivariate/models?%s" % params, "{request body}", headers)
+ response = conn.getresponse()
+ data = response.read()
+ print(data)
+ conn.close()
+except Exception as e:
+    print("Request failed: {}".format(e))
+
+####################################
+```
+
+Response code `201` indicates a successful request.
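+
+The sample above only prints the raw response. If you also want to capture the ID of the newly created model for the following steps, a minimal sketch (assuming the `requests` package and that the service returns the new model's URL in the `Location` response header; the endpoint and key are placeholders) might look like this:
+
+```python
+import requests
+
+ENDPOINT = "https://{endpoint}"   # placeholder: your Anomaly Detector endpoint
+API_KEY = "{API key}"             # placeholder: your API key
+
+request_body = {
+    "slidingWindow": 200,
+    "alignPolicy": {"alignMode": "Outer", "fillNAMethod": "Linear", "paddingValue": 0},
+    "source": "YOUR_SAMPLE_ZIP_FILE_LOCATED_IN_AZURE_BLOB_STORAGE_WITH_SAS",
+    "startTime": "2021-01-01T00:00:00Z",
+    "endTime": "2021-01-02T12:00:00Z",
+    "displayName": "Contoso model",
+}
+
+response = requests.post(
+    f"{ENDPOINT}/anomalydetector/v1.1-preview/multivariate/models",
+    json=request_body,
+    headers={"Ocp-Apim-Subscription-Key": API_KEY},
+)
+response.raise_for_status()  # expect 201 on success
+
+# The URL of the new model is expected in the Location header; its last
+# segment is the model ID used by the status and inference calls below.
+location = response.headers.get("Location", "")
+model_id = location.rstrip("/").split("/")[-1]
+print(model_id)
+```
+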
++
+## 4. Get model status
+
+As the training API is asynchronous, you won't get the model immediately after calling the training API. However, you can query the status of models either by API key, which will list all the models, or by model ID, which will list information about the specific model.
+
+### List all the models
+
+You may refer to [this page](https://westus2.dev.cognitive.microsoft.com/docs/services/AnomalyDetector-v1-1-preview/operations/ListMultivariateModel) for information about the request URL and request headers. Notice that we only return 10 models ordered by update time, but you can retrieve other models by setting the `$skip` and `$top` parameters in the request URL. For example, if your request URL is `https://{endpoint}/anomalydetector/v1.1-preview/multivariate/models?$skip=10&$top=20`, then we will skip the latest 10 models and return the next 20 models.
+
+A sample response is
+
+```json
+{
+ "models": [
+ {
+ "createdTime":"2020-12-01T09:43:45Z",
+ "displayName":"DevOps-Test",
+ "lastUpdatedTime":"2020-12-01T09:46:13Z",
+ "modelId":"b4c1616c-33b9-11eb-824e-0242ac110002",
+ "status":"READY",
+ "variablesCount":18
+ },
+ {
+ "createdTime":"2020-12-01T09:43:30Z",
+ "displayName":"DevOps-Test",
+ "lastUpdatedTime":"2020-12-01T09:45:10Z",
+ "modelId":"ab9d3e30-33b9-11eb-a3f4-0242ac110002",
+ "status":"READY",
+ "variablesCount":18
+ }
+ ],
+ "currentCount": 1,
+ "maxCount": 50,
+ "nextLink": "<link to more models>"
+}
+```
+
+The response contains 4 fields, `models`, `currentCount`, `maxCount`, and `nextLink`.
+
+* `models` contains the created time, last updated time, model ID, display name, variable counts, and the status of each model.
+* `currentCount` contains the number of trained multivariate models.
+* `maxCount` is the maximum number of models supported by this Anomaly Detector resource.
+* `nextLink` could be used to fetch more models; see the sketch after this list.
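+
+As a sketch of how these fields might be consumed (assuming the `requests` package; the endpoint and key are placeholders), you could page through all models by combining `$skip`/`$top` with `nextLink`:
+
+```python
+import requests
+
+ENDPOINT = "https://{endpoint}"   # placeholder: your Anomaly Detector endpoint
+API_KEY = "{API key}"             # placeholder: your API key
+HEADERS = {"Ocp-Apim-Subscription-Key": API_KEY}
+
+def list_all_models():
+    """Yield every trained model, following nextLink from page to page."""
+    url = f"{ENDPOINT}/anomalydetector/v1.1-preview/multivariate/models?$skip=0&$top=10"
+    while url:
+        page = requests.get(url, headers=HEADERS).json()
+        models = page.get("models", [])
+        if not models:
+            break                     # no more models to enumerate
+        for model in models:
+            yield model
+        url = page.get("nextLink")    # empty once the last page has been reached
+
+for model in list_all_models():
+    print(model["modelId"], model["status"], model["displayName"])
+```
+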
+
+### Get models by model ID
+
+[This page](https://westus2.dev.cognitive.microsoft.com/docs/services/AnomalyDetector-v1-1-preview/operations/GetMultivariateModel) describes the request URL to query model information by model ID. A sample response looks like this
+
+```json
+{
+ "modelId": "45aad126-aafd-11ea-b8fb-d89ef3400c5f",
+ "createdTime": "2020-06-30T00:00:00Z",
+ "lastUpdatedTime": "2020-06-30T00:00:00Z",
+ "modelInfo": {
+ "slidingWindow": 300,
+ "alignPolicy": {
+ "alignMode": "Outer",
+ "fillNAMethod": "Linear",
+ "paddingValue": 0
+ },
+ "source": "<TRAINING_ZIP_FILE_LOCATED_IN_AZURE_BLOB_STORAGE_WITH_SAS>",
+ "startTime": "2019-04-01T00:00:00Z",
+ "endTime": "2019-04-02T00:00:00Z",
+ "displayName": "Devops-MultiAD",
+ "status": "READY",
+ "errors": [],
+ "diagnosticsInfo": {
+ "modelState": {
+ "epochIds": [10, 20, 30, 40, 50, 60, 70, 80, 90, 100],
+ "trainLosses": [0.6291328072547913, 0.1671326905488968, 0.12354248017072678, 0.1025966405868533,
+ 0.0958492755889896, 0.09069952368736267,0.08686016499996185, 0.0860302299260931,
+ 0.0828735455870684, 0.08235538005828857],
+ "validationLosses": [1.9232804775238037, 1.0645641088485718, 0.6031560301780701, 0.5302737951278687,
+ 0.4698025286197664, 0.4395163357257843, 0.4182931482799006, 0.4057914316654053,
+ 0.4056498706340729, 0.3849248886108984],
+ "latenciesInSeconds": [0.3398594856262207, 0.3659665584564209, 0.37360644340515137,
+ 0.3513407707214355, 0.3370304107666056, 0.31876277923583984,
+ 0.3283309936523475, 0.3503587245941162, 0.30800247192382812,
+ 0.3327946662902832]
+ },
+ "variableStates": [
+ {
+ "variable": "ad_input",
+ "filledNARatio": 0,
+ "effectiveCount": 1441,
+ "startTime": "2019-04-01T00:00:00Z",
+ "endTime": "2019-04-02T00:00:00Z",
+ "errors": []
+ },
+ {
+ "variable": "ad_ontimer_output",
+ "filledNARatio": 0,
+ "effectiveCount": 1441,
+ "startTime": "2019-04-01T00:00:00Z",
+ "endTime": "2019-04-02T00:00:00Z",
+ "errors": []
+ },
+ // More variables
+ ]
+ }
+ }
+ }
+```
+
+You will receive more detailed information about the queried model. The response contains meta information about the model, its training parameters, and diagnostic information. Diagnostic Information is useful for debugging and tracing training progress.
+
+* `epochIds` indicates how many epochs the model has been trained for, out of a total of 100 epochs. For example, if the model is still in training status, `epochIds` might be `[10, 20, 30, 40, 50]`, which means that it has completed its 50th training epoch and is halfway through.
+* `trainLosses` and `validationLosses` are used to check whether the optimization progress converges in which case the two losses should decrease gradually.
+* `latenciesInSeconds` contains the time cost for each epoch and is recorded every 10 epochs. In this example, the 10th epoch takes approximately 0.34 seconds. This is helpful for estimating the completion time of training (see the sketch after this list).
+* `variableStates` summarizes information about each variable. It is a list ranked by `filledNARatio` in descending order. It tells how many data points are used for each variable, and `filledNARatio` tells how many points are missing. Usually we need to reduce `filledNARatio` as much as possible, because too many missing data points will degrade model accuracy.
+* Errors during data processing will be included in the `errors` field.
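+
+For example, here is a small sketch of estimating the remaining training time from `epochIds` and `latenciesInSeconds` (the values are made up but follow the shape of the sample response above):
+
+```python
+TOTAL_EPOCHS = 100   # training runs for 100 epochs in total
+
+def estimate_remaining_seconds(model_state: dict) -> float:
+    """Rough estimate of remaining training time from the epochs completed so far."""
+    completed = model_state["epochIds"][-1]                # last reported epoch
+    latencies = model_state["latenciesInSeconds"]
+    avg_latency = sum(latencies) / len(latencies)          # average seconds per epoch
+    return (TOTAL_EPOCHS - completed) * avg_latency
+
+# A model that is halfway through training; status is reported every 10 epochs.
+model_state = {
+    "epochIds": [10, 20, 30, 40, 50],
+    "latenciesInSeconds": [0.34, 0.37, 0.37, 0.35, 0.34],
+}
+print(f"~{estimate_remaining_seconds(model_state):.0f} seconds of training left")  # ~18 seconds
+```
+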
+
+## 5. Inference with MVAD
+
+To perform inference, simply provide the blob source URL of the zip file containing the inference data, the start time, and the end time.
+
+Inference is also asynchronous, so the results are not returned immediately. Notice that you need to save the link to the results from the **response header**, which contains the `resultId`, so that you know where to get the results afterwards.
+
+Failures are usually caused by model issues or data issues. You cannot perform inference if the model is not ready or if the data link is invalid. Make sure that the training data and inference data are consistent, meaning they should contain **exactly** the same variables but with different timestamps. More variables, fewer variables, or inference with a different set of variables will not pass the data verification phase, and errors will occur. Data verification is deferred, so you will get an error message only when you query the results.
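+
+A minimal sketch of submitting an inference request and saving the result link (assuming the `requests` package and that detection is requested under the model's `detect` route; the endpoint, key, model ID, and blob SAS URL are placeholders) might look like this:
+
+```python
+import requests
+
+ENDPOINT = "https://{endpoint}"   # placeholder: your Anomaly Detector endpoint
+API_KEY = "{API key}"             # placeholder: your API key
+MODEL_ID = "{model ID}"           # placeholder: ID of a trained model
+
+body = {
+    "source": "YOUR_INFERENCE_ZIP_FILE_LOCATED_IN_AZURE_BLOB_STORAGE_WITH_SAS",
+    "startTime": "2021-01-02T00:00:00Z",
+    "endTime": "2021-01-03T00:00:00Z",
+}
+
+response = requests.post(
+    f"{ENDPOINT}/anomalydetector/v1.1-preview/multivariate/models/{MODEL_ID}/detect",
+    json=body,
+    headers={"Ocp-Apim-Subscription-Key": API_KEY},
+)
+response.raise_for_status()
+
+# Save the result link from the response header; its last segment is the
+# resultId needed to fetch the detection results in the next step.
+result_link = response.headers.get("Location", "")
+result_id = result_link.rstrip("/").split("/")[-1]
+print(result_id)
+```
+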
+
+## 6. Get inference results
+
+You need the `resultId` to get results. `resultId` is obtained from the response header when you submit the inference request. [This page](https://westus2.dev.cognitive.microsoft.com/docs/services/AnomalyDetector-v1-1-preview/operations/GetDetectionResult) contains instructions to query the inference results.
+
+A sample response looks like this
+
+```json
+ {
+ "resultId": "663884e6-b117-11ea-b3de-0242ac130004",
+ "summary": {
+ "status": "READY",
+ "errors": [],
+ "variableStates": [
+ {
+ "variable": "ad_input",
+ "filledNARatio": 0,
+ "effectiveCount": 26,
+ "startTime": "2019-04-01T00:00:00Z",
+ "endTime": "2019-04-01T00:25:00Z",
+ "errors": []
+ },
+ {
+ "variable": "ad_ontimer_output",
+ "filledNARatio": 0,
+ "effectiveCount": 26,
+ "startTime": "2019-04-01T00:00:00Z",
+ "endTime": "2019-04-01T00:25:00Z",
+ "errors": []
+ },
+ // more variables
+ ],
+ "setupInfo": {
+ "source": "https://multiadsample.blob.core.windows.net/datqGY%2FvGHJXJjUgjS4DneCGl7U5omq5c%3D",
+ "startTime": "2019-04-01T00:15:00Z",
+ "endTime": "2019-04-01T00:40:00Z"
+ }
+ },
+ "results": [
+ {
+ "timestamp": "2019-04-01T00:15:00Z",
+ "errors": [
+ {
+ "code": "InsufficientHistoricalData",
+ "message": "historical data is not enough."
+ }
+ ]
+ },
+ // more results
+ {
+ "timestamp": "2019-04-01T00:20:00Z",
+ "value": {
+ "contributors": [],
+ "isAnomaly": false,
+ "severity": 0,
+ "score": 0.17805261260751692
+ }
+ },
+ // more results
+ {
+ "timestamp": "2019-04-01T00:27:00Z",
+ "value": {
+ "contributors": [
+ {
+ "contributionScore": 0.0007775013367514271,
+ "variable": "ad_ontimer_output"
+ },
+ {
+ "contributionScore": 0.0007989604079048129,
+ "variable": "ad_series_init"
+ },
+ {
+ "contributionScore": 0.0008900927229851369,
+ "variable": "ingestion"
+ },
+ {
+ "contributionScore": 0.008068144477478554,
+ "variable": "cpu"
+ },
+ {
+ "contributionScore": 0.008222036467507165,
+ "variable": "data_in_speed"
+ },
+ {
+ "contributionScore": 0.008674941549594993,
+ "variable": "ad_input"
+ },
+ {
+ "contributionScore": 0.02232242629793674,
+ "variable": "ad_output"
+ },
+ {
+ "contributionScore": 0.1583773213660846,
+ "variable": "flink_last_ckpt_duration"
+ },
+ {
+ "contributionScore": 0.9816531517495176,
+ "variable": "data_out_speed"
+ }
+ ],
+ "isAnomaly": true,
+ "severity": 0.42135109874230336,
+ "score": 1.213510987423033
+ }
+ },
+ // more results
+ ]
+ }
+```
+
+The response contains the result status, variable information, inference parameters, and inference results.
+
+* `variableStates` lists the information of each variable in the inference request.
+* `setupInfo` is the request body submitted for this inference.
+* `results` contains the detection results. There are three typical types of detection results:
+ 1. Error code `InsufficientHistoricalData`. This usually happens only with the first few timestamps because the model performs inference in a window-based manner and needs historical data to make a decision. For the first few timestamps, there is insufficient historical data, so inference cannot be performed on them. In this case, the error message can be ignored.
+ 1. `"isAnomaly": false` indicates the current timestamp is not an anomaly.
+ * `severity` indicates the relative severity of the anomaly; for normal data it is always 0.
+ * `score` is the raw output of the model on which it bases its decision; it can be non-zero even for normal data points.
+ 1. `"isAnomaly": true` indicates an anomaly at the current timestamp.
+ * `severity` indicates the relative severity of the anomaly; for abnormal data it is always greater than 0.
+ * `score` is the raw output of the model on which it bases its decision. `severity` is a value derived from `score`. Every data point has a `score`.
+ * `contributors` is a list containing the contribution score of each variable. Higher contribution scores indicate a higher likelihood of being the root cause. This list is often used for interpreting anomalies and diagnosing root causes; see the sketch after this list.
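+
+As an illustration of reading these fields, the sketch below prints the top contributors for each sufficiently severe anomaly in a `results` array like the one above (the severity threshold is arbitrary):
+
+```python
+def print_top_contributors(results, min_severity=0.3, top_k=3):
+    """Print the top contributing variables for each sufficiently severe anomaly."""
+    for result in results:
+        value = result.get("value")
+        if not value or not value.get("isAnomaly"):
+            continue                                  # skip error entries and normal points
+        if value.get("severity", 0) < min_severity:
+            continue                                  # skip anomalies that are not severe enough
+        top = sorted(value.get("contributors", []),
+                     key=lambda c: c["contributionScore"], reverse=True)[:top_k]
+        names = ", ".join(c["variable"] for c in top)
+        print(f'{result["timestamp"]}: severity={value["severity"]:.2f}, top contributors: {names}')
+```
+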
+
+> [!NOTE]
+> A common pitfall is taking all data points with `isAnomaly`=`true` as anomalies. That may end up with too many false positives.
+> You should use both `isAnomaly` and `severity` (or `score`) to sift out anomalies that are not severe and (optionally) use grouping to check the duration of the anomalies to suppress random noise.
+> Please refer to the [FAQ](../concepts/best-practices-multivariate.md#faq) in the best practices document for the difference between `severity` and `score`.
+
+## Next steps
+
+* [Best practices: Recommended practices to follow when using the multivariate Anomaly Detector APIs](../concepts/best-practices-multivariate.md)
+* [Quickstarts: Use the Anomaly Detector multivariate client library](../quickstarts/client-libraries-multivariate.md)
cognitive-services User Generated Content https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/user-generated-content.md
- Title: Featured User-generated content for the Anomaly Detector API-
-description: Find featured content and discover how other people are thinking about and using the Anomaly Detector API.
------- Previously updated : 01/22/2021--
-# Featured User-generated content for the Anomaly Detector API
-
-Use this article to discover how other customers are thinking about and using the Anomaly Detector API. The following resources were created by the community of Anomaly Detector users. They include open-source projects, and other contributions created by both Microsoft and third-party users.
-Some of the following links are hosted on websites that are external to Microsoft and Microsoft is not responsible for the content there. Use discretion when you refer to these resources.
-
-## Technical blogs
-
-* [Trying the Cognitive Service: Anomaly Detector API (in Japanese)](https://azure-recipe.kc-cloud.jp/2019/04/cognitive-service-anomaly-detector-api/)
-
-## Open-source projects
-
-* [Jupyter notebook demonstrating Anomaly Detection and streaming to Power BI](https://github.com/marvinbuss/MS-AnomalyDetector)
-
-If you'd like to nominate a resource, fill [a short form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbRxSkyhztUNZCtaivu8nmhd1UMENTMEJWTkRORkRGQUtGQzlWQ1dSV1JLTS4u).
-Contact AnomalyDetector@microsoft.com or raise an issue on GitHub if you'd like us to remove the content.
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/whats-new.md
+
+ Title: What's New - Anomaly Detector
+description: This article is regularly updated with news about the Azure Cognitive Services Anomaly Detector.
+++ Last updated : 06/23/2021++
+# What's new in Anomaly Detector
+
+Learn what's new in the service. These items include release notes, videos, blog posts, papers, and other types of information. Bookmark this page to keep up to date with the service.
+
+We've also added links to some user-generated content. Those items will be marked with **[UGC]** tag. Some of them are hosted on websites that are external to Microsoft and Microsoft is not responsible for the content there. Use discretion when you refer to these resources. Contact AnomalyDetector@microsoft.com or raise an issue on GitHub if you'd like us to remove the content.
+
+## Release notes
+
+### June 2021
+
+* Multivariate anomaly detection APIs available in more regions (West US2, West Europe, East US2, South Central US, East US, and UK South).
+* Anomaly Detector (univariate) available in Azure cloud for US Government.
+* Anomaly Detector (univariate) available in Azure China (China North 2).
+
+### April 2021
+
+* [IoT Edge module](https://azuremarketplace.microsoft.com/marketplace/apps/azure-cognitive-service.edge-anomaly-detector) (univariate) published.
+* Anomaly Detector (univariate) available in Azure China (China East 2).
+* Multivariate anomaly detection APIs preview in selected regions (West US2, West Europe).
+
+### September 2020
+
+* Anomaly Detector (univariate) generally available.
+
+### March 2019
+
+* Anomaly Detector announced preview with univariate anomaly detection support.
+
+## Technical articles
+
+* March 12, 2021 [Introducing Multivariate Anomaly Detection](https://techcommunity.microsoft.com/t5/azure-ai/introducing-multivariate-anomaly-detection/ba-p/2260679) - Technical blog on the new multivariate APIs
+* September 2020 [Multivariate Time-series Anomaly Detection via Graph Attention Network](https://arxiv.org/abs/2009.02040) - Paper on multivariate anomaly detection accepted by ICDM 2020
+* November 5, 2019 [Overview of SR-CNN algorithm in Azure Anomaly Detector](https://techcommunity.microsoft.com/t5/ai-customer-engineering-team/overview-of-sr-cnn-algorithm-in-azure-anomaly-detector/ba-p/982798) - Technical blog on SR-CNN
+* June 10, 2019 [Time-Series Anomaly Detection Service at Microsoft](https://arxiv.org/abs/1906.03821) - Paper on SR-CNN accepted by KDD 2019
+* April 25, 2019 **[UGC]** [Trying the Cognitive Service: Anomaly Detector API (in Japanese)](https://azure-recipe.kc-cloud.jp/2019/04/cognitive-service-anomaly-detector-api/)
+* April 20, 2019 [Introducing Azure Anomaly Detector API](https://techcommunity.microsoft.com/t5/ai-customer-engineering-team/introducing-azure-anomaly-detector-api/ba-p/490162) - Announcement blog
+
+## Videos
+
+* May 7, 2021 [New to Anomaly Detector: Multivariate Capabilities](https://channel9.msdn.com/Shows/AI-Show/New-to-Anomaly-Detector-Multivariate-Capabilities) - AI Show on the new multivariate anomaly detection APIs with Tony Xing and Seth Juarez
+* April 20, 2021 [AI Show Live | Episode 11| New to Anomaly Detector: Multivariate Capabilities](https://channel9.msdn.com/Shows/AI-Show/AI-Show-Live-Episode-11-Whats-new-with-Anomaly-Detector) - AI Show live recording with Tony Xing and Seth Juarez
+* May 18, 2020 [Inside Anomaly Detector](https://channel9.msdn.com/Shows/AI-Show/Inside-Anomaly-Detector) - AI Show with Qun Ying and Seth Juarez
+* September 19, 2019 **[UGC]** [Detect Anomalies in Your Data with the Anomaly Detector](https://www.youtube.com/watch?v=gfb63wvjnYQ) - Video by Jon Wood
+* September 3, 2019 [Anomaly detection on streaming data using Azure Databricks](https://channel9.msdn.com/Shows/AI-Show/Anomaly-detection-on-streaming-data-using-Azure-Databricks) - AI Show with Qun Ying
+* August 27, 2019 [Anomaly Detector v1.0 Best Practices](https://channel9.msdn.com/Shows/AI-Show/Anomaly-Detector-v10-Best-Practices) - AI Show on univariate anomaly detection best practices with Qun Ying
+* August 20, 2019 [Bring Anomaly Detector on-premises with containers support](https://channel9.msdn.com/Shows/AI-Show/Bring-Anomaly-Detector-on-premise-with-containers-support) - AI Show with Qun Ying and Seth Juarez
+* August 13, 2019 [Introducing Azure Anomaly Detector](https://channel9.msdn.com/Shows/AI-Show/Introducing-Azure-Anomaly-Detector?WT.mc_id=ai-c9-niner) - AI Show with Qun Ying and Seth Juarez
+
+## Open-source projects
+
+* June 3, 2019 **[UGC]** [Jupyter notebook demonstrating Anomaly Detection and streaming to Power BI](https://github.com/marvinbuss/MS-AnomalyDetector) - Marvin Buss
+
+## Service updates
+
+[Azure update announcements for Cognitive Services](https://azure.microsoft.com/updates/?product=cognitive-services)
cognitive-services Active Learning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Tutorials/active-learning.md
+
+ Title: Enrich your knowledge base with Active Learning
+description: In this tutorial, learn how to enrich your knowledge bases with action learning
+++++ Last updated : 06/29/2021++
+# Enrich your knowledge base with Active Learning
+
+This tutorial shows you how to enhance your knowledge base with active learning. You may notice that customers ask questions that are not part of your knowledge base, and these are often variations of existing questions that are paraphrased differently.
+
+These variations, when added as alternate questions to the relevant QnA pair, help optimize the knowledge base to answer real-world user queries. You can manually add alternate questions to QnA pairs through the editor. At the same time, you can also use the active learning feature to generate active learning suggestions based on user queries. The active learning feature, however, requires that the knowledge base receives regular user traffic to generate suggestions.
+
+## Enable Active Learning
+Active Learning is turned on by default for the Custom Question Answering feature. However, you need to manually update the Active Learning setting for QnA Maker GA. You can find more details here: [Turn on Active Learning](../how-to/use-active-learning.md#turn-on-active-learning-for-alternate-questions).
+
+To try out Active Learning suggestions, you can import the following file to the knowledge base: [SampleActiveLearning.tsv](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/knowledge-bases/SampleActiveLearning.tsv).
+For more details on importing a knowledge base, refer to [Import Knowledge Base](migrate-knowledge-base.md).
+
+## View and add/reject Active Learning Suggestions
+Once the active learning suggestions are available, they can be viewed from **View Options** > **Show active learning suggestions**.
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot with view options and show active learning suggestions outlined in red boxes]( ../media/active-learning/view-options.png) ]( ../media/active-learning/view-options.png#lightbox)
+
+Clicking on **Show active learning suggestions** enables the option to filter QnA pairs that have suggestions. If active learning is disabled or there aren't any suggestions, **Show active learning suggestions** will be disabled.
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot with filter by options highlighted in a red box]( ../media/active-learning/filter-by-suggestions.png) ]( ../media/active-learning/filter-by-suggestions.png#lightbox)
+
+We can choose to filter only those QnA pairs that have alternate questions as suggested by Active Learning, so the filtered list of QnA pairs is displayed:
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot with view help with surface pen highlighted in a red box]( ../media/active-learning/help.png) ]( ../media/active-learning/help.png#lightbox)
++
+We can now either accept these suggestions or reject them using the checkmark or cross-mark. This can be done either individually, by navigating to each QnA pair, or by using the **Accept / Reject** option at the top.
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot with option to accept or reject highlighted in red]( ../media/active-learning/accept-reject.png) ]( ../media/active-learning/accept-reject.png#lightbox)
+
+The knowledge base does not change unless we choose to add or edit the suggestions made by active learning. Finally, select **Save and train** to save the changes.
+
+> [!NOTE]
+> To check your version and service settings for active learning, refer to the article on [how to use active learning](../how-to/use-active-learning.md)
+
+## Add alternate questions using editor
+
+While active learning automatically suggests alternate questions based on the user queries hitting the knowledge base, we can also add variations of a question using the editor.
+Select the QnA pair to which the alternate question is to be added, and select **Add alternative phrasing**.
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot with add alternative phrasing highlighted in red]( ../media/active-learning/add-alternative-phrasing.png) ]( ../media/active-learning/add-alternative-phrasing.png#lightbox)
+
+Alternate questions added to the QnA pair are shown as follows:
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot with drawing with pen highlighted in red]( ../media/active-learning/draw-with-pen.png) ]( ../media/active-learning/draw-with-pen.png#lightbox)
+
+By adding alternate questions along with active learning, we further enrich the knowledge base with variations of a question that help provide the same response to similar user queries.
++
+> [!NOTE]
+> If alternate questions have many stop words, they might not impact the accuracy of responses as expected. So, if the only difference between alternate questions is in the stop words, these alternate questions are not required.
+
+The list of stop words can be found here: [List of stop words](https://github.com/Azure-Samples/azure-search-sample-dat).
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Improve the quality of responses with synonyms](adding-synonyms.md)
cognitive-services Adding Synonyms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Tutorials/adding-synonyms.md
+
+ Title: Improve the quality of responses with synonyms
+description: In this tutorial, learn how to improve response with synonyms and alternate words
+++++ Last updated : 06/29/2021++
+# Improve quality of response with synonyms
+
+This tutorial will show you how you can improve the quality of your responses by using synonyms. Let's assume that users are not getting an accurate response to their queries when they use alternate forms, synonyms, or acronyms of a word. So, they decide to improve the quality of the responses by using the [Alterations API](/rest/api/cognitiveservices-qnamaker/QnAMaker4.0/Alterations) to add synonyms for keywords.
+
+## Add synonyms using Alterations API
+
+Let's add the following words and their alterations to improve the results; a sketch of submitting them through the Alterations API follows the JSON payload below:
+
+|Word | Alterations|
+|--|--|
+| fix problems | `trouble shoot`, `trouble-shoot` |
+| whiteboard | `white-board`, `white board` |
+| bluetooth | `blue-tooth`, `blue tooth` |
+
+```json
+{
+ "wordAlterations": [
+ {
+ "alterations": [
+ "fix problems",
+ "trouble shoot",
+ "trouble-shoot",
+ ]
+ },
+ {
+ "alterations": [
+ "whiteboard",
+ "white-board",
+ "white board"
+ ]
+ },
+ {
+ "alterations": [
+ "bluetooth",
+ "blue-tooth",
+ "blue tooth"
+ ]
+ }
+ ]
+}
+
+```
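+
+A minimal sketch of submitting this payload (assuming the `requests` package and that the word alterations are replaced with a `PUT` to the QnA Maker v4.0 `alterations` route; the endpoint and key are placeholders):
+
+```python
+import requests
+
+ENDPOINT = "https://{your-qna-maker-resource}.cognitiveservices.azure.com"  # placeholder
+SUBSCRIPTION_KEY = "{subscription key}"                                     # placeholder
+
+word_alterations = {
+    "wordAlterations": [
+        {"alterations": ["fix problems", "trouble shoot", "trouble-shoot"]},
+        {"alterations": ["whiteboard", "white-board", "white board"]},
+        {"alterations": ["bluetooth", "blue-tooth", "blue tooth"]},
+    ]
+}
+
+# Replace the full set of word alterations for the QnA Maker resource.
+response = requests.put(
+    f"{ENDPOINT}/qnamaker/v4.0/alterations",
+    json=word_alterations,
+    headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
+)
+response.raise_for_status()  # a 2xx status means the alterations were accepted
+```
+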
+
+## Response after adding synonyms
+
+For the question and answer pair "Fix problems with Surface Pen" shown below, we compare the response for a query made using its synonym "trouble shoot".
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot with fix problems with Surface Pen highlighted in red]( ../media/adding-synonyms/fix-problems.png) ]( ../media/adding-synonyms/fix-problems.png#lightbox)
+
+## Response before addition of synonym
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot with confidence score of 71.82 highlighted in red]( ../media/adding-synonyms/confidence-score.png) ]( ../media/adding-synonyms/confidence-score.png#lightbox)
+
+## Response after addition of synonym
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot with a confidence score of 97 highlighted in red]( ../media/adding-synonyms/increase-score.png) ]( ../media/adding-synonyms/increase-score.png#lightbox)
+
+As you can see, when `trouble shoot` was not added as a synonym, we got a low-confidence response to the query "How to troubleshoot your Surface Pen". However, after we added `trouble shoot` as a synonym to "fix problems", we received the correct response to the query with a higher confidence score. Once these word alterations were added, the relevance of the results improved, thereby improving the user experience.
+
+> [!NOTE]
+> Synonyms are case insensitive. Synonyms also might not work as expected if you add stop words as synonyms. The list of stop words can be found here: [List of stop words](https://github.com/Azure-Samples/azure-search-sample-dat).
+
+For instance, if you add the abbreviation **IT** for Information Technology, the system might not be able to recognize Information Technology because **IT** is a stop word and is filtered out when the query is processed.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Create knowledge bases in multiple languages](multiple-languages.md)
cognitive-services Guided Conversations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Tutorials/guided-conversations.md
+
+ Title: Add guided conversations with multi-turn prompts
+description: In this tutorial learn how to make guided conversations with multi-turn prompts.
+++++ Last updated : 06/29/2021++
+# Add guided conversations with multi-turn prompts
+
+ In this tutorial, we use [Surface Pen FAQ](https://support.microsoft.com/surface/how-to-use-your-surface-pen-8a403519-cd1f-15b2-c9df-faa5aa924e98) to create a knowledge base.
+
+For this example let's assume that users are asking for additional details about the Surface Pen product, particularly how to troubleshoot their Surface Pen, but they are not getting the correct answers. So, we add more prompts to support additional scenarios and guide the users to the correct answers using multi-turn prompts.
+
+## View QnAs with context
+While creating the knowledge base for [Surface Pen FAQ](https://support.microsoft.com/surface/how-to-use-your-surface-pen-8a403519-cd1f-15b2-c9df-faa5aa924e98), we choose to enable multi-turn extraction from the source document. For more details, follow [Create multi-turn conversation from document](../how-to/multiturn-conversation.md#create-a-multi-turn-conversation-from-a-documents-structure). This lists the multi-turn prompts that are associated with QnA pairs, which can be viewed using **Show context** under **View Options**.
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot with view options show context outlined in red boxes]( ../media/guided-conversations/show-context.png) ]( ../media/guided-conversations/show-context.png#lightbox)
+
+This displays the context tree where all follow-up prompts linked to a QnA pair are shown:
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of Surface Pen FAQ page with answers to common questions]( ../media/guided-conversations/source.png) ]( ../media/guided-conversations/source.png#lightbox)
+
+## Add new QnA pair with follow-up prompts
+
+To help the user solve issues with their Surface Pen, we add follow-up prompts:
+
+1. Add a new QnA pair with two follow-up prompts
+2. Add a follow-up prompt to one of the newly added prompts
+
+**Step 1**: Add a new QnA pair with two follow-up prompts, **Check compatibility** and **Check Pen Settings**.
+Using the editor, we add a new QnA pair with a follow-up prompt by clicking on **Add QnA pair**.
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of UI with "Add QnA pair" highlighted in a red box]( ../media/guided-conversations/add-pair.png) ]( ../media/guided-conversations/add-pair.png#lightbox)
+
+A new row in **Editorial** is created where we enter the QnA pair as shown below:
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of UI with "add a follow-up prompt" highlighted in a red box]( ../media/guided-conversations/follow-up.png) ]( ../media/guided-conversations/follow-up.png#lightbox)
+
+We then add a follow-up prompt to the newly created QnA pair by choosing **Add follow-up prompt**. After clicking it, we fill in the details for the prompt as shown:
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of UI with "add a follow-up prompt" highlighted in a red box]( ../media/guided-conversations/follow-up.png) ]( ../media/guided-conversations/follow-up.png#lightbox)
+
+We provide **Check Compatibility** as the "Display text" for the prompt and try to link it to a QnA. Since no related QnA pair is available to link to the prompt when we search "Check your Surface Pen Compatibility", we create a new QnA pair by clicking on **Create new**. Once we **Save** the changes, the following screen is presented, where a new QnA pair for the follow-up prompt can be entered as shown below:
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of a question and answer about checking your surface pen compatibility]( ../media/guided-conversations/check-compatibility.png) ]( ../media/guided-conversations/check-compatibility.png#lightbox)
+
+Similarly, we add another prompt, **Check Pen Settings**, to help the user troubleshoot the Surface Pen, and add a QnA pair to it.
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of a question and answer about checking your surface pen settings]( ../media/guided-conversations/check-pen-settings.png) ]( ../media/guided-conversations/check-pen-settings.png#lightbox)
+
+**Step 2**: Add another follow-up prompt to the newly created prompt. We now add "Replace Pen tips" as a follow-up prompt to the previously created prompt "Check Pen Settings".
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of a question and answer about checking your surface pen settings with a red box around information regarding replacing pen tips]( ../media/guided-conversations/replace-pen-tips.png) ]( ../media/guided-conversations/replace-pen-tips.png#lightbox)
+
+We finally save the changes and test these prompts in the Test pane:
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of QnAMaker test pane]( ../media/guided-conversations/test-pane.png) ]( ../media/guided-conversations/test-pane.png#lightbox)
+
+For the user query **Issues with Surface Pen**, the system returns an answer and presents the newly added prompts to the user. The user then selects one of the prompts, **Check Pen Settings**, and the related answer is returned along with another prompt, **Replace Pen Tips**, which, when selected, provides the user with further information. In this way, multi-turn prompts are used to help guide the user to the desired answer.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Enrich your knowledge base with active learning](active-learning.md)
cognitive-services Multiple Languages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Tutorials/multiple-languages.md
+
+ Title: Create knowledge bases in multiple languages
+description: In this tutorial, you will learn how to create knowledge bases in multiple languages.
+++++ Last updated : 06/29/2021++
+# Create knowledge bases in multiple languages
+
+This tutorial will walk through the process of creating knowledge bases in multiple languages. We use the [Surface Pen FAQ](https://support.microsoft.com/surface/how-to-use-your-surface-pen-8a403519-cd1f-15b2-c9df-faa5aa924e98) URL to create knowledge bases in German and English. We then publish the knowledge bases and use the [GenerateAnswerAPI](/rest/api/cognitiveservices-qnamaker/QnAMaker4.0/Runtime/GenerateAnswer) to query them to get answers to FAQs in the desired language.
+
+## Create Knowledge Base in German
+
+To create knowledge bases in more than one language, the language setting must be set when the first knowledge base (KB) of the QnA service is created.
+
+> [!NOTE]
+> The option to enable adding knowledge bases in multiple languages to a service is only available as part of Custom Question Answering, which is a feature of Text Analytics.
+>
+> If you are using the GA version of QnA Maker, a separate QnA Maker resource would need to be created for each distinct language.
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of UI for Connect your QnA service to your knowledge base with add knowledge bases in multiple languages selected]( ../media/multiple-languages/add-knowledge-bases.png) ]( ../media/multiple-languages/add-knowledge-bases.png#lightbox)
+
+In **Step 2**: Enable "Add knowledge bases in multiple languages to this service" and choose **German** as the language of the KB from the "Language" drop-down list.
+Fill in the relevant details in Steps 3 and 4, and finally select **Create your KB**.
+
+At this step, QnA Maker reads the document and extracts QnA pairs from the source URL to create the knowledge base in German. The knowledge base page opens, where we can edit its contents.
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of UI with German questions and answers]( ../media/multiple-languages/german.png) ]( ../media/multiple-languages/german.png#lightbox)
+
+## Create Knowledge Base in English
+
+We now repeat the above steps with language-specific changes in Step 2 and Step 4 to create the knowledge base in English:
+1. **Step 2**: Choose **English** as the language.
+2. **Step 4**: Select a source file in the selected language to create a knowledge base in English.
+Once the knowledge base is created, we can see the relevant QnA pairs generated by QnA Maker in English, as shown below:
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of UI with English questions and answers]( ../media/multiple-languages/english.png) ]( ../media/multiple-languages/english.png#lightbox)
+
+## Publish and Query knowledge base
+
+We are now ready to publish the two knowledge bases and query them in the desired language using the [GenerateAnswer API](/rest/api/cognitiveservices-qnamaker/QnAMaker4.0/Runtime/GenerateAnswer). Once a knowledge base is published, the following page is shown, which provides the details needed to query the knowledge base.
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of the sample request details shown after publishing the knowledge base]( ../media/multiple-languages/postman.png) ]( ../media/multiple-languages/postman.png#lightbox)
+
+The language for the incoming user query can be detected with the [Language Detection API](../../text-analytics/how-tos/text-analytics-how-to-language-detection.md) and the user can call the appropriate endpoint and knowledge base depending on the detected language.
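+
+As a minimal sketch of this routing flow (the resource names, endpoints, keys, and knowledge base IDs below are placeholders, not values from this tutorial), the language of the incoming question could be detected first and the matching knowledge base queried afterwards:
+
+```python
+import requests
+
+# Placeholder values - replace with your own resources (assumptions for illustration only).
+TEXT_ANALYTICS_ENDPOINT = "https://<your-text-analytics-resource>.cognitiveservices.azure.com"
+TEXT_ANALYTICS_KEY = "<text-analytics-key>"
+QNA_RUNTIME_HOST = "https://<your-qna-resource>.azurewebsites.net"
+QNA_ENDPOINT_KEY = "<qna-endpoint-key>"
+KB_IDS = {"de": "<german-kb-id>", "en": "<english-kb-id>"}
+
+
+def detect_language(text: str) -> str:
+    """Detect the language of the user question with the Language Detection API."""
+    response = requests.post(
+        f"{TEXT_ANALYTICS_ENDPOINT}/text/analytics/v3.0/languages",
+        headers={"Ocp-Apim-Subscription-Key": TEXT_ANALYTICS_KEY},
+        json={"documents": [{"id": "1", "text": text}]},
+    )
+    response.raise_for_status()
+    return response.json()["documents"][0]["detectedLanguage"]["iso6391Name"]
+
+
+def get_answer(question: str) -> str:
+    """Route the question to the knowledge base that matches the detected language."""
+    kb_id = KB_IDS.get(detect_language(question), KB_IDS["en"])  # fall back to English
+    response = requests.post(
+        f"{QNA_RUNTIME_HOST}/qnamaker/knowledgebases/{kb_id}/generateAnswer",
+        headers={"Authorization": f"EndpointKey {QNA_ENDPOINT_KEY}"},
+        json={"question": question},
+    )
+    response.raise_for_status()
+    return response.json()["answers"][0]["answer"]
+
+
+print(get_answer("Wie verwende ich meinen Surface Pen?"))
+```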
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/overview.md
The following features are part of the Speech service. Use the links in this tab
| | [Multi-device Conversation](multi-device-conversation.md) | Connect multiple devices or clients in a conversation to send speech- or text-based messages, with easy support for transcription and translation| Yes | No | | | [Conversation Transcription](./conversation-transcription.md) | Enables real-time speech recognition, speaker identification, and diarization. It's perfect for transcribing in-person meetings with the ability to distinguish speakers. | Yes | No | | | [Create Custom Speech Models](#customize-your-speech-experience) | If you are using speech-to-text for recognition and transcription in a unique environment, you can create and train custom acoustic, language, and pronunciation models to address ambient noise or industry-specific vocabulary. | No | [Yes](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) |
+| | [Pronunciation Assessment](./how-to-pronunciation-assessment.md) | Pronunciation assessment evaluates speech pronunciation and gives speakers feedback on the accuracy and fluency of spoken audio. With pronunciation assessment, language learners can practice, get instant feedback, and improve their pronunciation so that they can speak and present with confidence. | [Yes](./how-to-pronunciation-assessment.md) | [Yes](./rest-speech-to-text.md#pronunciation-assessment-parameters) |
| [Text-to-Speech](text-to-speech.md) | Text-to-speech | Text-to-speech converts input text into human-like synthesized speech using [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md). Use neural voices, which are human-like voices powered by deep neural networks. See [Language support](language-support.md). | [Yes](./speech-sdk.md) | [Yes](#reference-docs) | | | [Create Custom Voices](#customize-your-speech-experience) | Create custom voice fonts unique to your brand or product. | No | [Yes](#reference-docs) | | [Speech Translation](speech-translation.md) | Speech translation | Speech translation enables real-time, multi-language translation of speech to your applications, tools, and devices. Use this service for speech-to-speech and speech-to-text translation. | [Yes](./speech-sdk.md) | No |
To add a Speech service resource (free or paid tier) to your Azure account:
--> It takes a few moments to deploy your new Speech resource.
-### Find keys and region
+### Find keys and location/region
-To find the keys and region of a completed deployment, follow these steps:
+To find the keys and location/region of a completed deployment, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com/) using your Microsoft account.
Other products offer speech models tuned for specific purposes like healthcare o
> [!div class="nextstepaction"] > [Get started with speech-to-text](./get-started-speech-to-text.md)
-> [Get started with text-to-speech](get-started-text-to-speech.md)
+> [Get started with text-to-speech](get-started-text-to-speech.md)
cognitive-services Cognitive Services Container Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/cognitive-services-container-support.md
Previously updated : 06/25/2021 Last updated : 07/02/2021 keywords: on-premises, Docker, container, Kubernetes #Customer intent: As a potential customer, I want to know more about how Cognitive Services provides and supports Docker containers for each service.
Azure Cognitive Services containers provide the following set of Docker containe
| Service | Container | Description | Availability | |--|--|--|--| | [Computer Vision][cv-containers] | **Read OCR** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-vision-read)) | The Read OCR container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API documentation](./computer-vision/overview-ocr.md). | Gated preview. [Request access][request-access]. |
-| [Spatial Analysis][spa-containers] | **Spatial analysis** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-vision-spatial-analysis)) | Analyzes real-time streaming video to understand spatial relationships between people, their movement, and interactions with objects in physical environments. | Gated preview. [Request access][request-access]. |
+| [Spatial Analysis][spa-containers] | **Spatial analysis** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-vision-spatial-analysis)) | Analyzes real-time streaming video to understand spatial relationships between people, their movement, and interactions with objects in physical environments. | Preview |
| [Face][fa-containers] | **Face** | Detects human faces in images, and identifies attributes, including face landmarks (such as noses and eyes), gender, age, and other machine-predicted facial features. In addition to detection, Face can check if two faces in the same image or different images are the same by using a confidence score, or compare faces against a database to see if a similar-looking or identical face already exists. It can also organize similar faces into groups, using shared visual traits. | Unavailable | | [Form Recognizer][fr-containers] | **Form Recognizer** | Form Understanding applies machine learning technology to identify and extract key-value pairs and tables from forms. | Gated preview. [Request access][request-access]. |
cognitive-services Manage Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/manage-resources.md
Previously updated : 06/14/2021 Last updated : 07/02/2021
If you need to find the name of your deleted resources, you can get a list of de
Get-AzResource -ResourceId /subscriptions/{subscriptionId}/providers/Microsoft.CognitiveServices/deletedAccounts -ApiVersion 2021-04-30 ```
+### Using the Azure CLI
+
+```azurecli-interactive
+az resource create --subscription {subscriptionID} -g {resourceGroup} -n {resourceName} --location {location} --namespace Microsoft.CognitiveServices --resource-type accounts --properties "{\"restore\": true}"
+```
## Purge a deleted resource
Remove-AzResource -ResourceId /subscriptions/{subscriptionID}/providers/Microsof
### Using the Azure CLI ```azurecli-interactive
-az resource delete /subscriptions/{subscriptionId}/providers/Microsoft.CognitiveServices/locations/{location}/resourceGroups/{resourceGroup}/deletedAccounts/{resourceName}
+az resource delete --ids /subscriptions/{subscriptionId}/providers/Microsoft.CognitiveServices/locations/{location}/resourceGroups/{resourceGroup}/deletedAccounts/{resourceName}
``` ## See also * [Create a new resource using the Azure portal](cognitive-services-apis-create-account.md) * [Create a new resource using the Azure CLI](cognitive-services-apis-create-account-cli.md) * [Create a new resource using the client library](cognitive-services-apis-create-account-client-library.md)
-* [Create a new resource using an ARM template](create-account-resource-manager-template.md)
+* [Create a new resource using an ARM template](create-account-resource-manager-template.md)
cognitive-services Data Feeds From Different Sources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/data-feeds-from-different-sources.md
Use this article to find the settings and requirements for connecting different
| **Service principal**| Store your [Service Principal](../../active-directory/develop/app-objects-and-service-principals.md) as a **credential entity** in Metrics Advisor and use it directly each time when onboarding metrics data. Only admins of Credential entity can view the credentials, but enables authorized viewers to create data feed without needing to know the credential details.| | **Service principal from key vault**|Store your [Service Principal in a Key Vault](/azure/azure-stack/user/azure-stack-key-vault-store-credentials) as a **credential entity** in Metrics Advisor and use it directly each time when onboarding metrics data. Only admins of a **credential entity** can view the credentials, but also leave viewers able to create data feed without needing to know detailed credentials. |
-## <span id ='jump1'>Create a credential entity to manage your credential in secure</span>
-You can create a **credential entity** to store credential-related information, and use it for authenticating to your data sources. You can share the credential entity to others and enable them to connect to your data sources without sharing the real credentials. It can be created in 'Adding data feed' tab or 'Credential entity' tab. After creating a credential entity for a specific authentication type, you can just choose one credential entity you created when adding new data feed, this will be helpful when creating multiple data feeds. The procedure of creating and using a credential entity is shown below:
-
-1. Select '+' to create a new credential entity in 'Adding data feed' tab (you can also create one in 'Credential entity feed' tab).
-
- ![create credential entity](media/create-credential-entity.png)
-
-2. Set the credential entity name, description (if needed), and credential type (equals to *authentication types*).
-
- ![set credential entity](media/set-credential-entity.png)
-
-3. After creating a credential entity, you can choose it when specifying authentication type.
-
- ![choose credential entity](media/choose-credential-entity.png)
-
## Data sources supported and corresponding authentication types | Data sources | Authentication Types |
The following sections specify the parameters required for all authentication ty
Sample query: ``` Kusto
- [TableName] | where [TimestampColumn] >= @IntervalStart and [TimestampColumn] < @IntervalEnd;
+ [TableName] | where [TimestampColumn] >= datetime(@IntervalStart) and [TimestampColumn] < datetime(@IntervalEnd);
``` You can also refer to the [Tutorial: Write a valid query](tutorials/write-a-valid-query.md) for more specific examples.
The following sections specify the parameters required for all authentication ty
**2. Manage Azure Data Explorer database permissions.** See [Manage Azure Data Explorer database permissions](/azure/data-explorer/manage-database-permissions) to know about Service Principal and manage permissions.
- **3. Create a credential entity in Metrics Advisor.** See how to [create a credential entity](#jump1) in Metrics Advisor, so that you can choose that entity when adding data feed for Service Principal authentication type.
+ **3. Create a credential entity in Metrics Advisor.** See how to [create a credential entity](how-tos/credential-entity.md) in Metrics Advisor, so that you can choose that entity when adding data feed for Service Principal authentication type.
Here's an example of connection string:
The following sections specify the parameters required for all authentication ty
Data Source=<URI Server>;Initial Catalog=<Database> ```
- * **Service Principal From Key Vault**: Key Vault helps to safeguard cryptographic keys and secret values that cloud apps and services use. By using Key Vault, you can encrypt keys and secret values. You should create a service principal first, and then store the service principal inside Key Vault. You can go through [Store service principal credentials in Azure Stack Hub Key Vault](/azure-stack/user/azure-stack-key-vault-store-credentials) to follow detailed procedure to set service principal from key vault.
+ * **Service Principal From Key Vault**: Key Vault helps to safeguard cryptographic keys and secret values that cloud apps and services use. By using Key Vault, you can encrypt keys and secret values. You should create a service principal first, and then store the service principal inside Key Vault. You can go through [Create a credential entity for Service Principal from Key Vault](how-tos/credential-entity.md#sp-from-kv) to follow detailed procedure to set service principal from key vault.
Here's an example of connection string: ``` Data Source=<URI Server>;Initial Catalog=<Database>
The following sections specify the parameters required for all authentication ty
* **Basic**: The **Account Name** of your Azure Data Lake Storage Gen2. This can be found in your Azure Storage Account (Azure Data Lake Storage Gen2) resource in **Access keys**.
- * **Azure Data Lake Storage Gen2 Shared Key**: First, you should specify the account key to access your Azure Data Lake Storage Gen2 (the same as Account Key in *Basic* authentication type). This could be found in Azure Storage Account (Azure Data Lake Storage Gen2) resource in **Access keys** setting. Then you should [create a credential entity](#jump1) for *Azure Data Lake Storage Gen2 Shared Key* type and fill in the account key.
+ * **Azure Data Lake Storage Gen2 Shared Key**: First, you should specify the account key to access your Azure Data Lake Storage Gen2 (the same as Account Key in *Basic* authentication type). This could be found in Azure Storage Account (Azure Data Lake Storage Gen2) resource in **Access keys** setting. Then you should [create a credential entity](how-tos/credential-entity.md) for *Azure Data Lake Storage Gen2 Shared Key* type and fill in the account key.
The account name is the same as *Basic* authentication type.
The following sections specify the parameters required for all authentication ty
4. Click **+ Add** and select **Add role assignment** from the dropdown menu. 5. Set the **Select** field to the Azure AD application name and set role to **Storage Blob Data Contributor**. Click **Save**.
- ![lake-service-principals](media/datafeeds/adls-gen2-app-reg-assign-roles.png)
+ ![lake-service-principals](media/datafeeds/adls-gen-2-app-reg-assign-roles.png)
- **Step 3:** [Create a credential entity](#jump1) in Metrics Advisor, so that you can choose that entity when adding data feed for Service Principal authentication type.
+ **Step 3:** [Create a credential entity](how-tos/credential-entity.md) in Metrics Advisor, so that you can choose that entity when adding data feed for Service Principal authentication type.
- * **Service Principal From Key Vault** authentication type: Key Vault helps to safeguard cryptographic keys and secret values that cloud apps and services use. By using Key Vault, you can encrypt keys and secret values. You should create a service principal first, and then store the service principal inside Key Vault. You can go through [Store service principal credentials in Azure Stack Hub Key Vault](/azure-stack/user/azure-stack-key-vault-store-credentials) to follow detailed procedure to set service principal from key vault.
+ * **Service Principal From Key Vault** authentication type: Key Vault helps to safeguard cryptographic keys and secret values that cloud apps and services use. By using Key Vault, you can encrypt keys and secret values. You should create a service principal first, and then store the service principal inside Key Vault. You can go through [Create a credential entity for Service Principal from Key Vault](how-tos/credential-entity.md#sp-from-kv) to follow detailed procedure to set service principal from key vault.
The account name is the same as *Basic* authentication type.
There are three authentication types for Azure Log Analytics, they are **Basic**
3. Click **+ Add** and select **Add role assignment** from the dropdown menu. 4. Set the **Select** field to the Azure AD application name and set role to **Storage Blob Data Contributor**. Click **Save**.
- ![lake-service-principals](media/datafeeds/adls-gen2-app-reg-assign-roles.png)
+ ![lake-service-principals](media/datafeeds/adls-gen-2-app-reg-assign-roles.png)
- **Step 3:** [Create a credential entity](#jump1) in Metrics Advisor, so that you can choose that entity when adding data feed for Service Principal authentication type.
+ **Step 3:** [Create a credential entity](how-tos/credential-entity.md) in Metrics Advisor, so that you can choose that entity when adding data feed for Service Principal authentication type.
-* **Service Principal From Key Vault** authentication type: Key Vault helps to safeguard cryptographic keys and secret values that cloud apps and services use. By using Key Vault, you can encrypt keys and secret values. You should create a service principal first, and then store the service principal inside Key Vault. You can go through [Store service principal credentials in Azure Stack Hub Key Vault](/azure-stack/user/azure-stack-key-vault-store-credentials) to follow detailed procedure to set service principal from key vault.
+* **Service Principal From Key Vault** authentication type: Key Vault helps to safeguard cryptographic keys and secret values that cloud apps and services use. By using Key Vault, you can encrypt keys and secret values. You should create a service principal first, and then store the service principal inside Key Vault. You can go through [Create a credential entity for Service Principal from Key Vault](how-tos/credential-entity.md#sp-from-kv) to follow detailed procedure to set service principal from key vault.
* **Query**: Specify the query of Log Analytics. For more information, see [Log queries in Azure Monitor](../../azure-monitor/logs/log-query-overview.md) Sample query:
- ```
+ ``` Kusto
[TableName]
- | where [TimestampColumn] >= @IntervalStart and [TimestampColumn] < @IntervalEnd
+ | where [TimestampColumn] >= datetime(@IntervalStart) and [TimestampColumn] < datetime(@IntervalEnd)
| summarize [count_per_dimension]=count() by [Dimension] ```
There are three authentication types for Azure Log Analytics, they are **Basic**
``` * <span id='jump'>**Managed Identity**</span>: Managed identity for Azure resources can authorize access to blob and queue data using Azure AD credentials from applications running in Azure virtual machines (VMs), function apps, virtual machine scale sets, and other services. By using managed identity for Azure resources together with Azure AD authentication, you can avoid storing credentials with your applications that run in the cloud. To [enable your managed entity](../../active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-sql.md), you can refer to following steps:
- 1. Enabling a system-assigned managed identity is a one-click experience. In Azure portal for your Metrics Advisor workspace, set the status as `on` in **Settings > Identity > System assigned**.
+ 1. **Enabling a system-assigned managed identity is a one-click experience.** In Azure portal for your Metrics Advisor workspace, set the status as `on` in **Settings > Identity > System assigned**.
![set status as on](media/datafeeds/set-identity-status.png)
- 1. Enable Azure AD authentication. In the Azure portal for your data source, click **Set admin** in **Settings > Active Directory admin**, select an **Azure AD user account** to be made an administrator of the server, and click **Select**.
+ 1. **Enable Azure AD authentication.** In the Azure portal for your data source, click **Set admin** in **Settings > Active Directory admin**, select an **Azure AD user account** to be made an administrator of the server, and click **Select**.
![set admin](media/datafeeds/set-admin.png)
- 1. In your database management tool, select **Active Directory - Universal with MFA support** in the authentication field. In the User name field, enter the name of the Azure AD account that you set as the server administrator in step 2, for example, test@contoso.com
+    1. **Enable managed identity (MI) in Metrics Advisor.** There are two ways to do this: edit the query in a **database management tool** or in the **Azure portal**.
+
+ **Management tool**: In your database management tool, select **Active Directory - Universal with MFA support** in the authentication field. In the User name field, enter the name of the Azure AD account that you set as the server administrator in step 2, for example, test@contoso.com
![set connection detail](media/datafeeds/connection-details.png)
+
+    **Azure portal**: Select the Query editor in your SQL database and sign in with the admin account.
+ ![edit query in Azure Portal](media/datafeeds/query-editor.png)
-
- 1. The last step is to enable managed identity(MI) in Metrics Advisor. In the **Object Explorer**, expand the **Databases** folder. Right-click on a user database and click **New query**. In the query window, you should enter the following line, and click Execute in the toolbar:
+ Then in the query window, you should execute the following lines (same for management tool method):
``` CREATE USER [MI Name] FROM EXTERNAL PROVIDER
There are three authentication types for Azure Log Analytics, they are **Basic**
**Step 2:** Follow the same steps with [managed identity in SQL Server](#jump), which is mentioned above.
- **Step 3:** [Create a credential entity](#jump1) in Metrics Advisor, so that you can choose that entity when adding data feed for Service Principal authentication type.
+ **Step 3:** [Create a credential entity](how-tos/credential-entity.md) in Metrics Advisor, so that you can choose that entity when adding data feed for Service Principal authentication type.
Here's an example of connection string:
There are three authentication types for Azure Log Analytics, they are **Basic**
Data Source=<Server>;Initial Catalog=<Database> ```
- * **Service Principal From Key Vault**: Key Vault helps to safeguard cryptographic keys and secret values that cloud apps and services use. By using Key Vault, you can encrypt keys and secret values. You should create a service principal first, and then store the service principal inside Key Vault. You can go through [Store service principal credentials in Azure Stack Hub Key Vault](/azure-stack/user/azure-stack-key-vault-store-credentials) to follow detailed procedure to set service principal from key vault. Also, your connection string could be found in Azure SQL Server resource in **Settings > Connection strings** section.
+ * **Service Principal From Key Vault**: Key Vault helps to safeguard cryptographic keys and secret values that cloud apps and services use. By using Key Vault, you can encrypt keys and secret values. You should create a service principal first, and then store the service principal inside Key Vault. You can go through [Create a credential entity for Service Principal from Key Vault](how-tos/credential-entity.md#sp-from-kv) to follow detailed procedure to set service principal from key vault. Also, your connection string could be found in Azure SQL Server resource in **Settings > Connection strings** section.
Here's an example of connection string:
There are three authentication types for Azure Log Analytics, they are **Basic**
For more information, refer to the [tutorial on writing a valid query](tutorials/write-a-valid-query.md) for more specific examples.
+<!--
## <span id="es">Elasticsearch</span> * **Host**: Specify the master host of Elasticsearch Cluster. * **Port**: Specify the master port of Elasticsearch Cluster. * **Authorization Header**: Specify the authorization header value of Elasticsearch Cluster.
-* **Query**: Specify the query to get data. Placeholder `@StartTime` is supported. For example, when data of `2020-06-21T00:00:00Z` is ingested, `@StartTime = 2020-06-21T00:00:00`.
+* **Query**: Specify the query to get data. Placeholder `@IntervalStart` is supported. For example, when data of `2020-06-21T00:00:00Z` is ingested, `@IntervalStart = 2020-06-21T00:00:00`.
* **Request URL**: An HTTP url that can return a JSON. The placeholders %Y,%m,%d,%h,%M are supported: %Y=year in format yyyy, %m=month in format MM, %d=day in format dd, %h=hour in format HH, %M=minute in format mm. For example: `http://microsoft.com/ProjectA/%Y/%m/X_%Y-%m-%d-%h-%M`. * **Request HTTP method**: Use GET or POST. * **Request header**: Could add basic authentication.
-* **Request payload**: Only JSON payload is supported. Placeholder @StartTime is supported in the payload. The response should be in the following JSON format: `[{"timestamp": "2018-01-01T00:00:00Z", "market":"en-us", "count":11, "revenue":1.23}, {"timestamp": "2018-01-01T00:00:00Z", "market":"zh-cn", "count":22, "revenue":4.56}]`. For example, when data of `2020-06-21T00:00:00Z` is ingested, `@StartTime = 2020-06-21T00:00:00.0000000+00:00)`.
+* **Request payload**: Only JSON payload is supported. Placeholder @IntervalStart is supported in the payload. The response should be in the following JSON format: `[{"timestamp": "2018-01-01T00:00:00Z", "market":"en-us", "count":11, "revenue":1.23}, {"timestamp": "2018-01-01T00:00:00Z", "market":"zh-cn", "count":22, "revenue":4.56}]`. For example, when data of `2020-06-21T00:00:00Z` is ingested, `@IntervalStart = 2020-06-21T00:00:00.0000000+00:00)`.
+-->
## <span id="influxdb">InfluxDB (InfluxQL)</span>
cognitive-services Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/encryption.md
Previously updated : 09/10/2020 Last updated : 07/02/2021 #Customer intent: As a user of the Metrics Advisor service, I want to learn how encryption at rest works. # Metrics Advisor service encryption of data at rest
-The Metrics Advisor service automatically encrypts your data when persisted it to the cloud. The Metrics Advisor service encryption protects your data and to help you to meet your organizational security and compliance commitments.
+Metrics Advisor service automatically encrypts your data when it is persisted to the cloud. The Metrics Advisor service encryption protects your data and helps you to meet your organizational security and compliance commitments.
[!INCLUDE [cognitive-services-about-encryption](../includes/cognitive-services-about-encryption.md)]
-> [!IMPORTANT]
-> Customer-managed keys are only available on the E0 pricing tier. To request the ability to use customer-managed keys, fill out and submit the [Metrics Advisor Service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk). It will take approximately 3-5 business days to hear back on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once approved for using CMK with the Metrics Advisor service, you will need to create a new Metrics Advisor resource and select E0 as the Pricing Tier. Once your Metrics Advisor resource with the E0 pricing tier is created, you can use Azure Key Vault to set up your managed identity.
+Metrics Advisor supports customer-managed keys (CMK) and double encryption by using BYOS (bring your own storage).
+## Steps to create a Metrics Advisor with BYOS
+### Step 1. Create an Azure Database for PostgreSQL and set the admin
+
+- Create an Azure Database for PostgreSQL
+
+    Log in to the Azure portal and create an Azure Database for PostgreSQL resource. A couple of things to note:
+
+    1. Select the **'Single Server'** deployment option.
+    2. For 'Data source', specify **'None'**.
+    3. For 'Location', make sure to create the resource in the **same location** as your Metrics Advisor resource.
+    4. 'Version' should be set to **11**.
+    5. For 'Compute + storage', choose a 'Memory Optimized' SKU with at least **32 vCores**.
+
+ ![Create an Azure Database for PostgreSQL](media/cmk-create.png)
+
+- Set the Active Directory admin for the newly created PostgreSQL server
+
+    After successfully creating your Azure Database for PostgreSQL, go to the resource page of the newly created resource. Select the 'Active Directory admin' tab and set yourself as the admin.
++
+### Step 2. Create a Metrics Advisor resource and enable Managed Identity
+
+- Create a Metrics Advisor resource in the Azure portal
+
+    Go to the Azure portal again and search for 'Metrics Advisor'. When creating the Metrics Advisor resource, remember the following:
+
+    1. Choose the **same region** as the Azure Database for PostgreSQL you created.
+ 2. Mark 'Bring your own storage' as **'Yes'** and select the Azure Database for PostgreSQL you just created in the dropdown list.
+
+- Enable the Managed Identity for Metrics Advisor
+
+ After creating the Metrics Advisor resource, select 'Identity' and set 'Status' to **'On'** to enable Managed Identity.
+
+- Get Application ID of Managed Identity
+
+    Go to Azure Active Directory and select 'Enterprise applications'. Change 'Application type' to **'Managed Identity'**, copy the resource name of your Metrics Advisor resource, and search for it. You'll then be able to view the 'Application ID' in the query result; copy it.
+
+### Step 3. Grant Metrics Advisor access permission to your Azure Database for PostgreSQL
+
+- Grant the **'Owner'** role to the Managed Identity on your Azure Database for PostgreSQL
+
+- Set firewall rules
+
+ 1. Set 'Allow access to Azure services' as 'Yes'.
+    2. Add your client IP address so you can log in to the Azure Database for PostgreSQL.
+
+- Get the access token for your account with the resource type 'https://ossrdbms-aad.database.windows.net'. The access token is the password you need to log in to the Azure Database for PostgreSQL with your account. An example using the `az` client:
+
+ ```
+ az login
+ az account get-access-token --resource https://ossrdbms-aad.database.windows.net
+ ```
+
+- After getting the token, use it to log in to your Azure Database for PostgreSQL. Replace 'servername' with the one that you can find in the 'Overview' of your Azure Database for PostgreSQL.
+
+ ```
+ export PGPASSWORD=<access-token>
+ psql -h <servername> -U <adminaccount@servername> -d postgres
+ ```
+
+- After logging in, execute the following commands to grant Metrics Advisor access permission to the Azure Database for PostgreSQL. Replace 'appid' with the value that you got in Step 2.
+
+ ```
+ SET aad_validate_oids_in_tenant = off;
+ CREATE ROLE metricsadvisor WITH LOGIN PASSWORD '<appid>' IN ROLE azure_ad_user;
+ ALTER ROLE metricsadvisor CREATEDB;
+ GRANT azure_pg_admin TO metricsadvisor;
+ ```
+
+By completing all the above steps, you've successfully created a Metrics Advisor resource with CMK support. Wait a couple of minutes until your Metrics Advisor resource is accessible.
## Next steps
cognitive-services Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/how-tos/alerts.md
After you select **OK**, a Teams hook will be created. You can use it in any ale
### Web hook
-> [!Note]
-> * Use the **POST** request method.
-> * The request body wil be similar to:
- `{"timestamp":"2019-09-11T00:00:00Z","alertSettingGuid":"49635104-1234-4c1c-b94a-744fc920a9eb"}`
-> * When a web hook is created or modified, the API will be called as a test with an empty request body. Your API needs to return a 200 HTTP code.
+A web hook is another notification channel that uses an endpoint provided by the customer. Any anomaly detected on the time series will be notified through the web hook. There are several steps to enable a web hook as an alert notification channel within Metrics Advisor.
+
+**Step 1.** Enable Managed Identity in your Metrics Advisor resource
+
+A system-assigned managed identity is restricted to one per resource and is tied to the lifecycle of this resource. You can grant permissions to the managed identity by using Azure role-based access control (Azure RBAC). The managed identity is authenticated with Azure AD, so you don't have to store any credentials in code.
+
+Go to your Metrics Advisor resource in the Azure portal, select "Identity", and turn the status to "On" to enable Managed Identity.
+
+**Step 2.** Create a web hook in your Metrics Advisor workspace
+
+Log in to your workspace, select the "Hooks" tab, and then select the "Create hook" button.
-A web hook is the entry point for all the information available from the Metrics Advisor service, and calls a user-provided API when an alert is triggered. All alerts can be sent through a web hook.
To create a web hook, you will need to add the following information: |Parameter |Description | |||
-|Endpoint | The API address to be called when an alert is triggered. |
+|Endpoint | The API address to be called when an alert is triggered. **Must be HTTPS**. |
|Username / Password | For authenticating to the API address. Leave this blank if authentication isn't needed. | |Header | Custom headers in the API call. |
+|Certificate identifier in Azure Key Vault| If accessing the endpoint needs to be authenticated by a certificate, store the certificate in Azure Key Vault and enter its identifier here. |
+
+> [!Note]
+> When a web hook is created or modified, the endpoint will be called as a test with **an empty request body**. Your API needs to return a 200 HTTP code to successfully pass the validation.
:::image type="content" source="../media/alerts/create-web-hook.png" alt-text="web hook creation window.":::
-When a notification is pushed through a web hook, you can use the following APIs to get details of the alert. Set the *timestamp* and *alertSettingGuid* in your API service, which is being pushed to, then use the following queries:
-- `query_alert_result_anomalies`-- `query_alert_result_incidents`
+When an alert is triggered, the web hook endpoint is called as follows:
+
+- Request method is **POST**
+- Timeout: 30 seconds
+- Retries for 5xx errors; other errors are ignored. 301/302 redirect requests are not followed.
+- Request body:
+```json
+{
+"value": [{
+ "hookId": "b0f27e91-28cf-4aa2-aa66-ac0275df14dd",
+ "alertType": "Anomaly",
+ "alertInfo": {
+ "anomalyAlertingConfigurationId": "1bc6052e-9a2a-430b-9cbd-80cd07a78c64",
+ "alertId": "172536dbc00",
+ "timestamp": "2020-05-27T00:00:00Z",
+ "createdTime": "2020-05-29T10:04:45.590Z",
+ "modifiedTime": "2020-05-29T10:04:45.590Z"
+ },
+ "callBackUrl": "https://kensho2-api.azurewebsites.net/alert/anomaly/configurations/1bc6052e-9a2a-430b-9cbd-80cd07a78c64/alerts/172536dbc00/incidents"
+}]
+}
+```
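+
+For illustration only, here is a minimal sketch of a receiving endpoint (Flask is assumed as the web framework; it is not part of Metrics Advisor) that passes the empty-body validation call with a 200 response and logs the `callBackUrl` from real notifications:
+
+```python
+from flask import Flask, request, jsonify
+
+app = Flask(__name__)
+
+
+@app.route("/metrics-advisor-hook", methods=["POST"])
+def metrics_advisor_hook():
+    # The validation call sends an empty body; still return HTTP 200 so the hook is accepted.
+    payload = request.get_json(silent=True)
+    if payload:
+        for alert in payload.get("value", []):
+            # callBackUrl points at the incidents API for this alert.
+            print(alert["alertInfo"]["alertId"], alert["callBackUrl"])
+    return jsonify({"status": "received"}), 200
+
+
+if __name__ == "__main__":
+    # In production the endpoint must be served over HTTPS.
+    app.run(port=5000)
+```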
+
+**Step 3 (optional).** Store your certificate in Azure Key Vault and get the identifier
+As mentioned, if accessing the endpoint needs to be authenticated by a certificate, the certificate should be stored in Azure Key Vault.
+
+- Check [Set and retrieve a certificate from Azure Key Vault using the Azure portal](../../../key-vault/certificates/quick-create-portal.md)
+- Select the certificate you've added, and then copy the "Certificate identifier".
+- Then select "Access policies" and "Add access policy". Grant the "Get" permission under "Key permissions", "Secret permissions", and "Certificate permissions". Select the name of your Metrics Advisor resource as the principal. Select "Add", and then "Save" on the "Access policies" page.
+
+**Step 4.** Receive anomaly notifications
+When a notification is pushed through the web hook, you can fetch incident data by calling the "callBackUrl" in the web hook request body. Details for this API:
+
+- [/alert/anomaly/configurations/{configurationId}/alerts/{alertId}/incidents](https://westus2.dev.cognitive.microsoft.com/docs/services/MetricsAdvisor/operations/getIncidentsFromAlertByAnomalyAlertingConfiguration)
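+
+As a hedged sketch (the header names and key values below are assumptions to replace with the keys of your own Metrics Advisor resource), the incidents behind an alert could be fetched from the `callBackUrl` like this:
+
+```python
+import requests
+
+SUBSCRIPTION_KEY = "<ocp-apim-subscription-key>"  # placeholder
+API_KEY = "<metrics-advisor-api-key>"             # placeholder
+
+
+def fetch_incidents(callback_url: str) -> list:
+    """Call the callBackUrl received in the web hook payload to get incident details."""
+    response = requests.get(
+        callback_url,
+        headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY, "x-api-key": API_KEY},
+    )
+    response.raise_for_status()
+    return response.json().get("value", [])
+
+
+for incident in fetch_incidents("<callBackUrl from the notification>"):
+    print(incident)
+```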
By using web hook and Azure Logic Apps, it's possible to send email notification **without an SMTP server configured**. Refer to the tutorial of [enable anomaly notification in Metrics Advisor](../tutorials/enable-anomaly-notification.md#send-notifications-with-logic-apps-teams-and-smtp) for detailed steps.
cognitive-services Credential Entity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/how-tos/credential-entity.md
+
+ Title: Create a credential entity
+
+description: Learn how to create a credential entity to manage your credentials securely.
++++++ Last updated : 06/22/2021+++
+# How-to: Create a credential entity
+
+When onboarding a data feed, you should select an authentication type. Some authentication types, like *Azure SQL Connection String* and *Service Principal*, need a credential entity to store credential-related information so that your credentials are managed securely. This article explains how to create a credential entity for different credential types in Metrics Advisor.
+
+
+## Basic procedure: Create a credential entity
+
+You can create a **credential entity** to store credential-related information and use it for authenticating to your data sources. You can share the credential entity with others and enable them to connect to your data sources without sharing the real credentials. A credential entity can be created in the 'Adding data feed' tab or the 'Credential entity' tab. After creating a credential entity for a specific authentication type, you can simply choose it when adding a new data feed, which is helpful when creating multiple data feeds. The general procedure for creating and using a credential entity is shown below:
+
+1. Select '+' to create a new credential entity in 'Adding data feed' tab (you can also create one in 'Credential entity feed' tab).
+
+ ![create credential entity](../media/create-credential-entity.png)
+
+2. Set the credential entity name, description (if needed), credential type (equivalent to the *authentication type*), and other settings.
+
+ ![set credential entity](../media/set-credential-entity.png)
+
+3. After creating a credential entity, you can choose it when specifying authentication type.
+
+ ![choose credential entity](../media/choose-credential-entity.png)
+
+There are **four credential types** in Metrics Advisor: Azure SQL Connection String, Azure Data Lake Storage Gen2 Shared Key Entity, Service Principal, and Service Principal from Key Vault. For the settings of each credential type, see the following instructions.
+
+## Azure SQL Connection String
+
+You should set the **Name** and **Connection String**, then select 'create'.
+
+![set credential entity for sql connection string](../media/credential-entity/credential-entity-sql-connection-string.png)
+
+## Azure Data Lake Storage Gen2 Shared Key Entity
+
+You should set the **Name** and **Account Key**, then select 'create'. The account key can be found in your Azure Storage Account (Azure Data Lake Storage Gen2) resource under the **Access keys** setting.
+
+<!-- TODO: Add an explanation for the Basic type; the tips are wrong; add a note on how to manage credential entities; add a link to Step 1. -->
+![set credential entity for data lake](../media/credential-entity/credential-entity-data-lake.png)
+
+## Service principal
+
+To create a service principal for your data source, you can follow the detailed instructions in [Connect different data sources](../data-feeds-from-different-sources.md). After creating a service principal, you need to fill in the following configurations in the credential entity.
+
+![sp credential entity](../media/credential-entity/credential-entity-service-principal.png)
+
+* **Name:** Set a name for your service principal credential entity.
+* **Tenant ID & Client ID:** After creating a service principal in Azure portal, you can find `Tenant ID` and `Client ID` in **Overview**.
+
+ ![sp client ID and tenant ID](../media/credential-entity/sp-client-tenant-id.png)
+
+* **Client Secret:** After creating a service principal in Azure portal, you should go to **Certificates & Secrets** to create a new client secret, and the **value** should be used as `Client Secret` in credential entity. (Note: The value only appears once, so it's better to store it somewhere.)
++
+ ![sp Client secret value](../media/credential-entity/sp-secret-value.png)
+
+## <span id="sp-from-kv">Service principal from Key Vault</span>
+
+There are several steps to create a service principal from key vault.
+
+**Step 1. Create a Service Principal and grant it access to your database.** You can follow detailed instructions in [Connect different data sources](../data-feeds-from-different-sources.md), in creating service principal section for each data source.
+
+After creating a service principal in Azure portal, you can find `Tenant ID` and `Client ID` in **Overview**. The **Directory (tenant) ID** should be `Tenant ID` in credential entity configurations.
+
+![sp client ID and tenant ID](../media/credential-entity/sp-client-tenant-id.png)
+
+**Step 2. Create a new client secret.** You should go to **Certificates & Secrets** to create a new client secret, and the **value** will be used in next steps. (Note: The value only appears once, so it's better to store it somewhere.)
+
+![sp Client secret value](../media/credential-entity/sp-secret-value.png)
+
+**Step 3. Create a key vault.** In [Azure portal](https://ms.portal.azure.com/#home), select **Key vaults** to create one.
+
+![create a key vault in azure portal](../media/credential-entity/create-key-vault.png)
+
+After creating a key vault, the **Vault URI** is the `Key Vault Endpoint` in MA (Metrics Advisor) credential entity.
+
+![key vault endpoint](../media/credential-entity/key-vault-endpoint.png)
+
+**Step 4. Create secrets for Key Vault.** In Azure portal for key vault, generate two secrets in **Settings->Secrets**.
+The first is for the `Service Principal Client ID`, and the other is for the `Service Principal Client Secret`; both of their names will be used in the credential entity configuration.
+
+![generate secrets](../media/credential-entity/generate-secrets.png)
+
+* **Service Principal Client ID:** Set a `Name` for this secret; the name will be used in the credential entity configuration, and the value should be your service principal's `Client ID` from **Step 1**.
+
+ ![secret1: sp client id](../media/credential-entity/secret-1-sp-client-id.png)
+
+* **Service Principal Client Secret:** Set a `Name` for this secret; the name will be used in the credential entity configuration, and the value should be your service principal's `Client Secret Value` from **Step 2**.
+
+ ![secret2: sp client secret](../media/credential-entity/secret-2-sp-secret-value.png)
+
+At this point, the *client ID* and *client secret* of the first service principal are stored in Key Vault. Next, you need to create another service principal that is used to access the key vault. Therefore, you should **create two service principals** in total: one whose client ID and client secret are stored in the key vault, and another that is used to access the key vault.
+
+**Step 5. Create a service principal to access the key vault.**
+
+1. Go to [Azure portal AAD (Azure Active Directory)](https://portal.azure.com/?trace=diagnostics&feature.customportal=false#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) and create a new registration.
+
+ ![create a new registration](../media/credential-entity/create-registration.png)
+
+ After creating the service principal, the **Application (client) ID** in Overview will be the `Key Vault Client ID` in credential entity configuration.
+
+2. In **Manage->Certificates & Secrets**, create a client secret by selecting 'New client secret'. Then you should **copy down the value**, because it appears only once. The value is `Key Vault Client Secret` in credential entity configuration.
+
+ ![add client secret](../media/credential-entity/add-client-secret.png)
+
+**Step 6. Grant the service principal access to Key Vault.** Go to the key vault resource you created. In **Settings->Access policies**, select 'Add Access Policy' to connect the key vault with the second service principal created in **Step 5**, and then select 'Save'.
+
+![grant sp to key vault](../media/credential-entity/grant-sp-to-kv.png)
++
+## Configurations conclusion
+To conclude, the credential entity configurations in Metrics Advisor for *Service Principal from Key Vault*, and the way to get them, are shown in the table below:
+
+| Configuration | How to get |
+|---|---|
+| Key Vault Endpoint | **Step 3:** Vault URI of key vault. |
+| Tenant ID | **Step 1:** Directory (tenant) ID of your first service principal. |
+| Key Vault Client ID | **Step 5:** The Application (client) ID of your second service principal. |
+| Key Vault Client Secret | **Step 5:** The client secret value of your second service principal. |
+| Service Principal Client ID Name | **Step 4:** The secret name you set for Client ID. |
+| Service Principal Client Secret Name | **Step 4:** The secret name you set for Client Secret Value. |
++
+## Next steps
+
+- [Onboard your data](onboard-your-data.md)
+- [Connect different data sources](../data-feeds-from-different-sources.md)
cognitive-services Metrics Graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/how-tos/metrics-graph.md
# How-to: Build a metrics graph to analyze related metrics
-Each metric in Metrics Advisor is monitored separately by a model that learns from historical data to predict future trends. Each metric has a separate model that is applied to it. In some cases however, several metrics may relate to each other, and anomalies need to be analyzed across multiple metrics. The **Metrics Graph** helps with this.
+Each time series in Metrics Advisor is monitored separately by a model that learns from historical data to predict future trends. Anomalies will be detected if any data point falls out of the historical pattern. In some cases, however, several metrics may relate to each other, and anomalies need to be analyzed across multiple metrics. **Metrics graph** is just the tool that helps with this.
-As an example, if you have different streams of telemetry in separate metrics, Metrics Advisor will monitor them separately. If anomalies in one metric cause anomalies in other metrics, finding those relationships and the root cause in your data can be helpful when addressing incidents. The metrics graph enables you to create a visual topology graph of found anomalies.
+For example, if you have several metrics that monitor your business from different perspectives, anomaly detection is applied to each of them separately. However, in real business cases, anomalies detected on multiple metrics may be related to each other, and discovering those relations and analyzing the root cause based on them is helpful when addressing real issues. The metrics graph helps automatically correlate anomalies detected on related metrics to accelerate the troubleshooting process.
## Select a metric to put the first node to the graph
-Click the **Metrics Graph** tab in the navigation bar. The first step in building a metrics graph is to put a node onto the graph. Select a data feed and metric at the top of the page. A node will appear in the bottom panel.
+Click the **Metrics graph** tab in the navigation bar. The first step for building a metrics graph is to put a node onto the graph. Select a data feed and a metric at the top of the page. A node will appear in the bottom panel.
:::image type="content" source="../media/graph/metrics-graph-select.png" alt-text="Select metric"::: ## Add a node/relation on existing node
-Next you need to add another node and specify a relation to an existing node(s). Select an existing node and right click on it. A context menu will appear with several options.
+Next, you need to add another node and specify a relation to an existing node(s). Select an existing node and right-click on it. A context menu will appear with several options.
-Click **Add relation**, and you will be able to choose another metric, and specify the relation type between the two nodes. You can also apply specific dimension filters.
+Select **Add relation**, and you will be able to choose another metric and specify the relation type between the two nodes. You can also apply specific dimension filters.
:::image type="content" source="../media/graph/metrics-graph-node-action.png" alt-text="Add a node and relation"::: After repeating the above steps, you will have a metrics graph describing the relations between all related metrics.
-**Hint on node colors**
-> [!TIP]
-> - When you select a metric and dimension filter, all the nodes with the same metric and dimension filter in the graph will be colored as **<font color=blue>blue</font>**.
-> - Unselected nodes that represent a metric in the graph will be colored **<font color=green>green</font>**.
-> - If there's an anomaly observed in the current metric, the node will be colored **<font color=red>red</font>**.
+
+There are other actions you can take on the graph:
+1. Delete a node
+2. Go to metrics
+3. Go to Incident Hub
+4. Expand
+5. Delete relation
+
+## Legend of metrics graph
+
+Each node on the graph represents a metric. There are four kinds of nodes in the metrics graph:
+
+- **Green node**: A metric whose current incident severity is low.
+- **Orange node**: A metric whose current incident severity is medium.
+- **Red node**: A metric whose current incident severity is high.
+- **Blue node**: A metric that doesn't have any anomaly severity.
+ ## View related metrics anomaly status in incident hub
-When the metrics graph is built, whenever an anomaly is detected on metrics within the graph, you will able to view related anomaly statuses, and get a high-level view of the incident.
+When the metrics graph is built, whenever an anomaly is detected on metrics within the graph, you will be able to view related anomaly statuses and get a high-level view of the incident.
Click into an incident within the graph and scroll down to **cross metrics analysis**, below the diagnostic information.
Click into an incident within the graph and scroll down to **cross metrics analy
- [Adjust anomaly detection using feedback](anomaly-feedback.md) - [Diagnose an incident](diagnose-an-incident.md).-- [Configure metrics and fine tune detection configuration](configure-metrics.md)
+- [Configure metrics and fine tune detection configuration](configure-metrics.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/overview.md
The workflow is simple: after onboarding your data, you can fine-tune the anomal
* [Introducing Metrics Advisor](https://www.youtube.com/watch?v=0Y26cJqZMIM) * [New to Cognitive Services](https://www.youtube.com/watch?v=7tCLJHdBZgM)
+## Data retention & limitations
+
+Metrics Advisor keeps at most **10,000** time intervals ([what is an interval?](tutorials/write-a-valid-query.md#what-is-an-interval)), counting forward from the current timestamp, regardless of whether data is available or not. Data that falls out of this window will be deleted. The data retention, mapped to a number of days for different metric granularities, is as follows:
+
+| Granularity (min) | Retention (days) |
+|---|---|
+| 1 | 6.94 |
+| 5 | 34.72 |
+| 15 | 104.1 |
+| 60 (=hourly) | 416.67 |
+| 1440 (=daily) | 10000.00 |
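+
+As a quick sanity check of the table above (a minimal sketch, not part of the service), the retention in days is simply 10,000 intervals multiplied by the granularity in minutes and divided by the minutes in a day:
+
+```python
+def retention_days(granularity_minutes: int, max_intervals: int = 10_000) -> float:
+    """Days covered by the retained intervals for a given metric granularity."""
+    return max_intervals * granularity_minutes / (24 * 60)
+
+
+for granularity in (1, 5, 15, 60, 1440):
+    print(granularity, round(retention_days(granularity), 2))
+# Prints approximately 6.94, 34.72, 104.17, 416.67 and 10000.0 days respectively.
+```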
+
+There are also further limitations. Please refer to the [FAQ](faq.yml#what-are-the-data-retention-and-limitations-of-metrics-advisor-) for more details.
+ ## Next steps * Explore a quickstart: [Monitor your first metric on web](quickstarts/web-portal.md).
-* Explore a quickstart: [Use the REST APIs to customize your solution](./quickstarts/rest-api-and-client-library.md).
+* Explore a quickstart: [Use the REST APIs to customize your solution](./quickstarts/rest-api-and-client-library.md).
communication-services Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/known-issues.md
-# Known issues: Azure Communication Services Calling SDKs
-This article provides information about limitations and known issues related to the Azure Communication Services Calling SDKs.
+# Known issues
+This article provides information about limitations and known issues related to the Azure Communication Services Calling SDKs and the Azure Communication Services Call Automation APIs.
> [!IMPORTANT] > There are multiple factors that can affect the quality of your calling experience. Refer to the **[network requirements](./voice-video-calling/network-requirements.md)** documentation to learn more about Communication Services network configuration and testing best practices.
+## Azure Communication Services Calling SDKs
-## JavaScript SDK
+### JavaScript SDK
This section provides information about known issues associated with the Azure Communication Services JavaScript voice and video calling SDKs.
-### Refreshing a page doesn't immediately remove the user from their call
+#### Refreshing a page doesn't immediately remove the user from their call
If a user is in a call and decides to refresh the page, the Communication Services media service won't remove this user immediately from the call. It will wait for the user to rejoin. The user will be removed from the call after the media service times out.
If the user rejoins with the same Communication Services user ID, they'll be rep
If the user was sending video before refreshing, the `videoStreams` collection will keep the previous stream information until the service times out and removes it. In this scenario, the application may decide to observe any new streams added to the collection and render one with the highest `id`.
-### It's not possible to render multiple previews from multiple devices on web
+#### It's not possible to render multiple previews from multiple devices on web
This is a known limitation. For more information, see the [calling SDK overview](./voice-video-calling/calling-sdk-features.md).
-### Enumerating devices isn't possible in Safari when the application runs on iOS or iPadOS
+#### Enumerating devices isn't possible in Safari when the application runs on iOS or iPadOS
Applications can't enumerate/select mic/speaker devices (like Bluetooth) on Safari iOS/iPad. This is a known operating system limitation. If you're using Safari on macOS, your app won't be able to enumerate/select speakers through the Communication Services Device Manager. In this scenario, devices must be selected via the OS. If you use Chrome on macOS, the app can enumerate/select devices through the Communication Services Device Manager.
-### Audio connectivity is lost when receiving SMS messages or calls during an ongoing VoIP call
+#### Audio connectivity is lost when receiving SMS messages or calls during an ongoing VoIP call
This problem may occur due to multiple reasons: - Some mobile browsers don't maintain connectivity while in the background state. This can lead to a degraded call experience if the VoIP call was interrupted by an event that pushes your application into the background.
This problem may occur due to multiple reasons:
<br/>Browsers: Safari, Chrome <br/>Operating System: iOS, Android
-### Repeatedly switching video devices may cause video streaming to temporarily stop
+#### Repeatedly switching video devices may cause video streaming to temporarily stop
Switching between video devices may cause your video stream to pause while the stream is acquired from the selected device.
-#### Possible causes
+##### Possible causes
Switching between devices frequently can cause performance degradation. Developers are encouraged to stop one device stream before starting another.
-### Bluetooth headset microphone is not detected therefore is not audible during the call on Safari on iOS
+#### Bluetooth headset microphone is not detected therefore is not audible during the call on Safari on iOS
Bluetooth headsets aren't supported by Safari on iOS. Your Bluetooth device won't be listed in available microphone options and other participants won't be able to hear you if you try using Bluetooth over Safari.
-#### Possible causes
+##### Possible causes
This is a known macOS/iOS/iPadOS operating system limitation. With Safari on **macOS** and **iOS/iPadOS**, it is not possible to enumerate/select speaker devices through the Communication Services Device Manager because speaker enumeration/selection is not supported by Safari. In this scenario, your device selection should be updated via the operating system.
-### Rotation of a device can create poor video quality
+#### Rotation of a device can create poor video quality
Users may experience degraded video quality when devices are rotated. <br/>Devices affected: Google Pixel 5, Google Pixel 3a, Apple iPad 8, and Apple iPad X
Users may experience degraded video quality when devices are rotated.
<br/>Operating System: iOS, Android
-### Camera switching makes the screen freeze
+#### Camera switching makes the screen freeze
When a Communication Services user joins a call using the JavaScript calling SDK and then hits the camera switch button, the UI may become unresponsive until the application is refreshed or the browser is pushed to the background by the user. <br/>Devices affected: Google Pixel 4a
When a Communication Services user joins a call using the JavaScript calling SDK
<br/>Operating System: iOS, Android
-#### Possible causes
+##### Possible causes
Under investigation.
-### If the video signal was stopped while the call is in "connecting" state, the video will not be sent after the call started
+#### If the video signal was stopped while the call is in "connecting" state, the video will not be sent after the call started
If users quickly turn video on and off while the call is in the `Connecting` state, this may lead to problems with the stream acquired for the call. We encourage developers to build their apps in a way that doesn't require video to be turned on and off while the call is in the `Connecting` state. This issue may cause degraded video performance in the following scenarios: - If the user starts with audio and then starts and stops video while the call is in `Connecting` state. - If the user starts with audio and then starts and stops video while the call is in `Lobby` state.
-#### Possible causes
+##### Possible causes
Under investigation.
-### Enumerating/accessing devices for Safari on MacOS and iOS
+#### Enumerating/accessing devices for Safari on MacOS and iOS
If access to devices is granted, device permissions are reset after some time. Safari on macOS and iOS doesn't keep permissions for long unless a stream is acquired. The simplest way to work around this is to call the DeviceManager.askDevicePermission() API before calling the device manager's device enumeration APIs (DeviceManager.getCameras(), DeviceManager.getSpeakers(), and DeviceManager.getMicrophones()). If the permissions are already granted, the user won't see anything; if not, they'll be prompted again. <br/>Devices affected: iPhone
If access to devices are granted, after some time, device permissions are reset.
<br/>Browsers: Safari <br/>Operating System: iOS
-### Sometimes it takes a long time to render remote participant videos
+#### Sometimes it takes a long time to render remote participant videos
During an ongoing group call, _User A_ sends video and then _User B_ joins the call. Sometimes, User B doesn't see video from User A, or User A's video begins rendering after a long delay. This issue could be caused by a network environment that requires further configuration. Refer to the [network requirements](./voice-video-calling/network-requirements.md) documentation for network configuration guidance.
-### Using 3rd party libraries to access GUM during the call may result in audio loss
+#### Using 3rd party libraries to access GUM during the call may result in audio loss
Using getUserMedia separately inside the application will result in losing the audio stream, because a third-party library takes over device access from the ACS library. Developers are encouraged to do the following: 1. Don't use third-party libraries that internally use the getUserMedia API during the call.
Developers are encouraged to do the following:
<br/>Browsers: Safari <br/>Operating System: iOS
-#### Possible causes
+##### Possible causes
In some browsers (Safari, for example), acquiring your own stream from the same device has the side effect of running into race conditions. Acquiring streams from other devices may leave the user with insufficient USB/IO bandwidth, and the sourceUnavailableError rate will skyrocket.
+## Azure Communication Services Call Automation APIs
+
+The following are known issues in the Azure Communication Services Call Automation APIs:
+
+- The only authentication supported at this time for server applications is using a connection string.
+
+- Calls should be made only between entities of the same Azure Communication Services resource. Cross-resource communication is blocked.
+
+- Calls between Teams tenant users and Azure Communication Services users or server application entities are not allowed.
+
+- If an application dials out to two or more PSTN identities and then quits the call, the call between the other PSTN entities would be dropped.
+
communication-services Manage Teams Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/manage-teams-identity.md
The Administrator role has extended permissions in AAD. Members of this role can
1. Contoso's Admin creates or selects an existing *Application* in Azure Active Directory. The *Supported account types* property defines whether users from different tenants can authenticate to the *Application*. The *Redirect URI* property redirects successful authentication requests to Contoso's *Server*. 1. Contoso's Admin extends the *Application*'s manifest with Azure Communication Services' VoIP permission.
+1. Contoso's Admin allows public client flow for the *Application*
+1. Contoso's Admin can optionally update the publisher domain for the *Application*
1. Contoso's Admin enables the experience via [this form](https://forms.office.com/r/B8p5KqCH19) 1. Contoso's Admin creates or selects an existing Communication Services resource that will be used to authenticate the exchange requests. AAD user tokens will be exchanged for Teams access tokens. You can read more about creating [new Azure Communication Services resources here](./create-communication-resource.md). 1. Fabrikam's Admin provisions a new service principal for Azure Communication Services in Fabrikam's tenant
When the *Application* is registered, you'll see an identifier in the overview.
In the *Authentication* pane of your *Application*, you can see the configured platform for *Public client/native(mobile & desktop)* with a Redirect URI pointing to *localhost*. At the bottom of the screen, you can find the *Allow public client flows* toggle, which for this quickstart will be set to **Yes**.
-### 3. Verify application (Optional)
-In the *Branding* pane, you can verify your platform within Microsoft identity platform. This one time process will remove requirement for Fabrikam's admin to give admin consent to this application. You can find details on how to verify your application [here](/azure/active-directory/develop/howto-configure-publisher-domain).
+### 3. Update publisher domain (Optional)
+In the *Branding* pane, you can update your publisher domain for the *Application*. This is useful for multitenant applications, where the application will be marked as verified by Azure. You can find details on how to verify publisher and how to update domain of your application [here](/azure/active-directory/develop/howto-configure-publisher-domain).
### 4. Define Azure Communication Services' VoIP permission in application
confidential-computing Confidential Nodes Aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-nodes-aks-faq.md
- Title: Frequently asked questions for confidential nodes support on Azure Kubernetes Service (AKS)
-description: Find answers to some of the common questions about Azure Kubernetes Service (AKS) & Azure Confidential Computing (ACC) nodes support.
---- Previously updated : 02/09/2020---
-# Frequently asked questions about Confidential Computing Nodes on Azure Kubernetes Service (AKS)
-
-This article addresses frequent questions about Intel SGX based confidential computing nodes on Azure Kubernetes Service (AKS). If you have any further questions, email **acconaks@microsoft.com**.
-
-<a name="1"></a>
-### Are the confidential computing nodes on AKS in GA? ###
-Yes
-
-<a name="2"></a>
-### What is attestation and how can we do attestation of apps running in enclaves? ###
-Attestation is the process of demonstrating and validating that a piece of software has been properly instantiated on the specific hardware platform. It also ensures its evidence is verifiable to provide assurances that it is running in a secure platform and has not been tampered with. [Read more](attestation.md) on how attestation is done for enclave apps.
-
-<a name="3"></a>
-### Can I enable Accelerated Networking with Azure confidential computing AKS Clusters? ###
-No. Accelerated Networking is not supported on DCSv2 Virtual machines that makeup confidential computing nodes on AKS.
-
-<a name="4"></a>
-### Can I bring my existing containerized applications and run it on AKS with Azure Confidential Computing? ###
-Yes, review the [confidential containers page](confidential-containers.md) for more details on platform enablers.
-
-<a name="5"></a>
-### What version of Intel SGX Driver version is on the AKS Image for confidential nodes? ###
-Currently, Azure confidential computing DCSv2 VMs are installed with Intel SGX DCAP 1.33.
-
-<a name="6"></a>
-### Can I inject post install scripts/customize drivers to the Nodes provisioned by AKS? ###
-No. [AKS-Engine based confidential computing nodes](https://github.com/Azure/aks-engine/blob/master/docs/topics/sgx.md) support confidential computing nodes that allow custom installations and have full control over your Kubernetes control plane.
-<a name="7"></a>
-
-### Should I be using a Docker base image to get started on enclave applications? ###
-Various enablers (ISVs and OSS projects) provide ways to enable confidential containers. Review the [confidential containers page](confidential-containers.md) for more details and individual references to implementations.
-
-<a name="8"></a>
-### Can I run ACC Nodes with other standard AKS SKUs (build a heterogenous node pool cluster)? ###
-
-Yes, you can run different node pools within the same AKS cluster including ACC nodes. To target your enclave applications on a specific node pool, consider adding node selectors or applying EPC limits. Refer to more details on the quick start on confidential nodes [here](confidential-nodes-aks-get-started.md).
-
-<a name="9"></a>
-### Can I run Windows Nodes and windows containers with ACC? ###
-Not at this time. Contact the product team at *acconaks@microsoft.com* if you have Windows nodes or container needs.
-
-<a name="10"></a>
-### What if my container size is more than available EPC memory? ###
-The EPC memory applies to the part of your application that is programmed to execute in the enclave. The total size of your container is not the right way to compare it with the max available EPC memory. In fact, DCSv2 machines with SGX, allow maximum VM memory of 32 GB where your untrusted part of the application would utilize. However, if your container consumes more than available EPC memory, then the performance of the portion of the program running in the enclave might be impacted.
-
-To better manage the EPC memory in the worker nodes, consider the EPC memory-based limits management through Kubernetes. Follow the example below as reference.
-
-```yaml
-apiVersion: batch/v1
-kind: Job
-metadata:
- name: sgx-test
- labels:
- app: sgx-test
-spec:
- template:
- metadata:
- labels:
- app: sgx-test
- spec:
- containers:
- - name: sgxtest
- image: oeciteam/sgx-test:1.0
- resources:
- limits:
- kubernetes.azure.com/sgx_epc_mem_in_MiB: 10 # This limit will automatically place the job into confidential computing node. Alternatively, you can target deployment to nodepools
- restartPolicy: Never
- backoffLimit: 0
-```
-<a name="11"></a>
-### What happens if my enclave consumes more than maximum available EPC memory? ###
-
-Total available EPC memory is shared between the enclave applications in the same VMs or worker nodes. If your application uses EPC memory more than available, then the application performance might be impacted. For this reason, we recommend you setting toleration per application in your deployment yaml file to better manage the available EPC memory per worker nodes as shown in the examples above. Alternatively, you can always choose to move up on the worker node pool VM sizes or add more nodes.
-
-<a name="12"></a>
-### Why can't I do forks () and exec to run multiple processes in my enclave application? ###
-
-Currently, Azure confidential computing DCsv2 SKU VMs support a single address space for the program executing in an enclave. Single process is a current limitation designed around high security. However, confidential container enablers may have alternate implementations to overcome this limitation.
-<a name="13"></a>
-### Do you automatically install any additional daemonset to expose the SGX drivers? ###
-
-Yes. The name of the daemonset is sgx-device-plugin. Read more on their respective purposes [here](confidential-nodes-aks-overview.md).
-
-<a name="14"></a>
-### What is the VM SKU I should be choosing for confidential computing nodes? ###
-
-DCSv2 SKUs. The [DCSv2 SKUs](../virtual-machines/dcv2-series.md) are available in the [supported regions](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines&regions=all).
-
-<a name="15"></a>
-### Can I still schedule and run non-enclave containers on confidential computing nodes? ###
-
-Yes. The VMs also have a regular memory that can run standard container workloads. Consider the security and threat model of your applications before you decide on the deployment models.
-<a name="16"></a>
-
-### Can I provision AKS with DCSv2 Node Pools through Azure portal? ###
-
-Yes. Azure CLI could also be used as an alternative as documented [here](confidential-nodes-aks-get-started.md).
-
-<a name="17"></a>
-### What Ubuntu version and VM generation is supported? ###
-18.04 on Gen 2.
-
-<a name="18"></a>
-### Can we change the current Intel SGX DCAP diver version on AKS? ###
-
-No. To perform any custom installations, we recommend you choose [AKS-Engine Confidential Computing Worker Nodes](https://github.com/Azure/aks-engine/blob/master/docs/topics/sgx.md) deployments.
-
-<a name="19"></a>
-
-### What version of Kubernetes do you support and recommend? ###
-
-We support and recommend Kubernetes version 1.16 and above.
-
-<a name="20"></a>
-### What are the known current limitations of the product? ###
--- Supports Ubuntu 18.04 Gen 2 VM Nodes only -- No Windows Nodes Support or Windows Containers Support-- EPC Memory based Horizontal Pod Autoscaling is not supported. CPU and regular memory-based scaling is supported.-- Dev Spaces on AKS for confidential apps are not currently supported-
-<a name="21"></a>
-### Will only signed and trusted images be loaded in the enclave for confidential computing? ###
-Not natively during enclave initialization but yes through attestation process signature can be validated. Ref [here](../attestation/basic-concepts.md#benefits-of-policy-signing).
-
-### Next Steps
-Review the [confidential containers page](confidential-containers.md) for more details around confidential containers.
connectors Apis List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/apis-list.md
Title: Connectors for Azure Logic Apps
-description: Overview about using connectors to build automated workflows with Azure Logic Apps. Learn how different triggers, actions, and connectors work.
+ Title: Connectors overview for Azure Logic Apps
+description: Learn about connectors and how they help you quickly and easily build automated integration workflows using Azure Logic Apps.
ms.suite: integration-+ Previously updated : 04/20/2021 Last updated : 07/01/2021
-# Connectors for Azure Logic Apps
+# About connectors in Azure Logic Apps
-In Azure Logic Apps, *connectors* help you quickly access data, events, and other resources from other apps, services, systems, protocols, and platforms. When you use connectors, you can build logic app workflows that use, process, and integrate information across cloud-based, on-premises, and hybrid environments - often without having to write any code.
+When you build workflows using Azure Logic Apps, you can use *connectors* to help you quickly and easily access data, events, and resources in other apps, services, systems, protocols, and platforms - often without writing any code. A connector provides prebuilt operations that you can use as steps in your workflows. Azure Logic Apps provides hundreds of connectors that you can use. If no connector is available for the resource that you want to access, you can use the generic HTTP operation to communicate with the service, or you can [create a custom connector](#custom-apis-and-connectors).
-You can choose from hundreds of connectors to use in your workflows. As a result, this documentation focuses on some popular and commonly used connectors for Logic Apps. For complete information about connectors across Logic Apps, Microsoft Power Automate, and Microsoft Power Apps, review the [Connectors documentation](/connectors). For information on pricing, review the [Logic Apps pricing model](../logic-apps/logic-apps-pricing.md), and [Logic Apps pricing details](https://azure.microsoft.com/pricing/details/logic-apps/).
+This overview offers an introduction to connectors, how they generally work, and the more popular and commonly used connectors in Azure Logic Apps. For more information, review the following documentation:
-> [!NOTE]
-> To integrate your workflow with a service or API that doesn't have a connector, you can either call
-> the service over a protocol, such as HTTP, or [create a custom connector](#custom-apis-and-connectors).
+* [Connectors overview for Azure Logic Apps, Microsoft Power Automate, and Microsoft Power Apps](/connectors)
+* [Connectors reference for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors)
+* [Pricing and billing models in Azure Logic Apps](../logic-apps/logic-apps-pricing.md)
+* [Azure Logic Apps pricing details](https://azure.microsoft.com/pricing/details/logic-apps/)
## What are connectors?
-Connectors provide *triggers* and *actions* that you use to perform tasks in your logic app's workflow. Each trigger and action has properties that you can configure. Some triggers and actions require that you [create and configure connections](#connection-configuration) so that your workflow can access a specific service or system.
+Technically, a connector is a proxy or a wrapper around an API that the underlying service uses to communicate with Azure Logic Apps. This connector provides operations that you use in your workflows to perform tasks. An operation is available either as a *trigger* or *action* with properties you can configure. Some triggers and actions also require that you first [create and configure a connection](#connection-configuration) to the underlying service or system, for example, so that you can authenticate access to a user account.
### Triggers
An *action* is an operation that follows the trigger and performs some kind of t
## Connector categories
-In Logic Apps, most triggers and actions are available in either a *built-in* version or *managed connector* version. A small number of triggers and actions are available in both versions. The versions available depend on whether you create a multi-tenant logic app or a single-tenant logic app, which is currently available only in [single-tenant Azure Logic Apps](../logic-apps/single-tenant-overview-compare.md).
+In Logic Apps, most triggers and actions are available in either a *built-in* version or *managed connector* version. A few triggers and actions are available in both versions. The versions available depend on whether you create a multi-tenant logic app or a single-tenant logic app, which is currently available only in [single-tenant Azure Logic Apps](../logic-apps/single-tenant-overview-compare.md).
[Built-in triggers and actions](built-in.md) run natively on the Logic Apps runtime, don't require creating connections, and perform these kinds of tasks:
To make sure that your workflow runs at your specified start time and doesn't mi
* When DST takes effect, manually adjust the recurrence so that your workflow continues to run at the expected time. Otherwise, the start time shifts one hour forward when DST starts and one hour backward when DST ends. For more information and examples, review [Recurrence for daylight saving time and standard time](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md#daylight-saving-standard-time).
-* If you're using a **Recurrence** trigger, specify a time zone, a start date and time. In addition, configure specific times to run subsequent recurrences in the properties **At these hours** and **At these minutes**, which are available only for the **Day** and **Week** frequencies. However, some time windows might still cause problems when the time shifts.
+* If you're using a **Recurrence** trigger, specify a time zone, a start date, and start time. In addition, configure specific times to run subsequent recurrences in the properties **At these hours** and **At these minutes**, which are available only for the **Day** and **Week** frequencies. However, some time windows might still cause problems when the time shifts.
* Consider using a [**Sliding Window** trigger](connectors-native-sliding-window.md) instead of a **Recurrence** trigger to avoid missed recurrences.
For workflows that need direct access to resources in an Azure virtual network,
Custom connectors created within an ISE don't work with the on-premises data gateway. However, these connectors can directly access on-premises data sources that are connected to an Azure virtual network hosting the ISE. So, logic apps in an ISE most likely don't need the data gateway when communicating with those resources. If you have custom connectors that you created outside an ISE that require the on-premises data gateway, logic apps in an ISE can use those connectors.
-In the Logic Apps Designer, when you browse the built-in triggers and actions or managed connectors that you want to use for logic apps in an ISE, the **CORE** label appears on built-in triggers and actions, while the **ISE** label appears on managed connectors that are specifically designed to work with an ISE.
+In the Logic Apps Designer, when you browse the built-in triggers and actions or managed connectors that you want to use for logic apps in an ISE, the **CORE** label appears on built-in triggers and actions, while the **ISE** label appears on managed connectors that are designed to work with an ISE.
:::row::: :::column:::
cosmos-db Consistency Levels https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/consistency-levels.md
Read consistency applies to a single read operation scoped within a logical part
You can configure the default consistency level on your Azure Cosmos account at any time. The default consistency level configured on your account applies to all Azure Cosmos databases and containers under that account. All reads and queries issued against a container or a database use the specified consistency level by default. To learn more, see how to [configure the default consistency level](how-to-manage-consistency.md#configure-the-default-consistency-level). You can also override the default consistency level for a specific request. To learn more, see the [Override the default consistency level](how-to-manage-consistency.md?#override-the-default-consistency-level) article.
+> [!TIP]
+> Overriding the default consistency level only applies to reads within the SDK client. An account configured for strong consistency by default will still write and replicate data synchronously to every region in the account. When the SDK client instance or request overrides this with Session or weaker consistency, reads will be performed using a single replica. See [Consistency levels and throughput](consistency-levels.md#consistency-levels-and-throughput) for more details.
+ > [!IMPORTANT] > It is required to recreate any SDK instance after changing the default consistency level. This can be done by restarting the application. This ensures the SDK uses the new default consistency level.
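As an illustration only (not part of the article), here's a minimal sketch of a client-level override using the Python SDK. The endpoint and key are placeholders, and it assumes the account's default consistency is stronger than Session and that the installed azure-cosmos package supports the `consistency_level` option.

```python
# Minimal sketch: create a Cosmos client whose reads use Session consistency,
# relaxing a stronger account default. Endpoint and key are placeholders.
from azure.cosmos import CosmosClient

client = CosmosClient(
    url="https://<your-account>.documents.azure.com:443/",
    credential="<your-account-key>",
    consistency_level="Session",  # override applies to reads made through this client
)
```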
cosmos-db Create Cassandra Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-cassandra-python.md
Now go back to the Azure portal to get your connection string information and co
2. Run the following commands to install the required modules: ```python
- python -m pip install cassandra-driver
+ python -m pip install cassandra-driver==3.20.2
   python -m pip install prettytable
   python -m pip install requests
   python -m pip install pyopenssl
   ```
+ > [!NOTE]
+ > We recommend Python driver version **3.20.2** for use with Cassandra API. Higher versions may cause errors.
+ 2. Run the following command to start your Python application: ```
cosmos-db How To Manage Consistency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-manage-consistency.md
Title: Manage consistency in Azure Cosmos DB description: Learn how to configure and manage consistency levels in Azure Cosmos DB using Azure portal, .NET SDK, Java SDK and various other SDKs-+ Previously updated : 06/10/2020- Last updated : 07/02/2021+
Update-AzCosmosDBAccount -ResourceGroupName $resourceGroupName `
Clients can override the default consistency level that is set by the service. The consistency level can be set on a per-request basis, which overrides the default consistency level set at the account level. > [!TIP]
-> Consistency can only be **relaxed** at the request level. To move from weaker to stronger consistency, update the default consistency for the Cosmos account.
+> Consistency can only be **relaxed** at the SDK instance or request level. To move from weaker to stronger consistency, update the default consistency for the Cosmos account.
+
+> [!TIP]
+> Overriding the default consistency level only applies to reads within the SDK client. An account configured for strong consistency by default will still write and replicate data synchronously to every region in the account. When the SDK client instance or request overrides this with Session or weaker consistency, reads will be performed using a single replica. See [Consistency levels and throughput](consistency-levels.md#consistency-levels-and-throughput) for more details.
### <a id="override-default-consistency-dotnet"></a>.NET SDK
cosmos-db How To Write Stored Procedures Triggers Udfs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-write-stored-procedures-triggers-udfs.md
function async_sample() {
if (err) reject(err); resolve({ feed, options }); });
- if (!isAccepted) reject(new Error(ERROR_CODE.NotAccepted, "replaceDocument was not accepted."));
+ if (!isAccepted) reject(new Error(ERROR_CODE.NotAccepted, "queryDocuments was not accepted."));
}); },
cosmos-db Managed Identity Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/managed-identity-based-authentication.md
Previously updated : 06/08/2021 Last updated : 07/02/2021
# Use system-assigned managed identities to access Azure Cosmos DB data [!INCLUDE[appliesto-sql-api](includes/appliesto-sql-api.md)]
+> [!TIP]
+> [Data plane role-based access control (RBAC)](how-to-setup-rbac.md) is now available on Azure Cosmos DB, providing a seamless way to authorize your requests with Azure Active Directory.
+ In this article, you'll set up a *robust, key rotation agnostic* solution to access Azure Cosmos DB keys by using [managed identities](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md). The example in this article uses Azure Functions, but you can use any service that supports managed identities. You'll learn how to create a function app that can access Azure Cosmos DB data without needing to copy any Azure Cosmos DB keys. The function app will wake up every minute and record the current temperature of an aquarium fish tank. To learn how to set up a timer-triggered function app, see the [Create a function in Azure that is triggered by a timer](../azure-functions/functions-create-scheduled-function.md) article.
-To simplify the scenario, a [Time To Live](./time-to-live.md) setting is already configured to clean up older temperature documents.
+To simplify the scenario, a [Time To Live](./time-to-live.md) setting is already configured to clean up older temperature documents.
+
+> [!IMPORTANT]
+> Because this approach fetches your account's primary key through the Azure Cosmos DB control plane, it will not work if [a read-only lock has been applied](../azure-resource-manager/management/lock-resources.md) to your account. In this situation, consider using the Azure Cosmos DB [data plane RBAC](how-to-setup-rbac.md) instead.
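To make the flow concrete, here's a minimal, hedged sketch (not the article's function code) of fetching the account keys through the control plane with a managed identity and then building a data-plane client. The subscription, resource group, and account names are placeholders, and it assumes the azure-identity, azure-mgmt-cosmosdb, and azure-cosmos packages plus an appropriate role assignment for the identity.

```python
# Minimal sketch: use the managed identity to list the account keys (control plane),
# then create a data-plane client with the retrieved primary key. All names below
# are placeholders for illustration only.
from azure.identity import ManagedIdentityCredential
from azure.mgmt.cosmosdb import CosmosDBManagementClient
from azure.cosmos import CosmosClient

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
ACCOUNT_NAME = "<cosmos-account>"

credential = ManagedIdentityCredential()
mgmt_client = CosmosDBManagementClient(credential, SUBSCRIPTION_ID)

# Control-plane call: no keys are stored in application settings.
keys = mgmt_client.database_accounts.list_keys(RESOURCE_GROUP, ACCOUNT_NAME)

# Data-plane client built from the freshly retrieved key.
cosmos_client = CosmosClient(
    url=f"https://{ACCOUNT_NAME}.documents.azure.com:443/",
    credential=keys.primary_master_key,
)
```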
## Assign a system-assigned managed identity to a function app
In this step, you'll assign a role to the function app's system-assigned managed
|[DocumentDB Account Contributor](../role-based-access-control/built-in-roles.md#documentdb-account-contributor)|Can manage Azure Cosmos DB accounts. Allows retrieval of read/write keys. | |[Cosmos DB Account Reader Role](../role-based-access-control/built-in-roles.md#cosmos-db-account-reader-role)|Can read Azure Cosmos DB account data. Allows retrieval of read keys. |
-> [!IMPORTANT]
-> Support for role-based access control in Azure Cosmos DB applies to control plane operations only. Data plane operations are secured through primary keys or resource tokens. To learn more, see the [Secure access to data](secure-access-to-data.md) article.
- > [!TIP] > When you assign roles, assign only the needed access. If your service requires only reading data, then assign the **Cosmos DB Account Reader** role to the managed identity. For more information about the importance of least privilege access, see the [Lower exposure of privileged accounts](../security/fundamentals/identity-management-best-practices.md#lower-exposure-of-privileged-accounts) article.
cost-management-billing Track Consumption Commitment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/track-consumption-commitment.md
+
+ Title: Track your Microsoft Azure Consumption Commitment (MACC)
+description: Learn how to track your Microsoft Azure Consumption Commitment (MACC) for a Microsoft Customer Agreement.
++
+tags: billing
+++ Last updated : 06/30/2021+++
+# Track your Microsoft Azure Consumption Commitment (MACC)
+
+The Microsoft Azure Consumption Commitment (MACC) is a contractual commitment that your organization may have made to Microsoft Azure spend over time. If your organization has a MACC for a Microsoft Customer Agreement (MCA) billing account, you can check important aspects of your commitment, including start and end dates, remaining commitment, and eligible spend in the Azure portal or through REST APIs. MACC or CTC for Enterprise Agreement (EA) billing accounts aren't yet available in the Azure portal or through REST APIs.
+
+## Track your MACC Commitment
+
+### [Azure portal](#tab/portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Search for **Cost Management + Billing**.
+ :::image type="content" source="./media/track-consumption-commitment/billing-search-cost-management-billing.png" alt-text="Screenshot showing search in portal for Cost Management + Billing." lightbox="./media/track-consumption-commitment/billing-search-cost-management-billing.png" :::
+3. In the billing scopes page, select the billing account for which you want to track the commitment. The billing account should be of type **Microsoft Customer Agreement**.
+ :::image type="content" source="./media/track-consumption-commitment/list-of-scopes.png" alt-text="Screenshot that shows Billing Scopes." lightbox="./media/track-consumption-commitment/list-of-scopes.png" :::
+ > [!NOTE]
+ > The Azure portal remembers the last billing scope that you accessed and displays that scope the next time you come to the Cost Management + Billing page. You won't see the billing scopes page if you have visited Cost Management + Billing earlier. If so, check that you're in the [right scope](#check-access-to-a-microsoft-customer-agreement). If not, [switch the scope](view-all-accounts.md#switch-billing-scope-in-the-azure-portal) to select the billing account for a Microsoft Customer Agreement.
+4. Select **Properties** from the left-hand side and then select **Microsoft Azure Consumption Commitment (MACC)**.
+ :::image type="content" source="./media/track-consumption-commitment/select-macc-tab.png" alt-text="Screenshot that shows selecting the MACC tab." lightbox="./media/track-consumption-commitment/select-macc-tab.png" :::
+5. The Microsoft Azure Consumption Commitment (MACC) tab has the following sections.
+
+#### Remaining Commitment
+
+The remaining commitment displays the remaining commitment amount since your last invoice.
++
+#### Details
+
+The Details section displays other important aspects of your commitment.
++
+| Term | Definition |
+|||
+| ID | An identifier that uniquely identifies your MACC. This identifier is used to get your MACC information through REST APIs. |
+| Purchase date | The date when you made the commitment. |
+| Start date | The date when the commitment became effective. |
+| End date | The date when the commitment expires. |
+| Commitment amount | The amount that you've committed to spend on MACC-eligible products/services. |
+| Status | The status of your commitment. |
+
+Your MACC can have one of the following statuses:
+
+- Active: MACC is active. Any eligible spend will contribute towards your MACC commitment.
+- Completed: You've completed your MACC commitment.
+- Expired: MACC is expired. Contact your Microsoft Account team for more information.
+- Canceled: MACC is canceled. New Azure spend won't contribute towards your MACC commitment.
+
+#### Events
+
+The Events section displays events (invoiced spend) that decremented your MACC commitment.
++
+| Term | Definition |
+|||
+| Date | The date when the event happened |
+| Description | A description of the event |
+| Billing profile | The billing profile for which the event happened |
+| MACC decrement | The amount of MACC decrement from the event |
+| Remaining commitment | The remaining MACC commitment after the event |
+
+### [REST API](#tab/rest)
+
+You can use the [Azure Billing](/rest/api/billing/) and the [Consumption](/rest/api/consumption/) APIs to programmatically get Microsoft Azure Consumption Commitment (MACC) for your billing account.
+
+The examples shown below use REST APIs. Currently, PowerShell and Azure CLI aren't supported.
+
+### Find billing accounts you have access to
+
+```json
+GET https://management.azure.com/providers/Microsoft.Billing/billingAccounts?api-version=2020-05-01
+```
+The API response returns a list of billing accounts.
+
+```json
+{
+ "value": [
+ {
+ "id": "/providers/Microsoft.Billing/billingAccounts/9a157b81-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx_xxxx-xx-xx",
+ "name": "9a157b81-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx_xxxx-xx-xx",
+ "type": "Microsoft.Billing/billingAccounts",
+ "properties": {
+ "displayName": "Contoso",
+ "agreementType": "MicrosoftCustomerAgreement",
+ "accountStatus": "Active",
+ "accountType": "Enterprise",
+ "hasReadAccess": true,
+ }
+ },
+ {
+ "id": "/providers/Microsoft.Billing/billingAccounts/9a12f056-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx_xxxx-xx-xx",
+ "name": "9a12f056-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx_xxxx-xx-xx",
+ "type": "Microsoft.Billing/billingAccounts",
+ "properties": {
+ "displayName": "Connie Wilson",
+ "agreementType": "MicrosoftCustomerAgreement",
+ "accountStatus": "Active",
+ "accountType": "Individual",
+ "hasReadAccess": true,
+ }
+ }
+ ]
+}
+```
+
+Use the `displayName` property of the billing account to identify the billing account for which you want to track MACC. Copy the `name` of the billing account. For example, if you want to track MACC for **Contoso** billing account, you'd copy `9a157b81-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx_xxxx-xx-xx`. Paste this value somewhere so that you can use it in the next step.
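If you prefer to script these calls, here's a minimal sketch (not part of the article) that lists billing accounts using the azure-identity and requests packages; it assumes the signed-in identity has read access to the billing accounts.

```python
# Minimal sketch: acquire an Azure Resource Manager token and list billing accounts,
# printing the values used in the later MACC requests.
import requests
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default").token
headers = {"Authorization": f"Bearer {token}"}

url = "https://management.azure.com/providers/Microsoft.Billing/billingAccounts"
response = requests.get(url, headers=headers, params={"api-version": "2020-05-01"})
response.raise_for_status()

for account in response.json()["value"]:
    props = account["properties"]
    print(props["displayName"], account["name"], props["agreementType"])
```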
+
+### Get list of Microsoft Azure Consumption Commitments
+
+Make the following request, replacing `<billingAccountName>` with the `name` copied in the first step (`9a157b81-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx_xxxx-xx-xx`).
+
+```json
+GET https://management.azure.com/providers/Microsoft.Billing/billingAccounts/<billingAccountName>/providers/Microsoft.Consumption/lots?api-version=2021-05-01&$filter=source%20eq%20%27ConsumptionCommitment%27
+```
+The API response returns lists of MACCs for your billing account.
+
+```json
+ {
+ "value": [
+ {
+ "id": "/providers/Microsoft.Billing/billingAccounts/9a157b81-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx_xxxx-xx-xx/providers/Microsoft.Consumption/lots/G2021032459206000XXXX",
+ "name": "G2021032459206000XXXX",
+ "type": "Microsoft.Consumption/lots",
+ "eTag": null,
+ "properties": {
+ "purchasedDate": "2021-03-24T16:26:46.0000000Z",
+ "status": "Active",
+ "originalAmount": {
+ "currency": "USD",
+ "value": 10000.0
+ },
+ "closedBalance": {
+ "currency": "USD",
+ "value": 9899.42
+ },
+ "source": "ConsumptionCommitment",
+ "startDate": "2021-03-01T00:00:00.0000000Z",
+ "expirationDate": "2024-02-28T00:00:00.0000000Z"
+ }
+ },
+ {
+ "id": "/providers/Microsoft.Billing/billingAccounts/9a157b81-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx_xxxx-xx-xx/providers/Microsoft.Consumption/lots/G1011082459206000XXXX",
+ "name": "G1011082459206000XXXX",
+ "type": "Microsoft.Consumption/lots",
+ "eTag": null,
+ "properties": {
+ "purchasedDate": "2021-03-24T16:26:46.0000000Z",
+ "status": "Complete",
+ "originalAmount": {
+ "currency": "USD",
+ "value": 10000.0
+ },
+ "closedBalance": {
+ "currency": "USD",
+ "value": 0.00
+ },
+ "source": "ConsumptionCommitment",
+ "startDate": "2020-03-01T00:00:00.0000000Z",
+ "expirationDate": "2021-02-28T00:00:00.0000000Z"
+ }
+ }
+ ]
+ }
+```
+
+| Element name | Description |
+||--|
+| `purchasedDate` | The date when the MACC was purchased. |
+| `status` | The status of your commitment. |
+| `originalAmount` | The original commitment amount. |
+| `closedBalance` | The remaining commitment since the last invoice. |
+| `source` | For MACC, the source will always be ConsumptionCommitment. |
+| `startDate` | The date when the MACC became active. |
+| `expirationDate` | The date when the MACC expires. |
+
+Your MACC can have one of the following statuses:
+
+- Active: MACC is active. Any eligible spend will contribute towards your MACC commitment.
+- Completed: You've completed your MACC commitment.
+- Expired: MACC is expired. Contact your Microsoft Account team for more information.
+- Canceled: MACC is canceled. New Azure spend won't contribute towards your MACC commitment.
+
+### Get events that affected MACC commitment
+
+Make the following request, replacing `<billingAccountName>` with the `name` copied in the first step (`5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx_xxxx-xx-xx`). You need to pass a **startDate** and an **endDate** to get events for the required duration.
+
+```json
+GET https://management.azure.com/providers/Microsoft.Billing/billingAccounts/<billingAccountName>/providers/Microsoft.Consumption/events?api-version=2021-05-01&startDate=<startDate>&endDate=<endDate>&$filter=lotsource%20eq%20%27ConsumptionCommitment%27
+```
+
+The API response returns all events that affected your MACC commitment.
+
+```json
+{
+ "value": [
+ {
+ "id": "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx_xxxx-xx-xx/providers/Microsoft.Consumption/events/103axxxx-2c25-7xx3-f2a0-ad9a3f1c91xx",
+ "name": "103axxxx-2c25-7xx3-f2a0-ad9a3f1c91xx",
+ "type": "Microsoft.Consumption/events",
+ "eTag": null,
+ "properties": {
+ "billingProfileId": "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx_xxxx-xx-xx/billingProfiles/SWFF-DVM4-XXX-XXX",
+ "billingProfileDisplayName": "Finance",
+ "lotId": "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx_xxxx-xx-xx/providers/Microsoft.Consumption/lots/G2021032459206000XXXX",
+ "lotSource": "ConsumptionCommitment",
+ "transactionDate": "2021-05-05T00:09:13.0000000Z",
+ "description": "Balance after invoice T00075XXXX",
+ "charges": {
+ "currency": "USD",
+ "value": -100.0
+ },
+ "closedBalance": {
+ "currency": "USD",
+ "value": 9899.71
+ },
+ "eventType": "SettledCharges",
+ "invoiceNumber": "T00075XXXX"
+ }
+ },
+ {
+ "id": "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx_xxxx-xx-xx/providers/Microsoft.Consumption/events/203axxxx-2c25-7xx3-f2a0-ad9a3f1c91xx",
+ "name": "203axxxx-2c25-7xx3-f2a0-ad9a3f1c91xx",
+ "type": "Microsoft.Consumption/events",
+ "eTag": null,
+ "properties": {
+ "billingProfileId": "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx_xxxx-xx-xx/billingProfiles/SWFF-DVM4-XXX-XXX",
+ "billingProfileDisplayName": "Engineering",
+ "lotId": "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx_xxxx-xx-xx/providers/Microsoft.Consumption/lots/G2021032459206000XXXX",
+ "lotSource": "ConsumptionCommitment",
+ "transactionDate": "2021-04-05T00:09:13.0000000Z",
+ "description": "Balance after invoice T00074XXXX",
+ "charges": {
+ "currency": "USD",
+ "value": -0.29
+ },
+ "closedBalance": {
+ "currency": "USD",
+ "value": 9999.71
+ },
+ "eventType": "SettledCharges",
+ "invoiceNumber": "T00074XXXX"
+ }
+ }
+ ]
+}
+
+```
+| Element name | Description |
+||--|
+| `billingProfileId` | The unique identifier for the billing profile for which the event happened. |
+| `billingProfileDisplayName` | The display name for the billing profile for which the event happened. |
+| `lotId` | The unique identifier for the MACC. |
+| `lotSource` | It will be ConsumptionCommitment for MACC. |
+| `transactionDate` | The date when the event happened. |
+| `description` | The description of the event. |
+| `charges` | The amount of MACC decrement. |
+| `closedBalance` | The balance after the event. |
+| `eventType` | Only SettledCharges events are supported for MACC. |
+| `invoiceNumber` | The unique ID of the invoice whose charges decremented MACC. |
+++
+## Azure Services and Marketplace Offers that are eligible for MACC
+
+You can determine which Azure services and Marketplace offers are eligible for MACC decrement in the Azure portal. For more information, see [Determine which offers are eligible for Azure consumption commitments (MACC/CtC)](/marketplace/azure-consumption-commitment-benefit#determine-which-offers-are-eligible-for-azure-consumption-commitments-maccctc).
+
+## Azure Credits and MACC
+
+If your organization received Azure credits from Microsoft, the consumption or purchases that are covered by credits won't contribute towards your MACC commitment.
+
+If your organization purchased Azure Prepayment, the consumption or purchases that are covered by the Prepayment won't contribute towards your MACC commitment. However, the actual Prepayment purchase itself will decrement your MACC commitment.
+
+For example, Contoso made a MACC commitment of $50,000 in May. In June, they purchased an Azure Prepayment of $10,000. The purchase will decrement their MACC commitment and the remaining commitment will be $40,000. In June, Contoso consumed $10,000 of Azure Prepayment-eligible services. The service charges will be covered by their Azure Prepayment; however, the service charges won't decrement their MACC commitment. Once the Azure Prepayment is fully used, all Azure service consumption and other eligible purchases will decrement their MACC commitment.
+
+## Check access to a Microsoft Customer Agreement
+
+## Need help? Contact support.
+
+If you need help, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to get your issue resolved quickly.
+
+## Next steps
+
+- [Determine which offers are eligible for Azure consumption commitments (MACC/CTC)](/marketplace/azure-consumption-commitment-benefit#determine-which-offers-are-eligible-for-azure-consumption-commitments-maccctc)
+- [Track your Azure credits balance](mca-check-azure-credits-balance.md)
cost-management-billing Prepare Buy Reservation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/reservations/prepare-buy-reservation.md
You have three options to scope a reservation, depending on your needs:
- **Single resource group scope** - Applies the reservation discount to the matching resources in the selected resource group only. - **Single subscription scope** - Applies the reservation discount to the matching resources in the selected subscription.-- **Shared scope** - Applies the reservation discount to matching resources in eligible subscriptions that are in the billing context.
+- **Shared scope** - Applies the reservation discount to matching resources in eligible subscriptions that are in the billing context. If a subscription is moved to a different billing context, the benefit no longer applies to that subscription but continues to apply to the other subscriptions in the billing context.
- For Enterprise Agreement customers, the billing context is the enrollment. The reservation shared scope would include multiple Active Directory tenants in an enrollment. - For Microsoft Customer Agreement customers, the billing scope is the billing profile. - For individual subscriptions with pay-as-you-go rates, the billing scope is all eligible subscriptions created by the account administrator.
data-factory Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-data-tool.md
Previously updated : 06/01/2021 Last updated : 06/04/2021 # Copy Data tool in Azure Data Factory
The following table provides guidance on when to use the Copy Data tool vs. per-
| You want to easily build a data loading task without learning about Azure Data Factory entities (linked services, datasets, pipelines, etc.) | You want to implement complex and flexible logic for loading data into lake. | | You want to quickly load a large number of data artifacts into a data lake. | You want to chain Copy activity with subsequent activities for cleansing or processing data. |
-To start the Copy Data tool, click the **Copy Data** tile on the home page of your data factory.
+To start the Copy Data tool, click the **Ingest** tile on the home page of your data factory.
-![Get started page - link to Copy Data tool](./media/doc-common-process/get-started-page.png)
+![Screenshot that shows the home page - link to Copy Data tool.](./media/doc-common-process/get-started-page.png)
## Intuitive flow for loading data into a data lake
data-factory Create Azure Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-azure-integration-runtime.md
Title: Create Azure integration runtime in Azure Data Factory
description: Learn how to create Azure integration runtime in Azure Data Factory, which is used to copy data and dispatch transform activities. Previously updated : 06/09/2020 Last updated : 06/04/2021
You can configure an existing Azure IR to change its location using the Set-AzDa
### Create an Azure IR via Azure Data Factory UI Use the following steps to create an Azure IR using Azure Data Factory UI.
-1. On the **Let's get started** page of Azure Data Factory UI, select the [Manage tab](./author-management-hub.md) from the leftmost pane.
+1. On the home page of Azure Data Factory UI, select the [Manage tab](./author-management-hub.md) from the leftmost pane.
![The home page Manage button](media/doc-common-process/get-started-page-manage-button.png)
data-factory Create Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-azure-ssis-integration-runtime.md
Title: Create an Azure-SSIS integration runtime in Azure Data Factory
description: Learn how to create an Azure-SSIS integration runtime in Azure Data Factory so you can deploy and run SSIS packages in Azure. Previously updated : 04/09/2021 Last updated : 06/04/2021
After your data factory is created, open its overview page in the Azure portal.
### Provision an Azure-SSIS integration runtime
-On the **Let's get started** page, select the **Configure SSIS Integration** tile to open the **Integration runtime setup** pane.
+On the home page, select the **Configure SSIS** tile to open the **Integration runtime setup** pane.
- ![Configure SSIS Integration Runtime tile](./media/tutorial-create-azure-ssis-runtime-portal/configure-ssis-integration-runtime-tile.png)
+ ![Screenshot that shows the ADF home page.](./media/doc-common-process/get-started-page.png)
The **Integration runtime setup** pane has three pages where you successively configure general, deployment, and advanced settings.
data-factory Create Self Hosted Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-self-hosted-integration-runtime.md
To create and set up a self-hosted integration runtime, use the following proced
Use the following steps to create a self-hosted IR using Azure Data Factory UI.
-1. On the **Let's get started** page of Azure Data Factory UI, select the [Manage tab](./author-management-hub.md) from the leftmost pane.
+1. On the home page of Azure Data Factory UI, select the [Manage tab](./author-management-hub.md) from the leftmost pane.
:::image type="content" source="media/doc-common-process/get-started-page-manage-button.png" alt-text="The home page Manage button":::
data-factory Data Flow Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-create.md
Previously updated : 02/12/2019 Last updated : 06/04/2021 # Create Azure Data Factory Data Flow
Get started by first creating a new V2 Data Factory from the Azure portal. After
![Screenshot shows the New data factory pane with V2 selected for Version.](media/data-flow/v2portal.png "data flow create")
-Once you are in the Data Factory UI, you can use sample Data Flows. The samples are available from the ADF Template Gallery. In ADF, create "Pipeline from Template" and select the Data Flow category from the template gallery.
+Once you are in the Data Factory UI, you can use sample Data Flows. The samples are available from the ADF Template Gallery. In ADF, select the "Pipeline templates" tile in the 'Discover more' section of the homepage, and select the Data Flow category from the template gallery.
![Screenshot shows the Data Flow tab with Transform data using data flow selected.](media/data-flow/template.png "data flow create")
data-factory How To Invoke Ssis Package Ssis Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-invoke-ssis-package-ssis-activity.md
Previously updated : 07/20/2020 Last updated : 06/04/2021 # Run an SSIS package with the Execute SSIS Package activity in Azure Data Factory
In this step, you use the Data Factory UI or app to create a pipeline. You add a
![Data Factory home page](./media/how-to-invoke-ssis-package-stored-procedure-activity/data-factory-home-page.png)
- On the **Let's get started** page, select **Create pipeline**.
+ On the home page, select **Orchestrate**.
- ![Get started page](./media/how-to-invoke-ssis-package-stored-procedure-activity/get-started-page.png)
+ ![Screenshot that shows the ADF home page.](./media/doc-common-process/get-started-page.png)
1. In the **Activities** toolbox, expand **General**. Then drag an **Execute SSIS Package** activity to the pipeline designer surface.
data-factory How To Invoke Ssis Package Stored Procedure Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-invoke-ssis-package-stored-procedure-activity.md
ms.devlang: powershell Previously updated : 07/09/2020 Last updated : 06/04/2021
First step is to create a data factory by using the Azure portal.
### Create a pipeline with stored procedure activity In this step, you use the Data Factory UI to create a pipeline. You add a stored procedure activity to the pipeline and configure it to run the SSIS package by using the sp_executesql stored procedure.
-1. In the get started page, click **Create pipeline**:
+1. In the home page, click **Orchestrate**:
+
+ ![Screenshot that shows the ADF home page.](./media/doc-common-process/get-started-page.png)
- ![Get started page](./media/how-to-invoke-ssis-package-stored-procedure-activity/get-started-page.png)
2. In the **Activities** toolbox, expand **General**, and drag-drop **Stored Procedure** activity to the pipeline designer surface. ![Drag-and-drop stored procedure activity](./media/how-to-invoke-ssis-package-stored-procedure-activity/drag-drop-sproc-activity.png)
data-factory How To Schedule Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-schedule-azure-ssis-integration-runtime.md
description: This article describes how to schedule the starting and stopping of
ms.devlang: powershell Previously updated : 07/09/2020 Last updated : 06/04/2021
If you create a third trigger that is scheduled to run daily at midnight and ass
### Create your pipelines
-1. In **Let's get started** page, select **Create pipeline**.
+1. In the home page, select **Orchestrate**.
- ![Get started page](./media/how-to-schedule-azure-ssis-integration-runtime/get-started-page.png)
+ ![Screenshot that shows the ADF home page.](./media/doc-common-process/get-started-page.png)
2. In **Activities** toolbox, expand **General** menu, and drag & drop a **Web** activity onto the pipeline designer surface. In **General** tab of the activity properties window, change the activity name to **startMyIR**. Switch to **Settings** tab, and do the following actions.
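The Web activity's **Settings** typically call the Data Factory REST API, which exposes start and stop operations for the Azure-SSIS IR. As a rough Azure CLI sketch of the same call (the resource path and `api-version` follow the standard Data Factory REST API, and the placeholder names are hypothetical):

```azurecli
# Start the Azure-SSIS integration runtime by calling the Data Factory REST API.
az rest --method post \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DataFactory/factories/<factory-name>/integrationRuntimes/<ir-name>/start?api-version=2018-06-01"

# A second Web activity can call the matching .../stop endpoint to stop the IR.
```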
data-factory Lab Data Flow Data Share https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/lab-data-flow-data-share.md
Previously updated : 04/16/2021 Last updated : 06/04/2021 # Data integration using Azure Data Factory and Azure Data Share
In Azure Data Factory linked services define the connection information to exter
![Portal 3](media/lab-data-flow-data-share/portal3.png) 1. You'll be redirected to the homepage of the ADF UX. This page contains quick-starts, instructional videos, and links to tutorials to learn data factory concepts. To start authoring, click on the pencil icon in left side-bar.
- ![Portal configure](media/lab-data-flow-data-share/configure1.png)
+ ![Portal configure](./media/doc-common-process/get-started-page-author-button.png)
### Create an Azure SQL Database linked service
data-factory Load Azure Data Lake Storage Gen2 From Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/load-azure-data-lake-storage-gen2-from-gen1.md
Previously updated : 02/18/2021 Last updated : 06/04/2021 # Copy data from Azure Data Lake Storage Gen1 to Gen2 with Azure Data Factory
This article shows you how to use the Data Factory copy data tool to copy data f
## Load data into Azure Data Lake Storage Gen2
-1. On the **Get started** page, select the **Copy Data** tile to launch the copy data tool.
+1. On the home page, select the **Ingest** tile to launch the copy data tool.
- ![Copy data tool tile](./media/load-azure-data-lake-storage-gen2-from-gen1/copy-data-tool-tile.png)
+ ![Screenshot that shows the ADF home page.](./media/doc-common-process/get-started-page.png )
2. On the **Properties** page, specify **CopyFromADLSGen1ToGen2** for the **Task name** field. Select **Next**. ![Properties page](./media/load-azure-data-lake-storage-gen2-from-gen1/copy-data-tool-properties-page.png)
data-factory Load Azure Data Lake Storage Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/load-azure-data-lake-storage-gen2.md
Previously updated : 02/18/2021 Last updated : 06/04/2021 # Load data into Azure Data Lake Storage Gen2 with Azure Data Factory
This article shows you how to use the Data Factory Copy Data tool to load data f
## Load data into Azure Data Lake Storage Gen2
-1. In the **Get started** page, select the **Copy Data** tile to launch the Copy Data tool.
+1. On the home page of Azure Data Factory, select the **Ingest** tile to launch the Copy Data tool.
2. In the **Properties** page, specify **CopyFromAmazonS3ToADLS** for the **Task name** field, and select **Next**.
data-factory Load Azure Data Lake Store https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/load-azure-data-lake-store.md
Previously updated : 02/18/2021 Last updated : 06/04/2021 # Load data into Azure Data Lake Storage Gen1 by using Azure Data Factory
This article shows you how to use the Data Factory Copy Data tool to _load data
## Load data into Data Lake Storage Gen1
-1. In the **Get started** page, select the **Copy Data** tile to launch the Copy Data tool:
+1. On the home page, select the **Ingest** tile to launch the Copy Data tool:
- ![Copy Data tool tile](./media/load-data-into-azure-data-lake-store/copy-data-tool-tile.png)
+ ![Screenshot that shows the ADF home page.](./media/doc-common-process/get-started-page.png)
2. In the **Properties** page, specify **CopyFromAmazonS3ToADLS** for the **Task name** field, and select **Next**: ![Properties page](./media/load-data-into-azure-data-lake-store/copy-data-tool-properties-page.png)
data-factory Load Azure Sql Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/load-azure-sql-data-warehouse.md
Previously updated : 01/29/2020 Last updated : 06/04/2021 # Load data into Azure Synapse Analytics by using Azure Data Factory
This article shows you how to use the Data Factory Copy Data tool to _load data
## Load data into Azure Synapse Analytics
-1. In the **Get started** page, select the **Copy Data** tile to launch the Copy Data tool.
+1. On the home page of Azure Data Factory, select the **Ingest** tile to launch the Copy Data tool.
2. In the **Properties** page, specify **CopyFromSQLToSQLDW** for the **Task name** field, and select **Next**.
data-factory Load Office 365 Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/load-office-365-data.md
description: 'Use Azure Data Factory to copy data from Office 365'
Previously updated : 02/18/2021 Last updated : 06/04/2021
This article shows you how to use the Data Factory _load data from Office 365 in
## Create a pipeline
-1. On the "Let's get started" page, select **Create pipeline**.
+1. On the home page, select **Orchestrate**.
- ![Create pipeline](./media/load-office-365-data/create-pipeline-entry.png)
+ ![Screenshot that shows the ADF home page.](./media/doc-common-process/get-started-page.png)
2. In the **General tab** for the pipeline, enter "CopyPipeline" for **Name** of the pipeline.
data-factory Load Sap Bw Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/load-sap-bw-data.md
Previously updated : 05/22/2019 Last updated : 06/04/2021 # Copy data from SAP Business Warehouse by using Azure Data Factory
This article shows how to use Azure Data Factory to copy data from SAP Business
In the Azure portal, go to your data factory. Select **Author & Monitor** to open the Data Factory UI in a separate tab.
-1. On the **Let's get started** page, select **Copy Data** to open the Copy Data tool.
+1. On the home page, select **Ingest** to open the Copy Data tool.
2. On the **Properties** page, specify a **Task name**, and then select **Next**.
Incremental copy uses a "high-watermark" mechanism that's based on the **request
![Incremental copy workflow flow chart](media/load-sap-bw-data/incremental-copy-workflow.png)
-On the data factory **Let's get started** page, select **Create pipeline from template** to use the built-in template.
+On the data factory home page, select **Pipeline templates** in the **Discover more** section to use the built-in template.
1. Search for **SAP BW** to find and select the **Incremental copy from SAP BW to Azure Data Lake Storage Gen2** template. This template copies data into Azure Data Lake Storage Gen2. You can use a similar workflow to copy to other sink types.
data-factory Quickstart Create Data Factory Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-copy-data-tool.md
Previously updated : 06/01/2021 Last updated : 06/04/2021 # Quickstart: Use the Copy Data tool to copy data
In this quickstart, you use the Azure portal to create a data factory. Then, you
## Start the Copy Data tool
-1. On the **Let's get started** page, select the **Copy Data** tile to start the Copy Data tool.
+1. On the home page of Azure Data Factory, select the **Ingest** tile to start the Copy Data tool.
- !["Copy Data" tile](./media/doc-common-process/get-started-page.png)
+ ![Screenshot that shows the Azure Data Factory home page.](./media/doc-common-process/get-started-page.png)
1. On the **Properties** page of the Copy Data tool, choose **Built-in copy task** under **Task type**, then select **Next**.
data-factory Solution Templates Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/solution-templates-introduction.md
Previously updated : 01/04/2019 Last updated : 06/04/2021 # Templates
Templates are predefined Azure Data Factory pipelines that allow you to get star
You can get started creating a Data Factory pipeline from a template in the following two ways:
-1. Select **Create pipeline from template** on the Overview page to open the template gallery.
+1. Select **Pipeline templates** in the **Discover more** section of the Data Factory home page to open the template gallery.
- ![Open the template gallery from the Overview page](media/solution-templates-introduction/templates-intro-image1.png)
+ ![Open the template gallery from the Overview page](media/doc-common-process/home-page-pipeline-templates-tile.png)
1. On the Author tab in Resource Explorer, select **+**, then **Pipeline from template** to open the template gallery.
data-factory Source Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/source-control.md
Previously updated : 02/26/2021 Last updated : 06/04/2021 # Source control in Azure Data Factory
There are four different ways to connect a Git repository to your data factory f
### Configuration method 1: Home page
-In the Azure Data Factory home page, select **Set up code repository**.
+On the Azure Data Factory home page, select **Set up code repository** at the top.
-![Configure a code repository from home page](media/author-visually/configure-repo.png)
+![Configure a code repository from home page](media/doc-common-process/set-up-code-repository.png)
### Configuration method 2: Authoring canvas
data-factory Transform Data Using Databricks Notebook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/transform-data-using-databricks-notebook.md
Previously updated : 03/12/2018 Last updated : 06/07/2021 # Run a Databricks notebook with the Databricks Notebook Activity in Azure Data Factory
In this section, you author a Databricks linked service. This linked service con
### Create an Azure Databricks linked service
-1. On the **Let's get started** page, switch to the **Edit** tab in the left panel.
+1. On the home page, switch to the **Manage** tab in the left panel.
- ![Edit the new linked service](media/transform-data-using-databricks-notebook/get-started-page.png)
+ ![Edit the new linked service](media/doc-common-process/get-started-page-manage-button.png)
1. Select **Connections** at the bottom of the window, and then select **+ New**.
data-factory Tutorial Control Flow Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-control-flow-portal.md
Previously updated : 01/11/2018 Last updated : 06/07/2021 # Branching and chaining activities in an Azure Data Factory pipeline using the Azure portal
In this step, you create a pipeline with one Copy activity and two Web activitie
- Connecting one activity with another activity (on success and failure) - Using output from an activity as an input to the subsequent activity
-1. In the **get started** page of Data Factory UI, click the **Create pipeline** tile.
+1. On the home page of the Data Factory UI, click the **Orchestrate** tile.
- ![Get started page](./media/tutorial-control-flow-portal/get-started-page.png)
+ ![Screenshot that shows the ADF home page.](./media/doc-common-process/get-started-page.png)
3. In the properties window for the pipeline, switch to the **Parameters** tab, and use the **New** button to add the following three parameters of type String: sourceBlobContainer, sinkBlobContainer, and receiver. - **sourceBlobContainer** - parameter in the pipeline consumed by the source blob dataset.
data-factory Tutorial Copy Data Portal Private https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-copy-data-portal-private.md
Previously updated : 04/14/2021 Last updated : 06/04/2021
In this step, you create a pipeline with a copy activity in the data factory. Th
In this tutorial, you start by creating a pipeline. Then you create linked services and datasets when you need them to configure the pipeline.
-1. On the **Let's get started** page, select **Create pipeline**.
+1. On the home page, select **Orchestrate**.
- ![Screenshot that shows creating a pipeline.](./media/doc-common-process/get-started-page.png)
+ ![Screenshot that shows the ADF home page.](./media/doc-common-process/get-started-page.png)
1. In the properties pane for the pipeline, enter **CopyPipeline** for the pipeline name. 1. In the **Activities** tool box, expand the **Move and Transform** category, and drag the **Copy data** activity from the tool box to the pipeline designer surface. Enter **CopyFromBlobToSql** for the name.
data-factory Tutorial Copy Data Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-copy-data-portal.md
Previously updated : 02/18/2021 Last updated : 06/04/2021
In this step, you create a pipeline with a copy activity in the data factory. Th
In this tutorial, you start with creating the pipeline. Then you create linked services and datasets when you need them to configure the pipeline.
-1. On the **Let's get started** page, select **Create pipeline**.
+1. On the home page, select **Orchestrate**.
- ![Create pipeline](./media/doc-common-process/get-started-page.png)
+ ![Screenshot that shows the ADF home page.](./media/doc-common-process/get-started-page.png)
1. In the General panel under **Properties**, specify **CopyPipeline** for **Name**. Then collapse the panel by clicking the Properties icon in the top-right corner.
data-factory Tutorial Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-copy-data-tool.md
Previously updated : 02/18/2021 Last updated : 06/04/2021 # Copy data from Azure Blob storage to a SQL Database by using the Copy Data tool
Prepare your Blob storage and your SQL Database for the tutorial by performing t
## Use the Copy Data tool to create a pipeline
-1. On the **Let's get started** page, select the **Copy Data** tile to launch the Copy Data tool.
+1. On the home page of Azure Data Factory, select the **Ingest** tile to launch the Copy Data tool.
- ![Copy Data tool tile](./media/doc-common-process/get-started-page.png)
+ ![Screenshot that shows the Azure Data Factory home page.](./media/doc-common-process/get-started-page.png)
1. On the **Properties** page, under **Task name**, enter **CopyFromBlobToSqlPipeline**. Then select **Next**. The Data Factory UI creates a pipeline with the specified task name.
data-factory Tutorial Data Flow Delta Lake https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-data-flow-delta-lake.md
Previously updated : 04/16/2021 Last updated : 06/04/2021 # Transform data in delta lake using mapping data flows
In this step, you create a data factory and open the Data Factory UX to create a
In this step, you'll create a pipeline that contains a data flow activity.
-1. On the **Let's get started** page, select **Create pipeline**.
+1. On the home page, select **Orchestrate**.
- ![Create pipeline](./media/doc-common-process/get-started-page.png)
+ ![Screenshot that shows the ADF home page.](./media/doc-common-process/get-started-page.png)
1. In the **General** tab for the pipeline, enter **DeltaLake** for **Name** of the pipeline. 1. In the **Activities** pane, expand the **Move and Transform** accordion. Drag and drop the **Data Flow** activity from the pane to the pipeline canvas.
data-factory Tutorial Data Flow Private https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-data-flow-private.md
Previously updated : 04/14/2021 Last updated : 06/04/2021 # Transform data securely by using mapping data flow
In this step, you create an Azure IR and enable Data Factory Managed Virtual Net
In this step, you'll create a pipeline that contains a data flow activity.
-1. On the **Let's get started** page, select **Create pipeline**.
+1. On the home page of Azure Data Factory, select **Orchestrate**.
![Screenshot that shows creating a pipeline.](./media/doc-common-process/get-started-page.png)
data-factory Tutorial Data Flow Write To Lake https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-data-flow-write-to-lake.md
Previously updated : 04/01/2021 Last updated : 06/04/2021 # Best practices for writing to files to data lake with data flows
In this step, you create a data factory and open the Data Factory UX to create a
In this step, you'll create a pipeline that contains a data flow activity.
-1. On the **Let's get started** page, select **Create pipeline**.
+1. On the home page of Azure Data Factory, select **Orchestrate**.
- ![Create pipeline](./media/doc-common-process/get-started-page.png)
+ ![Screenshot that shows the ADF home page.](./media/doc-common-process/get-started-page.png)
1. In the **General** tab for the pipeline, enter **DeltaLake** for **Name** of the pipeline. 1. In the factory top bar, slide the **Data Flow debug** slider on. Debug mode allows for interactive testing of transformation logic against a live Spark cluster. Data Flow clusters take 5-7 minutes to warm up and users are recommended to turn on debug first if they plan to do Data Flow development. For more information, see [Debug Mode](concepts-data-flow-debug-mode.md).
data-factory Tutorial Data Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-data-flow.md
Previously updated : 04/16/2021 Last updated : 06/04/2021 # Transform data using mapping data flows
In this step, you create a data factory and open the Data Factory UX to create a
In this step, you'll create a pipeline that contains a Data Flow activity.
-1. On the **Let's get started** page, select **Create pipeline**.
+1. On the home page of Azure Data Factory, select **Orchestrate**.
- ![Create pipeline](./media/doc-common-process/get-started-page.png)
+ ![Screenshot that shows the ADF home page.](./media/doc-common-process/get-started-page.png)
1. In the **General** tab for the pipeline, enter **TransformMovies** for **Name** of the pipeline. 1. In the **Activities** pane, expand the **Move and Transform** accordion. Drag and drop the **Data Flow** activity from the pane to the pipeline canvas.
data-factory Tutorial Deploy Ssis Packages Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-deploy-ssis-packages-azure.md
description: Learn how to provision the Azure-SSIS integration runtime in Azure
Previously updated : 04/02/2021 Last updated : 06/04/2021
After your data factory is created, open its overview page in the Azure portal.
### From the Data Factory overview
-1. On the **Let's get started** page, select the **Configure SSIS Integration** tile.
+1. On the home page, select the **Configure SSIS** tile.
- !["Configure SSIS Integration Runtime" tile](./media/tutorial-create-azure-ssis-runtime-portal/configure-ssis-integration-runtime-tile.png)
+ ![Screenshot that shows the Azure Data Factory home page.](./media/doc-common-process/get-started-page.png)
1. For the remaining steps to set up an Azure-SSIS IR, see the [Provision an Azure-SSIS integration runtime](#provision-an-azure-ssis-integration-runtime) section.
data-factory Tutorial Hybrid Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-hybrid-copy-data-tool.md
Previously updated : 03/26/2021 Last updated : 06/04/2021 # Copy data from a SQL Server database to Azure Blob storage by using the Copy Data tool
You use the name and key of your storage account in this tutorial. To get the na
## Use the Copy Data tool to create a pipeline
-1. On the **Let's get started** page, select **Copy Data** to launch the Copy Data tool.
+1. On the Azure Data Factory home page, select **Ingest** to launch the Copy Data tool.
- ![Get started page](./media/doc-common-process/get-started-page.png)
+ ![Screenshot that shows the Azure Data Factory home page.](./media/doc-common-process/get-started-page.png)
1. On the **Properties** page of the Copy Data tool, under **Task name**, enter **CopyFromOnPremSqlToAzureBlobPipeline**. Then select **Next**. The Copy Data tool creates a pipeline with the name you specify for this field. ![Task name](./media/tutorial-hybrid-copy-data-tool/properties-page.png)
data-factory Tutorial Hybrid Copy Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-hybrid-copy-portal.md
Previously updated : 02/18/2021 Last updated : 06/04/2021 # Copy data from a SQL Server database to Azure Blob storage
In this step, you create a data factory and start the Data Factory UI to create
## Create a pipeline
-1. On the **Let's get started** page, select **Create pipeline**. A pipeline is automatically created for you. You see the pipeline in the tree view, and its editor opens.
+1. On the Azure Data Factory home page, select **Orchestrate**. A pipeline is automatically created for you. You see the pipeline in the tree view, and its editor opens.
- ![Let's get started page](./media/doc-common-process/get-started-page.png)
+ ![Screenshot that shows the Azure Data Factory home page.](./media/doc-common-process/get-started-page.png)
1. In the General panel under **Properties**, specify **SQLServerToBlobPipeline** for **Name**. Then collapse the panel by clicking the Properties icon in the top-right corner.
data-factory Tutorial Incremental Copy Change Data Capture Feature Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-incremental-copy-change-data-capture-feature-portal.md
Previously updated : 02/18/2021 Last updated : 06/07/2021 # Incrementally load data from Azure SQL Managed Instance to Azure Storage using change data capture (CDC)
If you don't have an Azure subscription, create a [free](https://azure.microsoft
![Screenshot shows the data factory that you deployed.](./media/tutorial-incremental-copy-change-data-capture-feature-portal/data-factory-home-page.png) 10. Click the **Author & Monitor** tile to launch the Azure Data Factory user interface (UI) in a separate tab.
-11. In the **get started** page, switch to the **Edit** tab in the left panel as shown in the following image:
+11. On the home page, switch to the **Manage** tab in the left panel as shown in the following image:
- ![Create pipeline button](./media/tutorial-incremental-copy-change-data-capture-feature-portal/get-started-page.png)
+ ![Screenshot that shows the Manage button.](media/doc-common-process/get-started-page-manage-button.png)
## Create linked services You create linked services in a data factory to link your data stores and compute services to the data factory. In this section, you create linked services to your Azure Storage account and Azure SQL MI.
data-factory Tutorial Incremental Copy Change Tracking Feature Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-incremental-copy-change-tracking-feature-portal.md
Previously updated : 02/18/2021 Last updated : 06/07/2021 # Incrementally load data from Azure SQL Database to Azure Blob Storage using change tracking information using the Azure portal
Install the latest Azure PowerShell modules by following instructions in [How t
![Data factory home page](./media/tutorial-incremental-copy-change-tracking-feature-portal/data-factory-home-page.png) 10. Click the **Author & Monitor** tile to launch the Azure Data Factory user interface (UI) in a separate tab.
-11. In the **get started** page, switch to the **Edit** tab in the left panel as shown in the following image:
+11. On the home page, switch to the **Manage** tab in the left panel as shown in the following image:
- ![Create pipeline button](./media/tutorial-incremental-copy-change-tracking-feature-portal/get-started-page.png)
+ ![Screenshot that shows the Manage button.](media/doc-common-process/get-started-page-manage-button.png)
## Create linked services You create linked services in a data factory to link your data stores and compute services to the data factory. In this section, you create linked services to your Azure Storage account and your database in Azure SQL Database.
data-factory Tutorial Incremental Copy Lastmodified Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-incremental-copy-lastmodified-copy-data-tool.md
Previously updated : 02/18/2021 Last updated : 06/04/2021 # Incrementally copy new and changed files based on LastModifiedDate by using the Copy Data tool
Prepare your Blob storage for the tutorial by completing these steps:
## Use the Copy Data tool to create a pipeline
-1. On the **Let's get started** page, select the **Copy Data** tile to open the Copy Data tool:
+1. On the Azure Data Factory home page, select the **Ingest** tile to open the Copy Data tool:
- ![Copy Data tile](./media/doc-common-process/get-started-page.png)
+ ![Screenshot that shows the ADF home page.](./media/doc-common-process/get-started-page.png)
2. On the **Properties** page, take the following steps:
data-factory Tutorial Incremental Copy Multiple Tables Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-incremental-copy-multiple-tables-portal.md
Previously updated : 02/18/2021 Last updated : 06/04/2021 # Incrementally load data from multiple tables in SQL Server to a database in Azure SQL Database using the Azure portal
END
## Create self-hosted integration runtime As you are moving data from a data store in a private network (on-premises) to an Azure data store, install a self-hosted integration runtime (IR) in your on-premises environment. The self-hosted IR moves data between your private network and Azure.
-1. On the **Let's get started** page of Azure Data Factory UI, select the [Manage tab](./author-management-hub.md) from the leftmost pane.
+1. On the home page of Azure Data Factory UI, select the [Manage tab](./author-management-hub.md) from the leftmost pane.
![The home page Manage button](media/doc-common-process/get-started-page-manage-button.png)
data-factory Tutorial Incremental Copy Multiple Tables Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-incremental-copy-multiple-tables-powershell.md
Previously updated : 02/18/2021 Last updated : 06/04/2021 # Incrementally load data from multiple tables in SQL Server to Azure SQL Database using PowerShell
The pipeline takes a list of table names as a parameter. The **ForEach activity*
4. On the **Data factory** page, select **Author & Monitor** to launch Azure Data Factory in a separate tab.
-5. On the **Let's get started** page, select **Monitor** on the left side.
-![Screenshot shows the Let's get started page for Azure Data Factory.](media/doc-common-process/get-started-page-monitor-button.png)
+5. On the Azure Data Factory home page, select **Monitor** on the left side.
+
+ ![Screenshot shows the home page for Azure Data Factory.](media/doc-common-process/get-started-page-monitor-button.png)
6. You can see all the pipeline runs and their status. Notice that in the following example, the status of the pipeline run is **Succeeded**. To check parameters passed to the pipeline, select the link in the **Parameters** column. If an error occurred, you see a link in the **Error** column.
data-factory Tutorial Incremental Copy Partitioned File Name Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-incremental-copy-partitioned-file-name-copy-data-tool.md
Previously updated : 02/18/2021 Last updated : 06/04/2021 # Incrementally copy new files based on time partitioned file name by using the Copy Data tool
Prepare your Blob storage for the tutorial by performing these steps.
## Use the Copy Data tool to create a pipeline
-1. On the **Let's get started** page, select the **Copy Data** title to launch the Copy Data tool.
+1. On the Azure Data Factory home page, select the **Ingest** tile to launch the Copy Data tool.
- ![Copy Data tool tile](./media/doc-common-process/get-started-page.png)
+ ![Screenshot that shows the ADF home page.](./media/doc-common-process/get-started-page.png)
2. On the **Properties** page, take the following steps:
data-factory Tutorial Incremental Copy Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-incremental-copy-portal.md
Previously updated : 02/18/2021 Last updated : 06/04/2021 # Incrementally load data from Azure SQL Database to Azure Blob storage using the Azure portal
END
## Create a pipeline In this tutorial, you create a pipeline with two Lookup activities, one Copy activity, and one StoredProcedure activity chained in one pipeline.
-1. In the **get started** page of Data Factory UI, click the **Create pipeline** tile.
+1. On the home page of Data Factory UI, click the **Orchestrate** tile.
- ![Get started page of Data Factory UI](./media/doc-common-process/get-started-page.png)
+ ![Screenshot that shows the home page of Data Factory UI.](./media/doc-common-process/get-started-page.png)
3. In the General panel under **Properties**, specify **IncrementalCopyPipeline** for **Name**. Then collapse the panel by clicking the Properties icon in the top-right corner. 4. Let's add the first lookup activity to get the old watermark value. In the **Activities** toolbox, expand **General**, and drag-drop the **Lookup** activity to the pipeline designer surface. Change the name of the activity to **LookupOldWaterMarkActivity**.
data-factory Tutorial Transform Data Hive Virtual Network Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-transform-data-hive-virtual-network-portal.md
Previously updated : 01/04/2018 Last updated : 06/07/2021 # Transform data in Azure Virtual Network using Hive activity in Azure Data Factory using the Azure portal
If you don't have an Azure subscription, create a [free](https://azure.microsoft
![Data factory home page](./media/tutorial-transform-data-using-hive-in-vnet-portal/data-factory-home-page.png) 10. Click **Author & Monitor** to launch the Data Factory User Interface (UI) in a separate tab.
-11. In the **get started** page, switch to the **Edit** tab in the left panel as shown in the following image:
+11. On the home page, switch to the **Manage** tab in the left panel as shown in the following image:
- ![Edit tab](./media/tutorial-transform-data-using-hive-in-vnet-portal/get-started-page.png)
+ ![Screenshot that shows the Manage tab.](media/doc-common-process/get-started-page-manage-button.png)
## Create a self-hosted integration runtime As the Hadoop cluster is inside a virtual network, you need to install a self-hosted integration runtime (IR) in the same virtual network. In this section, you create a new VM, join it to the same virtual network, and install self-hosted IR on it. The self-hosted IR allows Data Factory service to dispatch processing requests to a compute service such as HDInsight inside a virtual network. It also allows you to move data to/from data stores inside a virtual network to Azure. You use a self-hosted IR when the data store or compute is in an on-premises environment as well.
data-factory Tutorial Transform Data Spark Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-transform-data-spark-portal.md
Previously updated : 01/10/2018 Last updated : 06/07/2021 # Transform data in the cloud by using a Spark activity in Azure Data Factory
You author two linked services in this section:
### Create an Azure Storage linked service
-1. On the **Let's get started** page, switch to the **Edit** tab in the left panel.
+1. On the home page, switch to the **Manage** tab in the left panel.
- !["Let's get started" page](./media/tutorial-transform-data-spark-portal/get-started-page.png)
+ ![Screenshot that shows the Manage tab.](media/doc-common-process/get-started-page-manage-button.png)
1. Select **Connections** at the bottom of the window, and then select **+ New**.
event-grid Consume Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/consume-private-endpoints.md
Title: Deliver events using private link service description: This article describes how to work around the limitation of not able to deliver events using private link service. Previously updated : 02/12/2021 Last updated : 07/01/2021 # Deliver events using private link service
To deliver events to Storage queues using managed identity, follow these steps:
1. Enable system-assigned identity: [system topics](enable-identity-system-topics.md), [custom topics, and domains](enable-identity-custom-topics-domains.md). 1. [Add the identity to the **Storage Queue Data Message Sender**](../storage/common/storage-auth-aad-rbac-portal.md) role on Azure Storage queue.
-1. [Configure the event subscription](managed-service-identity.md#create-event-subscriptions-that-use-an-identity) that uses a Service Bus queue or topic as an endpoint to use the system-assigned identity.
+1. [Configure the event subscription](managed-service-identity.md#create-event-subscriptions-that-use-an-identity) that uses a Storage queue as an endpoint to use the system-assigned identity.
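If you want to script the role assignment in step 2, a minimal Azure CLI sketch (the principal ID and scope are placeholders; you can copy the identity's principal ID from the topic's **Identity** page in the portal):

```azurecli
# Grant the topic's system-assigned identity permission to send messages to queues in the storage account.
az role assignment create \
  --assignee "<principal-id-of-topic-identity>" \
  --role "Storage Queue Data Message Sender" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```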
## Next steps
event-grid Custom Event Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/custom-event-quickstart-portal.md
Title: 'Quickstart: Send custom events to web endpoint - Event Grid, Azure portal'
+ Title: 'Send custom events to web endpoint - Event Grid, Azure portal'
description: 'Quickstart: Use Azure Event Grid and Azure portal to publish a custom topic, and subscribe to events for that topic. The events are handled by a web application.' Previously updated : 04/22/2021 Last updated : 07/01/2021
-# Quickstart: Route custom events to web endpoint with the Azure portal and Event Grid
+# Route custom events to web endpoint with the Azure portal and Event Grid
+Event Grid is a fully managed service that enables you to easily manage events across many different Azure services and applications. It simplifies building event-driven and serverless applications. For an overview of the service, see [Event Grid overview](overview.md).
+
+In this article, you use the Azure portal to do the following tasks:
+
+1. Create a custom topic.
+1. Subscribe to the custom topic.
+1. Trigger the event.
+1. View the result. Typically, you send events to an endpoint that processes the event data and takes actions. However, to simplify this article, you send the events to a web app that collects and displays the messages.
-Azure Event Grid is an eventing service for the cloud. In this article, you use the Azure portal to create a custom topic, subscribe to the custom topic, and trigger the event to view the result. Typically, you send events to an endpoint that processes the event data and takes actions. However, to simplify this article, you send the events to a web app that collects and displays the messages.
## Prerequisites [!INCLUDE [quickstarts-free-trial-note.md](../../includes/quickstarts-free-trial-note.md)]
Azure Event Grid is an eventing service for the cloud. In this article, you use
[!INCLUDE [event-grid-register-provider-portal.md](../../includes/event-grid-register-provider-portal.md)] ## Create a custom topic- An event grid topic provides a user-defined endpoint that you post your events to. 1. Sign in to the [Azure portal](https://portal.azure.com/). 2. In the search bar at the top, type **Event Grid Topics**, and then select **Event Grid Topics** from the drop-down list. :::image type="content" source="./media/custom-event-quickstart-portal/select-event-grid-topics.png" alt-text="Search for and select Event Grid Topics":::
-3. On the **Event Grid Topics** page, select **+ Add** on the toolbar.
-
- :::image type="content" source="./media/custom-event-quickstart-portal/add-event-grid-topic-button.png" alt-text="Add Event Grid Topic button":::
+3. On the **Event Grid Topics** page, select **+ Create** on the toolbar.
4. On the **Create Topic** page, follow these steps: 1. Select your Azure **subscription**. 2. Select an existing resource group or select **Create new**, and enter a **name** for the **resource group**.
An event grid topic provides a user-defined endpoint that you post your events t
6. On the **Review + create** tab of the **Create topic** page, select **Create**. :::image type="content" source="./media/custom-event-quickstart-portal/review-create-page.png" alt-text="Review settings and create":::
-5. After the deployment succeeds, type **Event Grid Topics** in the search bar again, and select **Event Grid Topics** from the drop-down list as you did before.
-6. Select the topic you created from the list.
-
- :::image type="content" source="./media/custom-event-quickstart-portal/select-event-grid-topic.png" alt-text="Select your topic from the list":::
+5. After the deployment succeeds, select **Go to resource** to navigate to the **Event Grid Topic** page for your topic. Keep this page open. You use it later in the quickstart.
-7. You see the **Event Grid Topic** page for your topic. Keep this page open. You use it later in the quickstart.
-
- :::image type="content" source="./media/custom-event-quickstart-portal/event-grid-topic-home-page.png" alt-text="Event Grid Topic home page":::
+ :::image type="content" source="./media/custom-event-quickstart-portal/event-grid-topic-home-page.png" alt-text="Screenshot showing the Event Grid Topic home page":::
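The preceding steps use the portal. If you'd rather script the topic creation, a minimal Azure CLI sketch (placeholder names are hypothetical):

```azurecli
# Create a custom Event Grid topic in an existing resource group.
az eventgrid topic create --name <topic-name> --resource-group <resource-group> --location <location>
```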
## Create a message endpoint Before you create a subscription for the custom topic, create an endpoint for the event message. Typically, the endpoint takes actions based on the event data. To simplify this quickstart, you deploy a [pre-built web app](https://github.com/Azure-Samples/azure-event-grid-viewer) that displays the event messages. The deployed solution includes an App Service plan, an App Service web app, and source code from GitHub.
Before you create a subscription for the custom topic, create an endpoint for th
1. On the article page, select **Deploy to Azure** to deploy the solution to your subscription. In the Azure portal, provide values for the parameters. <a href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fazure-event-grid-viewer%2Fmaster%2Fazuredeploy.json" target="_blank"><img src="https://azuredeploy.net/deploybutton.png" alt="Button to Deploy to Azure." /></a>
-1. The deployment may take a few minutes to complete. After the deployment has succeeded, view your web app to make sure it's running. In a web browser, navigate to:
-`https://<your-site-name>.azurewebsites.net`
+2. On the **Custom deployment** page, do the following steps:
+ 1. For **Resource group**, select the resource group that you created earlier in this quickstart. Deleting the resource group later makes it easier to clean up after you're done with the tutorial.
+ 2. For **Site Name**, enter a name for the web app.
+ 3. For **Hosting plan name**, enter a name for the App Service plan to use for hosting the web app.
+ 4. Select **Review + create**.
+
+ :::image type="content" source="./media/blob-event-quickstart-portal/template-deploy-parameters.png" alt-text="Screenshot showing the Custom deployment page.":::
+1. On the **Review + create** page, select **Create**.
+1. The deployment may take a few minutes to complete. Select Alerts (bell icon) in the portal, and then select **Go to resource group**.
+
+ ![Alert - navigate to resource group.](./media/blob-event-quickstart-portal/navigate-resource-group.png)
+4. On the **Resource group** page, in the list of resources, select the web app that you created. You also see the App Service plan in this list.
+
+ ![Select web site.](./media/blob-event-quickstart-portal/resource-group-resources.png)
+5. On the **App Service** page for your web app, select the URL to navigate to the web site. The URL should be in this format: `https://<your-site-name>.azurewebsites.net`.
+
+ ![Navigate to web site.](./media/blob-event-quickstart-portal/web-site.png)
- If the deployment fails, check the error message. It may be because the web site name is already taken. Deploy the template again and choose a different name for the site.
-1. You see the site but no events have been posted to it yet.
+6. Confirm that you see the site but no events have been posted to it yet.
- ![View new site](./media/custom-event-quickstart-portal/view-site.png)
+ ![View new site.](./media/blob-event-quickstart-portal/view-site.png)
## Subscribe to custom topic
The first example uses Azure CLI. It gets the URL and key for the custom topic,
```azurecli endpoint=$(az eventgrid topic show --name <topic name> -g <resource group name> --query "endpoint" --output tsv) ```
-2. Run the following command to get the **key** for the custom topic: After you copy and paste the command, update the **topic name** and **resource group** name before you run the command. It's the primary key of the Event Grid topic. To get this key from the Azure portal, switch to the **Access keys** tab of the **Event Grid Topic** page. To be able post an event to a custom topic, you need the access key.
+2. Run the following command to get the **key** for the custom topic. After you copy and paste the command, update the **topic name** and **resource group** name before you run it. This is the primary key of the event grid topic. To get this key from the Azure portal, switch to the **Access keys** tab of the **Event Grid Topic** page. To post an event to a custom topic, you need the access key.
```azurecli key=$(az eventgrid topic key list --name <topic name> -g <resource group name> --query "key1" --output tsv)
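With the endpoint and key stored in the `$endpoint` and `$key` variables from the preceding commands, a minimal Bash sketch of posting a test event (the payload values are arbitrary sample data):

```bash
# Build a sample event in the Event Grid event schema.
event='[{"id": "10001", "eventType": "recordInserted", "subject": "myapp/vehicles/motorcycles", "eventTime": "2021-07-01T21:03:07+00:00", "data": {"make": "Ducati", "model": "Monster"}, "dataVersion": "1.0"}]'

# The aeg-sas-key header authenticates the request with the topic's access key.
curl -X POST -H "aeg-sas-key: $key" -H "Content-Type: application/json" -d "$event" "$endpoint"
```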
event-grid Custom Event Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/custom-event-quickstart-powershell.md
Title: 'Quickstart: Send custom events to web endpoint - Event Grid, PowerShell' description: 'Quickstart: Use Azure Event Grid and PowerShell to publish a custom topic, and subscribe to events for that topic. The events are handled by a web application.' Previously updated : 04/22/2021 Last updated : 07/01/2021
event-grid Custom Event Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/custom-event-quickstart.md
Title: 'Quickstart: Send custom events with Event Grid and Azure CLI' description: 'Quickstart Use Azure Event Grid and Azure CLI to publish a custom topic, and subscribe to events for that topic. The events are handled by a web application.' Previously updated : 04/22/2021 Last updated : 07/01/2021
iot-central How To Connect Devices X509 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/how-to-connect-devices-x509.md
Title: Connect devices with X.509 certificates in an Azure IoT Central applicati
description: How to connect devices with X.509 certificates using Node.js device SDK for IoT Central Application Previously updated : 08/12/2020 Last updated : 06/30/2021
+zone_pivot_groups: programming-languages-set-ten
+
+# - id: programming-languages-set-ten
+# # Owner: aahill
+# Title: Programming languages
+# prompt: Choose a programming language
+# pivots:
+# - id: programming-language-csharp
+# Title: C#
+# - id: programming-language-java
+# Title: Java
+# - id: programming-language-javascript
+# Title: JavaScript
+# - id: programming-language-python
+# Title: Python
-# How to connect devices with X.509 certificates using Node.js device SDK for IoT Central Application
+# How to connect devices with X.509 certificates to IoT Central Application
-IoT Central supports both shared access signatures (SAS) and X.509 certificates to secure the communication between a device and your application. The [Create and connect a client application to your Azure IoT Central application](./tutorial-connect-device.md) tutorial uses SAS. In this article, you learn how to modify the code sample to use X.509. X.509 certificates are recommended in production environments. For more information, see [Get connected to Azure IoT Central](./concepts-get-connected.md).
+IoT Central supports both shared access signatures (SAS) and X.509 certificates to secure the communication between a device and your application. The [Create and connect a client application to your Azure IoT Central application](./tutorial-connect-device.md) tutorial uses SAS. In this article, you learn how to modify the code sample to use X.509 certificates. X.509 certificates are recommended in production environments. For more information, see [Get connected to Azure IoT Central](./concepts-get-connected.md).
-This article shows two ways of using X.509 - [group enrollments](how-to-connect-devices-x509.md#use-a-group-enrollment) typically used in a production environment, and [individual enrollments](how-to-connect-devices-x509.md#use-an-individual-enrollment) useful for testing.
+This guide shows two ways to use X.509 certificates - [group enrollments](how-to-connect-devices-x509.md#use-group-enrollment) typically used in a production environment, and [individual enrollments](how-to-connect-devices-x509.md#use-individual-enrollment) useful for testing. The article also describes how to [roll device certificates](#roll-x509-device-certificates) to maintain connectivity when certificates expire.
-The code snippets in this article use JavaScript. For code samples in other languages, see:
-
-- [C](https://github.com/Azure/azure-iot-sdk-c/tree/master/iothub_client/samples/iothub_ll_client_x509_sample)
-- [C#](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/master/iot-hub/Samples/device/X509DeviceCertWithChainSample)
-- [Java](https://github.com/Azure/azure-iot-sdk-java/tree/master/device/iot-device-samples/send-event-x509)
-- [Python](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device/samples/sync-samples)
+This guide builds on the samples shown in the [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) tutorial that use C#, Java, JavaScript, and Python. For an example that uses the C programming language, see the [Provision multiple X.509 devices using enrollment groups](../../iot-dps/tutorial-custom-hsm-enrollment-group-x509.md).
## Prerequisites
-- Completion of [Create and connect a client application to your Azure IoT Central application (JavaScript)](./tutorial-connect-device.md) tutorial.
-- [Git](https://git-scm.com/download/).
-- Download and install [OpenSSL](https://www.openssl.org/). If you're using Windows, you can use the binaries from the [OpenSSL page on SourceForge](https://sourceforge.net/projects/openssl/).
+Complete the [Create and connect a client application to your Azure IoT Central application](./tutorial-connect-device.md) tutorial. This includes installing the prerequisites for your choice of programming language.
+
+In this how-to guide, you generate some test X.509 certificates. To be able to generate these certificates, you need:
+
+- A development machine with [Node.js](https://nodejs.org/) version 6 or later installed. You can run `node --version` in the command line to check your version. The instructions in this tutorial assume you're running the **node** command at the Windows command prompt. However, you can use Node.js on many other operating systems.
+- A local copy of the [Microsoft Azure IoT SDK for Node.js](https://github.com/Azure/azure-iot-sdk-node) GitHub repository that contains the scripts to generate the test X.509 certificates. Use this link to download a copy of the repository: [Download ZIP](https://github.com/Azure/azure-iot-sdk-node/archive/master.zip). Then unzip the file to a suitable location on your local machine.
-## Use a group enrollment
+## Use group enrollment
Use X.509 certificates with a group enrollment in a production environment. In a group enrollment, you add a root or intermediate X.509 certificate to your IoT Central application. Devices with leaf certificates derived from the root or intermediate certificate can connect to your application.
-## Generate root and device cert
+### Generate root and device certificates
-In this section, you use an X.509 certificate to connect a device with a cert derived from the enrollment group's cert, which can connect to your IoT Central application.
+In this section, you use an X.509 certificate to connect a device with a certificate derived from the IoT Central enrollment group's certificate.
> [!WARNING] > This way of generating X.509 certs is for testing only. For a production environment you should use your official, secure mechanism for certificate generation.
-1. Open a command prompt. Clone the GitHub repository for the certificate generation scripts:
-
- ```cmd/sh
- git clone https://github.com/Azure/azure-iot-sdk-node.git
- ```
-
-1. Navigate to the certificate generator script and install the required packages:
+1. Navigate to the certificate generator script in the Microsoft Azure IoT SDK for Node.js you downloaded. Install the required packages:
```cmd/sh cd azure-iot-sdk-node/provisioning/tools
In this section, you use an X.509 certificate to connect a device with a cert de
> [!TIP] > A device ID can contain letters, numbers, and the `-` character.
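As a sketch of this generation step, the SDK's test-certificate script is typically invoked with the root name and device ID used throughout this guide (the exact subcommand syntax is an assumption; check the script's usage output if it differs):

```cmd/sh
node create_test_cert.js root mytestrootcert
node create_test_cert.js device sampleDevice01 mytestrootcert
```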
-These commands produce three files each for the root and the device certificate
+These commands produce the following root and device certificate files:
+
+| filename | contents |
+| -- | -- |
+| mytestrootcert_cert.pem | The public portion of the root X509 certificate |
+| mytestrootcert_key.pem | The private key for the root X509 certificate |
+| mytestrootcert_fullchain.pem | The entire keychain for the root X509 certificate. |
+| sampleDevice01_cert.pem | The public portion of the device X509 certificate |
+| sampleDevice01_key.pem | The private key for the device X509 certificate |
+| sampleDevice01_fullchain.pem | The entire keychain for the device X509 certificate. |
-filename | contents
| --
-\<name\>_cert.pem | The public portion of the X509 certificate
-\<name\>_key.pem | The private key for the X509 certificate
-\<name\>_fullchain.pem | The entire keychain for the X509 certificate.
+Make a note of the location of these files. You need it later.
-## Create a group enrollment
+### Create a group enrollment
1. Open your IoT Central application and navigate to **Administration** in the left pane and select **Device connection**.
filename | contents
1. Open the enrollment group you created and select **Manage Primary**.
-1. Select file option and upload the root certificate file called _mytestrootcert_cert.pem_ that you generated previously:
+1. Select the file option to upload the root certificate file called _mytestrootcert_cert.pem_ that you generated previously:
![Certificate Upload](./media/how-to-connect-devices-x509/certificate-upload.png)
filename | contents
node create_test_cert.js verification --ca mytestrootcert_cert.pem --key mytestrootcert_key.pem --nonce {verification-code} ```
-1. Upload the signed verification certificate _verification_cert.pem_ to complete the verification:
+1. Select **Verify** to upload the signed verification certificate _verification_cert.pem_ to complete the verification:
![Verified Certificate](./media/how-to-connect-devices-x509/verified.png)
You can now connect devices that have an X.509 certificate derived from this pri
After you save the enrollment group, make a note of the ID Scope.
-## Run sample device code
+### Run sample device code
++
+If you're using Windows, the X.509 certificates must be in the Windows certificate store for the sample to work. To add the certificates to the store:
+
+1. Use `openssl` to create PFX files from the PEM files. When you run these commands, you're prompted to create a password. Make a note of the password; you need it in the next step:
+
+ ```bash
+ openssl pkcs12 -inkey sampleDevice01_key.pem -in sampleDevice01_cert.pem -export -out sampleDevice01.pfx
+ openssl pkcs12 -inkey mytestrootcert_key.pem -in mytestrootcert_cert.pem -export -out mytestrootcert.pfx
+ ```
+
+1. In Windows Explorer, double-click on each PFX file. In the **Certificate Import Wizard**, select **Current User** as the store location, enter the password from the previous step, and let the wizard choose the certificate store automatically. The wizard imports the certificates to the current user's personal store.
+
+To modify the sample code to use the certificates:
+
+1. In the **IoTHubDeviceSamples** Visual Studio solution, open the *Parameter.cs* file in the **TemperatureController** project.
+
+1. Add the following two parameter definitions to the class:
+
+ ```csharp
+ [Option(
+ 'x',
+ "CertificatePath",
+ HelpText = "(Required if DeviceSecurityType is \"dps\"). \nThe device PFX file to use during device provisioning." +
+ "\nDefaults to environment variable \"IOTHUB_DEVICE_X509_CERT\".")]
+ public string CertificatePath { get; set; } = Environment.GetEnvironmentVariable("IOTHUB_DEVICE_X509_CERT");
+
+ [Option(
+ 'p',
+ "CertificatePassword",
+ HelpText = "(Required if DeviceSecurityType is \"dps\"). \nThe password of the PFX certificate file." +
+ "\nDefaults to environment variable \"IOTHUB_DEVICE_X509_PASSWORD\".")]
+ public string CertificatePassword { get; set; } = Environment.GetEnvironmentVariable("IOTHUB_DEVICE_X509_PASSWORD");
+ ```
+
+ Save the changes.
+
+1. In the **IoTHubDeviceSamples** Visual Studio solution, open the *Program.cs* file in the **TemperatureController** project.
+
+1. Add the following `using` statements:
+
+ ```csharp
+ using System.Security.Cryptography.X509Certificates;
+ using System.IO;
+ ```
+
+1. Add the following method to the class:
+
+ ```csharp
+ private static X509Certificate2 LoadProvisioningCertificate(Parameters parameters)
+ {
+ var certificateCollection = new X509Certificate2Collection();
+ certificateCollection.Import(
+ parameters.CertificatePath,
+ parameters.CertificatePassword,
+ X509KeyStorageFlags.UserKeySet);
+
+ X509Certificate2 certificate = null;
+
+ foreach (X509Certificate2 element in certificateCollection)
+ {
+ Console.WriteLine($"Found certificate: {element?.Thumbprint} {element?.Subject}; PrivateKey: {element?.HasPrivateKey}");
+ if (certificate == null && element.HasPrivateKey)
+ {
+ certificate = element;
+ }
+ else
+ {
+ element.Dispose();
+ }
+ }
+
+ if (certificate == null)
+ {
+ throw new FileNotFoundException($"{parameters.CertificatePath} did not contain any certificate with a private key.");
+ }
+
+ Console.WriteLine($"Using certificate {certificate.Thumbprint} {certificate.Subject}");
+
+ return certificate;
+ }
+ ```
+
+1. In the `SetupDeviceClientAsync` method, replace the block of code for `case "dps"` with the following code:
+
+ ```csharp
+ case "dps":
+ s_logger.LogDebug($"Initializing via DPS");
+ Console.WriteLine($"Loading the certificate...");
+ X509Certificate2 certificate = LoadProvisioningCertificate(parameters);
+ DeviceRegistrationResult dpsRegistrationResult = await ProvisionDeviceAsync(parameters, certificate, cancellationToken);
+ var authMethod = new DeviceAuthenticationWithX509Certificate(dpsRegistrationResult.DeviceId, certificate);
+ deviceClient = InitializeDeviceClient(dpsRegistrationResult.AssignedHub, authMethod);
+ break;
+ ```
+
+1. Replace the `ProvisionDeviceAsync` method with the following code:
+
+ ```csharp
+ private static async Task<DeviceRegistrationResult> ProvisionDeviceAsync(Parameters parameters, X509Certificate2 certificate, CancellationToken cancellationToken)
+ {
+ SecurityProvider security = new SecurityProviderX509Certificate(certificate);
+ ProvisioningTransportHandler mqttTransportHandler = new ProvisioningTransportHandlerMqtt();
+ ProvisioningDeviceClient pdc = ProvisioningDeviceClient.Create(parameters.DpsEndpoint, parameters.DpsIdScope, security, mqttTransportHandler);
+
+ var pnpPayload = new ProvisioningRegistrationAdditionalData
+ {
+ JsonData = PnpConvention.CreateDpsPayload(ModelId),
+ };
+ return await pdc.RegisterAsync(pnpPayload, cancellationToken);
+ }
+ ```
+
+ Save the changes.
+
+1. Add the following environment variables to the project:
+
+ - `IOTHUB_DEVICE_X509_CERT`: `<full path to folder that contains PFX files>sampleDevice01.pfx`
+ - `IOTHUB_DEVICE_X509_PASSWORD`: The password you used when you created the *sampleDevice01.pfx* file.
+
+1. Build and run the application. Verify the device provisions successfully.
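If you run the sample from a command line instead of Visual Studio, a minimal sketch of setting the new variables in a bash-style shell before launching the project (the paths are placeholders, and the DPS variables from the prerequisite tutorial must also be set):

```bash
# Point the sample at the device PFX created earlier and supply its password.
export IOTHUB_DEVICE_X509_CERT="/path/to/certs/sampleDevice01.pfx"
export IOTHUB_DEVICE_X509_PASSWORD="<pfx-password>"

# Build and run the TemperatureController project from its folder.
dotnet run
```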
+++
+1. Navigate to the _azure-iot-sdk-java/device/iot-device-samples/pnp-device-sample/temperature-controller-device-sample_ folder that contains the *pom.xml* file and *src* folder for the temperature controller device sample.
-1. Copy the **sampleDevice01_key.pem** and **sampleDevice01_cert.pem** files to the _azure-iot-sdk-node/device/samples/pnp_ folder that contains the **simple_thermostat.js** application. You used this application when you completed the [Connect a device (JavaScript) tutorial](./tutorial-connect-device.md).
+1. Edit the *pom.xml* file to add the following dependency configuration in the `<dependencies>` node:
-1. Navigate to the _azure-iot-sdk-node/device/samples/pnp_ folder that contains the **simple_thermostat.js** application and run the following command to install the X.509 package:
+ ```xml
+ <dependency>
+ <groupId>com.microsoft.azure.sdk.iot.provisioning.security</groupId>
+ <artifactId>${x509-provider-artifact-id}</artifactId>
+ <version>${x509-provider-version}</version>
+ </dependency>
+ ```
+
+ Save the changes.
+
+1. Open the *src/main/java/samples/com/microsoft/azure/sdk/iot/device/TemperatureController.java* file in your text editor.
+
+1. Replace the `SecurityProviderSymmetricKey` import with the following imports:
+
+ ```java
+ import com.microsoft.azure.sdk.iot.provisioning.security.SecurityProvider;
+ import com.microsoft.azure.sdk.iot.provisioning.security.hsm.SecurityProviderX509Cert;
+ import com.microsoft.azure.sdk.iot.provisioning.security.exceptions.SecurityProviderException;
+ ```
+
+1. Add the following import:
+
+ ```java
+ import java.nio.file.*;
+ ```
+
+1. Add `SecurityProviderException` to the list of exceptions that the `main` method throws:
+
+ ```java
+ public static void main(String[] args) throws IOException, URISyntaxException, ProvisioningDeviceClientException, InterruptedException, SecurityProviderException {
+ ```
+
+1. Replace the `initializeAndProvisionDevice` method with the following code:
+
+ ```java
+ private static void initializeAndProvisionDevice() throws ProvisioningDeviceClientException, IOException, URISyntaxException, InterruptedException, SecurityProviderException {
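+        // Read the PEM-encoded device certificate and private key from the files referenced by the environment variables.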
+ String deviceX509Key = new String(Files.readAllBytes(Paths.get(System.getenv("IOTHUB_DEVICE_X509_KEY"))));
+ String deviceX509Cert = new String(Files.readAllBytes(Paths.get(System.getenv("IOTHUB_DEVICE_X509_CERT"))));
+ SecurityProvider securityProviderX509 = new SecurityProviderX509Cert(deviceX509Cert, deviceX509Key, null);
+ ProvisioningDeviceClient provisioningDeviceClient;
+ ProvisioningStatus provisioningStatus = new ProvisioningStatus();
+
+ provisioningDeviceClient = ProvisioningDeviceClient.create(globalEndpoint, scopeId, provisioningProtocol, securityProviderX509);
+
+ AdditionalData additionalData = new AdditionalData();
+ additionalData.setProvisioningPayload(com.microsoft.azure.sdk.iot.provisioning.device.plugandplay.PnpHelper.createDpsPayload(MODEL_ID));
+
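+        // Start registration with the Device Provisioning Service and poll until the device is assigned to an IoT hub or registration fails.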
+ provisioningDeviceClient.registerDevice(new ProvisioningDeviceClientRegistrationCallbackImpl(), provisioningStatus, additionalData);
+
+ while (provisioningStatus.provisioningDeviceClientRegistrationInfoClient.getProvisioningDeviceClientStatus() != ProvisioningDeviceClientStatus.PROVISIONING_DEVICE_STATUS_ASSIGNED)
+ {
+ if (provisioningStatus.provisioningDeviceClientRegistrationInfoClient.getProvisioningDeviceClientStatus() == ProvisioningDeviceClientStatus.PROVISIONING_DEVICE_STATUS_ERROR ||
+ provisioningStatus.provisioningDeviceClientRegistrationInfoClient.getProvisioningDeviceClientStatus() == ProvisioningDeviceClientStatus.PROVISIONING_DEVICE_STATUS_DISABLED ||
+ provisioningStatus.provisioningDeviceClientRegistrationInfoClient.getProvisioningDeviceClientStatus() == ProvisioningDeviceClientStatus.PROVISIONING_DEVICE_STATUS_FAILED)
+ {
+ provisioningStatus.exception.printStackTrace();
+ System.out.println("Registration error, bailing out");
+ break;
+ }
+ System.out.println("Waiting for Provisioning Service to register");
+ Thread.sleep(MAX_TIME_TO_WAIT_FOR_REGISTRATION);
+ }
+
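+        // Set the model ID so IoT Central can associate the device with the matching device template when the client connects.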
+ ClientOptions options = new ClientOptions();
+ options.setModelId(MODEL_ID);
+
+ if (provisioningStatus.provisioningDeviceClientRegistrationInfoClient.getProvisioningDeviceClientStatus() == ProvisioningDeviceClientStatus.PROVISIONING_DEVICE_STATUS_ASSIGNED) {
+ System.out.println("IotHUb Uri : " + provisioningStatus.provisioningDeviceClientRegistrationInfoClient.getIothubUri());
+ System.out.println("Device ID : " + provisioningStatus.provisioningDeviceClientRegistrationInfoClient.getDeviceId());
+
+ String iotHubUri = provisioningStatus.provisioningDeviceClientRegistrationInfoClient.getIothubUri();
+ String deviceId = provisioningStatus.provisioningDeviceClientRegistrationInfoClient.getDeviceId();
+
+ log.debug("Opening the device client.");
+ deviceClient = DeviceClient.createFromSecurityProvider(iotHubUri, deviceId, securityProviderX509, IotHubClientProtocol.MQTT, options);
+ deviceClient.open();
+ }
+ }
+ ```
+
+ Save the changes.
+
+1. In your shell environment, add the following two environment variables. Make sure that you provide the full path to the PEM files and use the correct path delimiter for your operating system:
+
+ ```cmd/sh
+ set IOTHUB_DEVICE_X509_CERT=<full path to folder that contains PEM files>sampleDevice01_cert.pem
+ set IOTHUB_DEVICE_X509_KEY=<full path to folder that contains PEM files>sampleDevice01_key.pem
+ ```
+
+ > [!TIP]
+ > You set the other required environment variables when you completed the [Create and connect a client application to your Azure IoT Central application](./tutorial-connect-device.md) tutorial.
+
+1. Build and run the application. Verify the device provisions successfully.
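+
+    If you build and run the sample on Linux or macOS rather than Windows, export the two environment variables from the previous step instead of using `set`; for example, in a bash-style shell:
+
+    ```bash
+    export IOTHUB_DEVICE_X509_CERT=<full path to folder that contains PEM files>sampleDevice01_cert.pem
+    export IOTHUB_DEVICE_X509_KEY=<full path to folder that contains PEM files>sampleDevice01_key.pem
+    ```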
+++
+1. Navigate to the _azure-iot-sdk-node/device/samples/pnp_ folder that contains the **pnpTemperatureController.js** application and run the following command to install the X.509 package:
```cmd/sh npm install azure-iot-security-x509 --save ```
-1. Open the **simple_thermostat.js** file in a text editor.
+1. Open the **pnpTemperatureController.js** file in a text editor.
-1. Edit the `require` statements to include the following:
+1. Edit the `require` statements to include the following code:
```javascript const fs = require('fs');
After you save the enrollment group, make a note of the ID Scope.
}; ```
-1. Edit the `provisionDevice` function that creates the client by replacing the first line with the following:
+1. Edit the `provisionDevice` function that creates the client by replacing the first line with the following code:
```javascript var provSecurityClient = new X509Security(registrationId, deviceCert);
After you save the enrollment group, make a note of the ID Scope.
client.setOptions(deviceCert); ```
-1. In your shell environment, set the following two environment variables:
+ Save the changes.
+
+1. In your shell environment, add the following two environment variables. Make sure that you provide the full path to the PEM files and use the correct path delimiter for your operating system:
```cmd/sh
- set IOTHUB_DEVICE_X509_CERT=sampleDevice01_cert.pem
- set IOTHUB_DEVICE_X509_KEY=sampleDevice01_key.pem
+ set IOTHUB_DEVICE_X509_CERT=<full path to folder that contains PEM files>sampleDevice01_cert.pem
+ set IOTHUB_DEVICE_X509_KEY=<full path to folder that contains PEM files>sampleDevice01_key.pem
``` > [!TIP] > You set the other required environment variables when you completed the [Create and connect a client application to your Azure IoT Central application](./tutorial-connect-device.md) tutorial.
-1. Execute the script and verify the device was provisioned successfully:
+1. Execute the script and verify the device provisions successfully:
```cmd/sh node pnpTemperatureController.js ```
- You can also verify that telemetry appears on the dashboard.
++
+1. Navigate to the _azure-iot-device/samples/pnp_ folder and open the **temp_controller_with_thermostats.py** file in a text editor.
+
+1. Add the following `from` statement to import the X.509 functionality:
+
+ ```python
+ from azure.iot.device import X509
+ ```
+
+1. Modify the first part of the `provision_device` function as follows:
+
+ ```python
+ async def provision_device(provisioning_host, id_scope, registration_id, x509, model_id):
+ provisioning_device_client = ProvisioningDeviceClient.create_from_x509_certificate(
+ provisioning_host=provisioning_host,
+ registration_id=registration_id,
+ id_scope=id_scope,
+ x509=x509,
+ )
+ ```
+
+1. In the `main` function, replace the line that sets the `symmetric_key` variable with the following code:
+
+ ```python
+ x509 = X509(
+ cert_file=os.getenv("IOTHUB_DEVICE_X509_CERT"),
+ key_file=os.getenv("IOTHUB_DEVICE_X509_KEY"),
+ )
+ ```
+
+1. In the `main` function, replace the call to the `provision_device` function with the following code:
+
+ ```python
+ registration_result = await provision_device(
+ provisioning_host, id_scope, registration_id, x509, model_id
+ )
+ ```
+
+1. In the `main` function, replace the call to the `IoTHubDeviceClient.create_from_symmetric_key` function with the following code:
+
+ ```python
+ device_client = IoTHubDeviceClient.create_from_x509_certificate(
+ x509=x509,
+ hostname=registration_result.registration_state.assigned_hub,
+ device_id=registration_result.registration_state.device_id,
+ product_info=model_id,
+ )
+ ```
+
+ Save the changes.
+
+1. In your shell environment, add the following two environment variables. Make sure that you provide the full path to the PEM files and use the correct path delimiter for your operating system:
+
+ ```cmd/sh
+ set IOTHUB_DEVICE_X509_CERT=<full path to folder that contains PEM files>sampleDevice01_cert.pem
+ set IOTHUB_DEVICE_X509_KEY=<full path to folder that contains PEM files>sampleDevice01_key.pem
+ ```
+
+ > [!TIP]
+ > You set the other required environment variables when you completed the [Create and connect a client application to your Azure IoT Central application](./tutorial-connect-device.md) tutorial.
+
+1. Execute the script and verify the device provisions successfully:
+
+ ```cmd/sh
+ python temp_controller_with_thermostats.py
+ ```
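+
+    If provisioning fails with an authentication error, one quick check is to confirm that the certificate and key files are a matching pair. For example, assuming OpenSSL is installed and the test certificates use RSA keys (as the Node.js test-certificate tool typically generates), the two hashes printed by the following commands should be identical:
+
+    ```cmd/sh
+    openssl x509 -noout -modulus -in sampleDevice01_cert.pem | openssl md5
+    openssl rsa -noout -modulus -in sampleDevice01_key.pem | openssl md5
+    ```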
- ![Verify Device Telemetry](./media/how-to-connect-devices-x509/telemetry.png)
-## Use an individual enrollment
+Verify that telemetry appears on the dashboard in your IoT Central application:
+
+![Screenshot that shows telemetry arriving in your IoT Central application.](./media/how-to-connect-devices-x509/telemetry.png)
+
+## Use individual enrollment
Use X.509 certificates with an individual enrollment to test your device and solution. In an individual enrollment, there's no root or intermediate X.509 certificate in your IoT Central application. Devices use a self-signed X.509 certificate to connect to your application.
-## Generate self-signed device cert
+### Generate self-signed device certificate
In this section, you use a self-signed X.509 certificate to connect a device with an individual enrollment, which enrolls a single device. Self-signed certificates are for testing only.
-Create a self-signed X.509 device certificate by running the script. Be sure to only use lower-case alphanumerics and hyphens for certificate name:
+> [!WARNING]
+> This way of generating X.509 certificates is for testing only. For a production environment, use your official, secure mechanism for certificate generation.
- ```cmd/sh
- cd azure-iot-sdk-node/provisioning/tools
- node create_test_cert.js device mytestselfcertprimary
- node create_test_cert.js device mytestselfcertsecondary
- ```
+Create a self-signed X.509 device certificate by running the following commands:
-## Create individual enrollment
+```cmd/sh
+ cd azure-iot-sdk-node/provisioning/tools
+ node create_test_cert.js device mytestselfcertprimary
+ node create_test_cert.js device mytestselfcertsecondary
+```
+
+> [!TIP]
+> A device ID can contain letters, numbers, and the `-` character.
+
+### Create individual enrollment
1. In the Azure IoT Central application, select **Devices**, and create a new device with **Device ID** as _mytestselfcertprimary_ from the thermostat device template. Make a note of the **ID Scope**; you use it later.
Create a self-signed X.509 device certificate by running the script. Be sure to
The device is now provisioned with an X.509 certificate.
-## Run a sample individual enrollment device
+### Run a sample individual enrollment device
1. Copy the _mytestselfcertprimary_key.pem_ and _mytestselfcertprimary_cert.pem_ files to the _azure-iot-sdk-node/device/samples/pnp_ folder that contains the **simple_thermostat.js** application. You used this application when you completed the [Connect a device (JavaScript) tutorial](./tutorial-connect-device.md).
The device is now provisioned with X.509 certificate.
set IOTHUB_DEVICE_X509_KEY=mytestselfcertprimary_key.pem ```
-1. Execute the script and verify the device was provisioned successfully:
+1. Execute the script and verify the device provisions successfully:
```cmd/sh node simple_thermostat.js
The device is now provisioned with X.509 certificate.
You can repeat the above steps for the _mytestselfcertsecondary_ certificate as well.
+## Connect an IoT Edge device
+
+This section assumes you're using a group enrollment to connect your IoT Edge device. Follow the steps in the previous sections to:
+
+- [Generate root and device certificates](#generate-root-and-device-certificates)
+- [Create a group enrollment](#create-a-group-enrollment)
+
+To connect the IoT Edge device to IoT Central using the X.509 device certificate:
+
+- Copy the device certificate and key files onto your IoT Edge device. In the previous group enrollment example, these files were called **sampleDevice01_key.pem** and **sampleDevice01_cert.pem**.
+- On the IoT Edge device, edit the `provisioning` section in the **/etc/iotedge/config.yaml** configuration file as follows:
+
+ ```yaml
+ # DPS X.509 provisioning configuration
+ provisioning:
+ source: "dps"
+ global_endpoint: "https://global.azure-devices-provisioning.net"
+ scope_id: "<SCOPE_ID>"
+ attestation:
+ method: "x509"
+ # registration_id: "<OPTIONAL REGISTRATION ID. LEAVE COMMENTED OUT TO REGISTER WITH CN OF identity_cert>"
+ identity_cert: "file:///<path>/sampleDevice01_cert.pem"
+ identity_pk: "file:///<path>/sampleDevice01_key.pem"
+ # always_reprovision_on_startup: true
+ # dynamic_reprovisioning: false
+ ```
+
+ > [!TIP]
+ > You don't need to add a value for the `registration_id`. IoT Edge can use the **CN** value from the X.509 certificate.
+
+- Run the following command to restart the IoT Edge runtime:
+
+ ```bash
+ sudo systemctl restart iotedge
+ ```
+
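+- Optionally, verify that the device provisioned and that the runtime started. For example, on a Linux device running IoT Edge version 1.1 (the version that uses the **/etc/iotedge/config.yaml** file shown above), you can run:
+
+  ```bash
+  # Check configuration, connectivity, and provisioning status
+  sudo iotedge check
+
+  # List the modules the runtime has started
+  sudo iotedge list
+  ```
+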
+To learn more, see [Create and provision an IoT Edge device using X.509 certificates](../../iot-edge/how-to-auto-provision-x509-certs.md).
+
+## Connect an IoT Edge leaf device
+
+IoT Edge uses X.509 certificates to secure the connection between leaf devices and an IoT Edge device acting as a gateway. To learn more about configuring this scenario, see [Connect a downstream device to an Azure IoT Edge gateway](../../iot-edge/how-to-connect-downstream-device.md).
+
+## Roll X.509 device certificates
+
+During the lifecycle of your IoT Central application, you'll need to roll your X.509 certificates. For example:
+
+- If you have a security breach, rolling certificates is a security best practice to help secure your system.
+- X.509 certificates have expiry dates. How frequently you roll your certificates depends on the security needs of your solution. Customers with solutions involving highly sensitive data may roll certificates daily, while others roll their certificates every couple of years.
+
+For uninterrupted connectivity, IoT Central lets you configure primary and secondary X.509 certificates. If the primary and secondary certificates have different expiry dates, you can roll the expired certificate while devices continue to connect with the other certificate.
+
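+To plan a roll before a certificate expires, you can check its validity dates. For example, assuming OpenSSL is installed and using the earlier sample device certificate name as a placeholder:
+
+```cmd/sh
+openssl x509 -noout -dates -in sampleDevice01_cert.pem
+```
+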
+To learn more, see [Assume Breach Methodology](https://download.microsoft.com/download/C/1/9/C1990DBA-502F-4C2A-848D-392B93D9B9C3/Microsoft_Enterprise_Cloud_Red_Teaming.pdf).
+
+This section describes how to roll the certificates in IoT Central. When you roll a certificate in IoT Central, you also need to copy the new device certificate to your devices.
+
+### Obtain new X.509 certificates
+
+Obtain new X.509 certificates from your certificate provider. You can create your own X.509 certificates using a tool like OpenSSL. This approach is useful for testing X.509 certificates but provides few security guarantees. Only use this approach for testing unless you're prepared to act as your own CA provider.
+
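+For example, the following OpenSSL command creates a self-signed test certificate and private key. The file names and common name here are placeholders, and this approach is suitable for testing only:
+
+```cmd/sh
+openssl req -x509 -nodes -newkey rsa:4096 -days 365 -subj "/CN=mytestdevice" -keyout mytestdevice_key.pem -out mytestdevice_cert.pem
+```
+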
+### Enrollment groups and security breaches
+
+To update a group enrollment in response to a security breach, you should use the following approach to update the current certificate immediately. Complete these steps for the primary and secondary certificates if both are compromised:
+
+1. Navigate to **Administration** in the left pane and select **Device connection**.
+
+2. Select **Enrollment Groups**, and select the group name in the list.
+
+3. To update the certificate, select **Manage primary** or **Manage secondary**.
+
+4. Add and verify the root X.509 certificate in the enrollment group.
+
+### Individual enrollments and security breaches
+
+If you're rolling certificates in response to a security breach, use the following approach to update the current certificate immediately. Complete these steps for the primary and secondary certificates, if both are compromised:
+
+1. Select **Devices**, and select the device.
+
+1. Select **Connect**, and select **Individual Enrollment** as the connect method.
+
+1. Select **Certificates (X.509)** as the mechanism.
+
+1. To update the certificate, select the folder icon, choose the new certificate to upload for the enrollment entry, and then select **Save**.
+
+### Enrollment groups and certificate expiration
+
+To handle certificate expirations, use the following approach to update the current certificate immediately:
+
+1. Navigate to **Administration** in the left pane and select **Device connection**.
+
+2. Select **Enrollment Groups**, and select the group name in the list.
+
+3. To update the certificate, select **Manage primary**.
+
+4. Add and verify the root X.509 certificate in the enrollment group.
+
+5. Later, when the secondary certificate has expired, come back and update it.
+
+### Individual enrollments and certificate expiration
+
+If you're rolling certificates to handle certificate expirations, you should use the secondary certificate configuration as follows to reduce downtime for devices attempting to provision.
+
+When the secondary certificate nears expiration and needs to be rolled, you can rotate to the primary configuration. Rotating between the primary and secondary certificates in this way reduces downtime for devices attempting to provision.
+
+1. Select **Devices**, and select the device.
+
+2. Select **Connect**, and select **Individual Enrollment** as the connect method.
+
+3. Select **Certificates (X.509)** as the mechanism.
+
+4. To update the secondary certificate, select the folder icon, choose the new certificate to upload for the enrollment entry, and then select **Save**.
+
+5. Later, when the primary certificate has expired, come back and update it.
+ ## Next steps
-Now that you've learned how to connect devices using X.509 certificates, the suggested next step is to learn how to [Monitor device connectivity using Azure CLI](howto-monitor-devices-azure-cli.md)
+Now that you've learned how to connect devices using X.509 certificates, the suggested next step is to learn how to [Monitor device connectivity using Azure CLI](howto-monitor-devices-azure-cli.md).
iot-central How To Roll X509 Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/how-to-roll-x509-certificates.md
- Title: Roll X.509 certificates in Azure IoT Central
-description: How to roll X.509 certificates with your IoT Central Application
-- Previously updated : 07/31/2020------
-# How to roll X.509 device certificates in IoT Central Application
-
-During the lifecycle of your IoT solution, you'll need to roll certificates. Two of the main reasons for rolling certificates would be a security breach, and certificate expirations.
-
-If you have a security breach, rolling certificates is a security best practice to help secure your system. As part of [Assume Breach Methodology](https://download.microsoft.com/download/C/1/9/C1990DBA-502F-4C2A-848D-392B93D9B9C3/Microsoft_Enterprise_Cloud_Red_Teaming.pdf), Microsoft advocates the need for having reactive security processes in place along with preventative measures. Rolling your device certificates should be included as part of these security processes. The frequency in which you roll your certificates will depend on the security needs of your solution. Customers with solutions involving highly sensitive data may roll certificate daily, while others roll their certificates every couple years.
--
-## Obtain new X.509 certificates
-
-You can create your own X.509 certificates using a tool like OpenSSL. This approach is great for testing X.509 certificates but provides few guarantees around security. Only use this approach for testing unless you are prepared to act as your own CA provider.
-
-## Enrollment groups and security breaches
-
-To update a group enrollment in response to a security breach, you should use the following approach that updates the current certificate immediately:
-
-1. Navigate to **Administration** in the left pane and select **Device connection**.
-
-2. Select **Enrollment Groups**, and select the group name in the list.
-
-3. For certificate update, select **Manage primary** or **Manage Secondary**.
-
-4. Add and verify root X.509 certificate in the enrollment group.
-
- Complete these steps for the primary and secondary certificates, if both are compromised.
-
-## Enrollment groups and certificate expiration
-
-If you're rolling certificates to handle certificate expirations, use the following approach to update the current certificate immediately:
-
-1. Navigate to **Administration** in the left pane and select **Device connection**.
-
-2. Select **Enrollment Groups**, and select the group name in the list.
-
-3. For certificate update, select **Manage Primary**.
-
-4. Add and verify root X.509 certificate in the enrollment group.
-
-5. Later when the secondary certificate has expired, come back and update that secondary certificate.
-
-## Individual enrollments and security breaches
-
-If you're rolling certificates in response to a security breach, use the following approach to update the current certificate immediately:
-
-1. Select **Devices**, and select the device.
-
-2. Select **Connect**, and select connect method as **Individual Enrollment**
-
-3. Select **Certificates (X.509)** as mechanism.
-
- ![Manage individual enrollments](./media/how-to-roll-x509-certificates/certificate-update.png)
-
-4. For certificate update, select the folder icon to select the new certificate to be uploaded for the enrollment entry. Select **Save**.
-
- Complete these steps for the primary and secondary certificates, if both are compromised
-
-## Individual enrollments and certificate expiration
-
-If you're rolling certificates to handle certificate expirations, you should use the secondary certificate configuration as follows to reduce downtime for devices attempting to provision.
-
-When the secondary certificate nears expiration, and needs to be rolled, you can rotate to using the primary configuration. Rotating between the primary and secondary certificates in this way reduces downtime for devices attempting to provision.
-
-1. Select **Devices**, and select the device.
-
-2. Select **Connect**, and select connect method as **Individual Enrollment**
-
-3. Select **Certificates (X.509)** as mechanism.
-
- ![Manage individual enrollments](./media/how-to-roll-x509-certificates/certificate-update.png)
-
-4. For secondary certificate update, select the folder icon to select the new certificate to be uploaded for the enrollment entry. Select **Save**.
-
-5. Later when the primary certificate has expired, come back and update that primary certificate.
-
-## Next steps
-
-Now that you've learned how to roll X.509 certificates in your Azure IoT Central application, you can [Get connected to Azure IoT Central](concepts-get-connected.md).
--
iot-central Tutorial Define Gateway Device Type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/tutorial-define-gateway-device-type.md
When you connect a downstream device, you can modify the provisioning payload to
```json {
- "iotcModelId": "dtmi:rigado:S1Sensor;2",
+ "modelId": "dtmi:rigado:S1Sensor;2",
"iotcGateway":{ "iotcGatewayId": "gateway-device-001" }
iot-central Tutorial Use Device Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/tutorial-use-device-groups.md
To analyze the telemetry for a device group:
:::image type="content" source="media/tutorial-use-device-groups/create-analysis.png" alt-text="Screenshot that shows the telemetry types selected for analysis":::
- Use the gear-wheel icons next to the telemetry types to select an aggregation type. The default is **Average**. Use **Group by** to change how the aggregate data is shown. For example, if you split by device ID you see a plot for each device when you select **Analyze**.
+ Use the ellipsis icons next to the telemetry types to select an aggregation type. The default is **Average**. Use **Group by** to change how the aggregate data is shown. For example, if you split by device ID you see a plot for each device when you select **Analyze**.
1. Select **Analyze** to view the average telemetry values: :::image type="content" source="media/tutorial-use-device-groups/view-analysis.png" alt-text="Screenshot that shows average values for all the Contoso devices":::
- You can customize the view, change the time period shown, and export the data.
+    You can customize the view, change the time period shown, and export the data as CSV or view the data as a table.
+
+ :::image type="content" source="media/tutorial-use-device-groups/export-data.png" alt-text="Screenshot that shows how to export data for the Contoso devices":::
+
+To learn more about analytics, see [How to use analytics to analyze device data](howto-create-analytics.md).
## Clean up resources
iot-edge How To Configure Proxy Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-configure-proxy-support.md
With the environment variables included, your module definition should look like
"type": "docker", "settings": { "image": "mcr.microsoft.com/azureiotedge-hub:1.1",
- "createOptions": ""
+ "createOptions": "{}"
}, "env": { "https_proxy": {
iot-edge How To Connect Downstream Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-connect-downstream-iot-edge-device.md
The API proxy module was designed to be customized to handle most common gateway
"edgeAgent": { "settings": { "image": "mcr.microsoft.com/azureiotedge-agent:1.2",
- "createOptions": ""
+ "createOptions": "{}"
}, "type": "docker" },
iot-edge How To Deploy Cli At Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-deploy-cli-at-scale.md
Here's a basic layered deployment manifest with one module as an example:
"properties.desired.modules.SimulatedTemperatureSensor": { "settings": { "image": "mcr.microsoft.com/azureiotedge-simulated-temperature-sensor:1.0",
- "createOptions": ""
+ "createOptions": "{}"
}, "type": "docker", "status": "running",
iot-edge Module Composition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/module-composition.md
The following example shows what a valid deployment manifest document may look l
"type": "docker", "settings": { "image": "mcr.microsoft.com/azureiotedge-agent:1.1",
- "createOptions": ""
+ "createOptions": "{}"
} }, "edgeHub": {
iot-hub Iot Hub Devguide Messages Construct https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-messages-construct.md
Title: Understand Azure IoT Hub message format | Microsoft Docs description: Developer guide - describes the format and expected content of IoT Hub messages. - Previously updated : 05/07/2021 Last updated : 07/01/2021
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-limits-and-config.md
ms.suite: integration Previously updated : 06/24/2021 Last updated : 07/01/2021 # Limits and configuration reference for Azure Logic Apps
This section lists the outbound IP addresses for the Azure Logic Apps service an
| Multi-tenant region | Logic Apps IP | Managed connectors IP | |||--|
-| Australia East | 13.75.149.4, 104.210.91.55, 104.210.90.241, 52.187.227.245, 52.187.226.96, 52.187.231.184, 52.187.229.130, 52.187.226.139 | 52.237.214.72, 13.72.243.10, 13.70.72.192 - 13.70.72.207, 13.70.78.224 - 13.70.78.255 |
-| Australia Southeast | 13.73.114.207, 13.77.3.139, 13.70.159.205, 52.189.222.77, 13.77.56.167, 13.77.58.136, 52.189.214.42, 52.189.220.75 | 52.255.48.202, 13.70.136.174, 13.77.50.240 - 13.77.50.255, 13.77.55.160 - 13.77.55.191 |
-| Brazil South | 191.235.82.221, 191.235.91.7, 191.234.182.26, 191.237.255.116, 191.234.161.168, 191.234.162.178, 191.234.161.28, 191.234.162.131 | 191.232.191.157, 104.41.59.51, 191.233.203.192 - 191.233.203.207, 191.233.207.160 - 191.233.207.191 |
-| Brazil Southeast | 20.40.32.81, 20.40.32.19, 20.40.32.85, 20.40.32.60, 20.40.32.116, 20.40.32.87, 20.40.32.61, 20.40.32.113 | 23.97.120.109, 23.97.121.26 |
-| Canada Central | 52.233.29.92, 52.228.39.244, 40.85.250.135, 40.85.250.212, 13.71.186.1, 40.85.252.47, 13.71.184.150 | 52.237.32.212, 52.237.24.126, 13.71.170.208 - 13.71.170.223, 13.71.175.160 - 13.71.175.191 |
-| Canada East | 52.232.128.155, 52.229.120.45, 52.229.126.25, 40.86.203.228, 40.86.228.93, 40.86.216.241, 40.86.226.149, 40.86.217.241 | 52.242.30.112, 52.242.35.152, 40.69.106.240 - 40.69.106.255, 40.69.111.0 - 40.69.111.31 |
-| Central India | 52.172.154.168, 52.172.186.159, 52.172.185.79, 104.211.101.108, 104.211.102.62, 104.211.90.169, 104.211.90.162, 104.211.74.145 | 52.172.212.129, 52.172.211.12, 20.43.123.0 - 20.43.123.31, 104.211.81.192 - 104.211.81.207 |
-| Central US | 13.67.236.125, 104.208.25.27, 40.122.170.198, 40.113.218.230, 23.100.86.139, 23.100.87.24, 23.100.87.56, 23.100.82.16 | 52.173.241.27, 52.173.245.164, 13.89.171.80 - 13.89.171.95, 13.89.178.64 - 13.89.178.95, 40.77.68.110 |
-| East Asia | 13.75.94.173, 40.83.127.19, 52.175.33.254, 40.83.73.39, 65.52.175.34, 40.83.77.208, 40.83.100.69, 40.83.75.165 | 13.75.110.131, 52.175.23.169, 13.75.36.64 - 13.75.36.79, 104.214.164.0 - 104.214.164.31 |
-| East US | 13.92.98.111, 40.121.91.41, 40.114.82.191, 23.101.139.153, 23.100.29.190, 23.101.136.201, 104.45.153.81, 23.101.132.208 | 40.71.249.139, 40.71.249.205, 40.114.40.132, 40.71.11.80 - 40.71.11.95, 40.71.15.160 - 40.71.15.191, 52.188.157.160 |
-| East US 2 | 40.84.30.147, 104.208.155.200, 104.208.158.174, 104.208.140.40, 40.70.131.151, 40.70.29.214, 40.70.26.154, 40.70.27.236 | 52.225.129.144, 52.232.188.154, 104.209.247.23, 40.70.146.208 - 40.70.146.223, 40.70.151.96 - 40.70.151.127, 40.65.220.25 |
-| France Central | 52.143.164.80, 52.143.164.15, 40.89.186.30, 20.188.39.105, 40.89.191.161, 40.89.188.169, 40.89.186.28, 40.89.190.104 | 40.89.186.239, 40.89.135.2, 40.79.130.208 - 40.79.130.223, 40.79.148.96 - 40.79.148.127 |
-| France South | 52.136.132.40, 52.136.129.89, 52.136.131.155, 52.136.133.62, 52.136.139.225, 52.136.130.144, 52.136.140.226, 52.136.129.51 | 52.136.142.154, 52.136.133.184, 40.79.178.240 - 40.79.178.255, 40.79.180.224 - 40.79.180.255 |
-| Germany North | 51.116.211.168, 51.116.208.165, 51.116.208.175, 51.116.208.192, 51.116.208.200, 51.116.208.222, 51.116.208.217, 51.116.208.51 | 51.116.60.192, 51.116.211.212, 51.116.59.16 - 51.116.59.31, 51.116.60.192 - 51.116.60.223 |
-| Germany West Central | 51.116.233.35, 51.116.171.49, 51.116.233.33, 51.116.233.22, 51.116.168.104, 51.116.175.17, 51.116.233.87, 51.116.175.51 | 51.116.158.97, 51.116.236.78, 51.116.155.80 - 51.116.155.95, 51.116.158.96 - 51.116.158.127 |
-| Japan East | 13.71.158.3, 13.73.4.207, 13.71.158.120, 13.78.18.168, 13.78.35.229, 13.78.42.223, 13.78.21.155, 13.78.20.232 | 13.73.21.230, 13.71.153.19, 13.78.108.0 - 13.78.108.15, 40.79.189.64 - 40.79.189.95 |
-| Japan West | 40.74.140.4, 104.214.137.243, 138.91.26.45, 40.74.64.207, 40.74.76.213, 40.74.77.205, 40.74.74.21, 40.74.68.85 | 104.215.27.24, 104.215.61.248, 40.74.100.224 - 40.74.100.239, 40.80.180.64 - 40.80.180.95 |
-| Korea Central | 52.231.14.11, 52.231.14.219, 52.231.15.6, 52.231.10.111, 52.231.14.223, 52.231.77.107, 52.231.8.175, 52.231.9.39 | 52.141.1.104, 52.141.36.214, 20.44.29.64 - 20.44.29.95, 52.231.18.208 - 52.231.18.223 |
-| Korea South | 52.231.204.74, 52.231.188.115, 52.231.189.221, 52.231.203.118, 52.231.166.28, 52.231.153.89, 52.231.155.206, 52.231.164.23 | 52.231.201.173, 52.231.163.10, 52.231.147.0 - 52.231.147.15, 52.231.148.224 - 52.231.148.255 |
-| North Central US | 168.62.248.37, 157.55.210.61, 157.55.212.238, 52.162.208.216, 52.162.213.231, 65.52.10.183, 65.52.9.96, 65.52.8.225 | 52.162.126.4, 52.162.242.161, 52.162.107.160 - 52.162.107.175, 52.162.111.192 - 52.162.111.223 |
-| North Europe | 40.113.12.95, 52.178.165.215, 52.178.166.21, 40.112.92.104, 40.112.95.216, 40.113.4.18, 40.113.3.202, 40.113.1.181 | 52.169.28.181, 52.178.150.68, 94.245.91.93, 13.69.227.208 - 13.69.227.223, 13.69.231.192 - 13.69.231.223, 40.115.108.29 |
-| Norway East | 51.120.88.52, 51.120.88.51, 51.13.65.206, 51.13.66.248, 51.13.65.90, 51.13.65.63, 51.13.68.140, 51.120.91.248 | 51.120.100.192, 51.120.92.27, 51.120.98.224 - 51.120.98.239, 51.120.100.192 - 51.120.100.223 |
-| South Africa North | 102.133.231.188, 102.133.231.117, 102.133.230.4, 102.133.227.103, 102.133.228.6, 102.133.230.82, 102.133.231.9, 102.133.231.51 | 102.133.168.167, 40.127.2.94, 102.133.155.0 - 102.133.155.15, 102.133.253.0 - 102.133.253.31 |
-| South Africa West | 102.133.72.98, 102.133.72.113, 102.133.75.169, 102.133.72.179, 102.133.72.37, 102.133.72.183, 102.133.72.132, 102.133.75.191 | 102.133.72.85, 102.133.75.194, 102.37.64.0 - 102.37.64.31, 102.133.27.0 - 102.133.27.15 |
-| South Central US | 104.210.144.48, 13.65.82.17, 13.66.52.232, 23.100.124.84, 70.37.54.122, 70.37.50.6, 23.100.127.172, 23.101.183.225 | 52.171.130.92, 13.65.86.57, 13.73.244.224 - 13.73.244.255, 104.214.19.48 - 104.214.19.63 |
-| South India | 52.172.50.24, 52.172.55.231, 52.172.52.0, 104.211.229.115, 104.211.230.129, 104.211.230.126, 104.211.231.39, 104.211.227.229 | 13.71.127.26, 13.71.125.22, 20.192.184.32 - 20.192.184.63, 40.78.194.240 - 40.78.194.255 |
-| Southeast Asia | 13.76.133.155, 52.163.228.93, 52.163.230.166, 13.76.4.194, 13.67.110.109, 13.67.91.135, 13.76.5.96, 13.67.107.128 | 52.187.115.69, 52.187.68.19, 13.67.8.240 - 13.67.8.255, 13.67.15.32 - 13.67.15.63 |
-| Switzerland North | 51.103.137.79, 51.103.135.51, 51.103.139.122, 51.103.134.69, 51.103.138.96, 51.103.138.28, 51.103.136.37, 51.103.136.210 | 51.103.142.22, 51.107.86.217, 51.107.59.16 - 51.107.59.31, 51.107.60.224 - 51.107.60.255 |
-| Switzerland West | 51.107.239.66, 51.107.231.86, 51.107.239.112, 51.107.239.123, 51.107.225.190, 51.107.225.179, 51.107.225.186, 51.107.225.151, 51.107.239.83 | 51.107.156.224, 51.107.231.190, 51.107.155.16 - 51.107.155.31, 51.107.156.224 - 51.107.156.255 |
-| UAE Central | 20.45.75.200, 20.45.72.72, 20.45.75.236, 20.45.79.239, 20.45.67.170, 20.45.72.54, 20.45.67.134, 20.45.67.135 | 20.45.67.45, 20.45.67.28, 20.37.74.192 - 20.37.74.207, 40.120.8.0 - 40.120.8.31 |
-| UAE North | 40.123.230.45, 40.123.231.179, 40.123.231.186, 40.119.166.152, 40.123.228.182, 40.123.217.165, 40.123.216.73, 40.123.212.104 | 65.52.250.208, 40.123.224.120, 40.120.64.64 - 40.120.64.95, 65.52.250.208 - 65.52.250.223 |
-| UK South | 51.140.74.14, 51.140.73.85, 51.140.78.44, 51.140.137.190, 51.140.153.135, 51.140.28.225, 51.140.142.28, 51.140.158.24 | 51.140.74.150, 51.140.80.51, 51.140.61.124, 51.105.77.96 - 51.105.77.127, 51.140.148.0 - 51.140.148.15 |
-| UK West | 51.141.54.185, 51.141.45.238, 51.141.47.136, 51.141.114.77, 51.141.112.112, 51.141.113.36, 51.141.118.119, 51.141.119.63 | 51.141.52.185, 51.141.47.105, 51.141.124.13, 51.140.211.0 - 51.140.211.15, 51.140.212.224 - 51.140.212.255 |
-| West Central US | 52.161.27.190, 52.161.18.218, 52.161.9.108, 13.78.151.161, 13.78.137.179, 13.78.148.140, 13.78.129.20, 13.78.141.75 | 52.161.101.204, 52.161.102.22, 13.78.132.82, 13.71.195.32 - 13.71.195.47, 13.71.199.192 - 13.71.199.223 |
-| West Europe | 40.68.222.65, 40.68.209.23, 13.95.147.65, 23.97.218.130, 51.144.182.201, 23.97.211.179, 104.45.9.52, 23.97.210.126, 13.69.71.160, 13.69.71.161, 13.69.71.162, 13.69.71.163, 13.69.71.164, 13.69.71.165, 13.69.71.166, 13.69.71.167 | 52.166.78.89, 52.174.88.118, 40.91.208.65, 13.69.64.208 - 13.69.64.223, 13.69.71.192 - 13.69.71.223, 13.93.36.78 |
-| West India | 104.211.164.80, 104.211.162.205, 104.211.164.136, 104.211.158.127, 104.211.156.153, 104.211.158.123, 104.211.154.59, 104.211.154.7 | 104.211.189.124, 104.211.189.218, 20.38.128.224 - 20.38.128.255, 104.211.146.224 - 104.211.146.239 |
-| West US | 52.160.92.112, 40.118.244.241, 40.118.241.243, 157.56.162.53, 157.56.167.147, 104.42.49.145, 40.83.164.80, 104.42.38.32, 13.86.223.0, 13.86.223.1, 13.86.223.2, 13.86.223.3, 13.86.223.4, 13.86.223.5 | 13.93.148.62, 104.42.122.49, 40.112.195.87, 13.86.223.32 - 13.86.223.63, 40.112.243.160 - 40.112.243.175 |
-| West US 2 | 13.66.210.167, 52.183.30.169, 52.183.29.132, 13.66.210.167, 13.66.201.169, 13.77.149.159, 52.175.198.132, 13.66.246.219 | 52.191.164.250, 52.183.78.157, 13.66.140.128 - 13.66.140.143, 13.66.145.96 - 13.66.145.127, 13.66.164.219 |
+| Australia East | 13.75.149.4, 104.210.91.55, 104.210.90.241, 52.187.227.245, 52.187.226.96, 52.187.231.184, 52.187.229.130, 52.187.226.139 | 52.237.214.72, 13.72.243.10, 13.70.72.192 - 13.70.72.207, 13.70.78.224 - 13.70.78.255, 20.70.220.192 - 20.70.220.223, 20.70.220.224 - 20.70.220.239 |
+| Australia Southeast | 13.73.114.207, 13.77.3.139, 13.70.159.205, 52.189.222.77, 13.77.56.167, 13.77.58.136, 52.189.214.42, 52.189.220.75 | 52.255.48.202, 13.70.136.174, 13.77.50.240 - 13.77.50.255, 13.77.55.160 - 13.77.55.191, 20.92.3.64 - 20.92.3.95, 20.92.3.96 - 20.92.3.111 |
+| Brazil South | 191.235.82.221, 191.235.91.7, 191.234.182.26, 191.237.255.116, 191.234.161.168, 191.234.162.178, 191.234.161.28, 191.234.162.131 | 191.232.191.157, 104.41.59.51, 191.233.203.192 - 191.233.203.207, 191.233.207.160 - 191.233.207.191, 191.238.76.112 - 191.238.76.127, 191.238.76.128 - 191.238.76.159 |
+| Brazil Southeast | 20.40.32.81, 20.40.32.19, 20.40.32.85, 20.40.32.60, 20.40.32.116, 20.40.32.87, 20.40.32.61, 20.40.32.113 | 23.97.120.109, 23.97.121.26, 20.206.0.0 - 20.206.0.63, 191.233.51.0 - 191.233.51.63 |
+| Canada Central | 52.233.29.92, 52.228.39.244, 40.85.250.135, 40.85.250.212, 13.71.186.1, 40.85.252.47, 13.71.184.150 | 52.237.32.212, 52.237.24.126, 13.71.170.208 - 13.71.170.223, 13.71.175.160 - 13.71.175.191, 20.48.200.192 - 20.48.200.223, 20.48.200.224 - 20.48.200.239 |
+| Canada East | 52.232.128.155, 52.229.120.45, 52.229.126.25, 40.86.203.228, 40.86.228.93, 40.86.216.241, 40.86.226.149, 40.86.217.241 | 52.242.30.112, 52.242.35.152, 40.69.106.240 - 40.69.106.255, 40.69.111.0 - 40.69.111.31, 52.139.111.0 - 52.139.111.31, 52.139.111.32 - 52.139.111.47 |
+| Central India | 52.172.154.168, 52.172.186.159, 52.172.185.79, 104.211.101.108, 104.211.102.62, 104.211.90.169, 104.211.90.162, 104.211.74.145 | 52.172.212.129, 52.172.211.12, 20.43.123.0 - 20.43.123.31, 104.211.81.192 - 104.211.81.207, 20.192.168.64 - 20.192.168.95, 20.192.168.96 - 20.192.168.111 |
+| Central US | 13.67.236.125, 104.208.25.27, 40.122.170.198, 40.113.218.230, 23.100.86.139, 23.100.87.24, 23.100.87.56, 23.100.82.16 | 52.173.241.27, 52.173.245.164, 13.89.171.80 - 13.89.171.95, 13.89.178.64 - 13.89.178.95, 40.77.68.110, 20.98.144.224 - 20.98.144.255, 20.98.145.0 - 20.98.145.15 |
+| East Asia | 13.75.94.173, 40.83.127.19, 52.175.33.254, 40.83.73.39, 65.52.175.34, 40.83.77.208, 40.83.100.69, 40.83.75.165 | 13.75.110.131, 52.175.23.169, 13.75.36.64 - 13.75.36.79, 104.214.164.0 - 104.214.164.31, 20.205.67.48 - 20.205.67.63, 20.205.67.64 - 20.205.67.95, 104.214.165.128 - 104.214.165.191 |
+| East US | 13.92.98.111, 40.121.91.41, 40.114.82.191, 23.101.139.153, 23.100.29.190, 23.101.136.201, 104.45.153.81, 23.101.132.208 | 40.71.249.139, 40.71.249.205, 40.114.40.132, 40.71.11.80 - 40.71.11.95, 40.71.15.160 - 40.71.15.191, 52.188.157.160, 20.88.153.176 - 20.88.153.191, 20.88.153.192 - 20.88.153.223 |
+| East US 2 | 40.84.30.147, 104.208.155.200, 104.208.158.174, 104.208.140.40, 40.70.131.151, 40.70.29.214, 40.70.26.154, 40.70.27.236 | 52.225.129.144, 52.232.188.154, 104.209.247.23, 40.70.146.208 - 40.70.146.223, 40.70.151.96 - 40.70.151.127, 40.65.220.25, 20.98.192.80 - 20.98.192.95, 20.98.192.96 - 20.98.192.127 |
+| France Central | 52.143.164.80, 52.143.164.15, 40.89.186.30, 20.188.39.105, 40.89.191.161, 40.89.188.169, 40.89.186.28, 40.89.190.104 | 40.89.186.239, 40.89.135.2, 40.79.130.208 - 40.79.130.223, 40.79.148.96 - 40.79.148.127, 51.138.215.48 - 51.138.215.63, 51.138.215.64 - 51.138.215.95 |
+| France South | 52.136.132.40, 52.136.129.89, 52.136.131.155, 52.136.133.62, 52.136.139.225, 52.136.130.144, 52.136.140.226, 52.136.129.51 | 52.136.142.154, 52.136.133.184, 40.79.178.240 - 40.79.178.255, 40.79.180.224 - 40.79.180.255, 52.136.189.16 - 52.136.189.31, 52.136.189.32 - 52.136.189.63 |
+| Germany North | 51.116.211.168, 51.116.208.165, 51.116.208.175, 51.116.208.192, 51.116.208.200, 51.116.208.222, 51.116.208.217, 51.116.208.51 | 51.116.60.192, 51.116.211.212, 51.116.59.16 - 51.116.59.31, 51.116.60.192 - 51.116.60.223, 51.116.55.240 - 51.116.55.255, 51.116.74.32 - 51.116.74.63 |
+| Germany West Central | 51.116.233.35, 51.116.171.49, 51.116.233.33, 51.116.233.22, 51.116.168.104, 51.116.175.17, 51.116.233.87, 51.116.175.51 | 51.116.158.97, 51.116.236.78, 51.116.155.80 - 51.116.155.95, 51.116.158.96 - 51.116.158.127, 20.52.93.80 - 20.52.93.95, 20.52.93.96 - 20.52.93.127 |
+| Japan East | 13.71.158.3, 13.73.4.207, 13.71.158.120, 13.78.18.168, 13.78.35.229, 13.78.42.223, 13.78.21.155, 13.78.20.232 | 13.73.21.230, 13.71.153.19, 13.78.108.0 - 13.78.108.15, 40.79.189.64 - 40.79.189.95, 20.89.11.48 - 20.89.11.63, 20.89.11.64 - 20.89.11.95 |
+| Japan West | 40.74.140.4, 104.214.137.243, 138.91.26.45, 40.74.64.207, 40.74.76.213, 40.74.77.205, 40.74.74.21, 40.74.68.85 | 104.215.27.24, 104.215.61.248, 40.74.100.224 - 40.74.100.239, 40.80.180.64 - 40.80.180.95, 20.189.192.144 - 20.189.192.159, 20.189.192.160 - 20.189.192.191 |
+| Korea Central | 52.231.14.11, 52.231.14.219, 52.231.15.6, 52.231.10.111, 52.231.14.223, 52.231.77.107, 52.231.8.175, 52.231.9.39 | 52.141.1.104, 52.141.36.214, 20.44.29.64 - 20.44.29.95, 52.231.18.208 - 52.231.18.223, 20.200.194.160 - 20.200.194.191, 20.200.194.192 - 20.200.194.207 |
+| Korea South | 52.231.204.74, 52.231.188.115, 52.231.189.221, 52.231.203.118, 52.231.166.28, 52.231.153.89, 52.231.155.206, 52.231.164.23 | 52.231.201.173, 52.231.163.10, 52.231.147.0 - 52.231.147.15, 52.231.148.224 - 52.231.148.255, 52.147.117.32 - 52.147.117.63, 52.147.117.64 - 52.147.117.79 |
+| North Central US | 168.62.248.37, 157.55.210.61, 157.55.212.238, 52.162.208.216, 52.162.213.231, 65.52.10.183, 65.52.9.96, 65.52.8.225 | 52.162.126.4, 52.162.242.161, 52.162.107.160 - 52.162.107.175, 52.162.111.192 - 52.162.111.223, 20.51.4.192 - 20.51.4.223, 20.51.4.224 - 20.51.4.239 |
+| North Europe | 40.113.12.95, 52.178.165.215, 52.178.166.21, 40.112.92.104, 40.112.95.216, 40.113.4.18, 40.113.3.202, 40.113.1.181 | 52.169.28.181, 52.178.150.68, 94.245.91.93, 13.69.227.208 - 13.69.227.223, 13.69.231.192 - 13.69.231.223, 40.115.108.29, 20.82.246.112 - 20.82.246.127, 52.146.138.32 - 52.146.138.63 |
+| Norway East | 51.120.88.52, 51.120.88.51, 51.13.65.206, 51.13.66.248, 51.13.65.90, 51.13.65.63, 51.13.68.140, 51.120.91.248 | 51.120.100.192, 51.120.92.27, 51.120.98.224 - 51.120.98.239, 51.120.100.192 - 51.120.100.223, 20.100.0.96 - 20.100.0.127, 20.100.0.128 - 20.100.0.143 |
+| South Africa North | 102.133.231.188, 102.133.231.117, 102.133.230.4, 102.133.227.103, 102.133.228.6, 102.133.230.82, 102.133.231.9, 102.133.231.51 | 102.133.168.167, 40.127.2.94, 102.133.155.0 - 102.133.155.15, 102.133.253.0 - 102.133.253.31, 102.37.166.80 - 102.37.166.95, 102.37.166.96 - 102.37.166.127 |
+| South Africa West | 102.133.72.98, 102.133.72.113, 102.133.75.169, 102.133.72.179, 102.133.72.37, 102.133.72.183, 102.133.72.132, 102.133.75.191 | 102.133.72.85, 102.133.75.194, 102.37.64.0 - 102.37.64.31, 102.133.27.0 - 102.133.27.15, 102.37.84.128 - 102.37.84.159, 102.37.84.160 - 102.37.84.175 |
+| South Central US | 104.210.144.48, 13.65.82.17, 13.66.52.232, 23.100.124.84, 70.37.54.122, 70.37.50.6, 23.100.127.172, 23.101.183.225 | 52.171.130.92, 13.65.86.57, 13.73.244.224 - 13.73.244.255, 104.214.19.48 - 104.214.19.63, 20.97.33.48 - 20.97.33.63, 20.97.33.64 - 20.97.33.95, 104.214.70.191 |
+| South India | 52.172.50.24, 52.172.55.231, 52.172.52.0, 104.211.229.115, 104.211.230.129, 104.211.230.126, 104.211.231.39, 104.211.227.229 | 13.71.127.26, 13.71.125.22, 20.192.184.32 - 20.192.184.63, 40.78.194.240 - 40.78.194.255, 20.192.152.64 - 20.192.152.95, 20.192.152.96 - 20.192.152.111, 52.172.80.0 - 52.172.80.63 |
+| Southeast Asia | 13.76.133.155, 52.163.228.93, 52.163.230.166, 13.76.4.194, 13.67.110.109, 13.67.91.135, 13.76.5.96, 13.67.107.128 | 52.187.115.69, 52.187.68.19, 13.67.8.240 - 13.67.8.255, 13.67.15.32 - 13.67.15.63, 20.195.82.240 - 20.195.82.255, 20.195.83.0 - 20.195.83.31 |
+| Switzerland North | 51.103.137.79, 51.103.135.51, 51.103.139.122, 51.103.134.69, 51.103.138.96, 51.103.138.28, 51.103.136.37, 51.103.136.210 | 51.103.142.22, 51.107.86.217, 51.107.59.16 - 51.107.59.31, 51.107.60.224 - 51.107.60.255, 51.107.246.112 - 51.107.246.127, 51.107.246.128 - 51.107.246.159 |
+| Switzerland West | 51.107.239.66, 51.107.231.86, 51.107.239.112, 51.107.239.123, 51.107.225.190, 51.107.225.179, 51.107.225.186, 51.107.225.151, 51.107.239.83 | 51.107.156.224, 51.107.231.190, 51.107.155.16 - 51.107.155.31, 51.107.156.224 - 51.107.156.255, 51.107.254.32 - 51.107.254.63, 51.107.254.64 - 51.107.254.79 |
+| UAE Central | 20.45.75.200, 20.45.72.72, 20.45.75.236, 20.45.79.239, 20.45.67.170, 20.45.72.54, 20.45.67.134, 20.45.67.135 | 20.45.67.45, 20.45.67.28, 20.37.74.192 - 20.37.74.207, 40.120.8.0 - 40.120.8.31, 20.45.90.208 - 20.45.90.223, 20.45.90.224 - 20.45.90.255 |
+| UAE North | 40.123.230.45, 40.123.231.179, 40.123.231.186, 40.119.166.152, 40.123.228.182, 40.123.217.165, 40.123.216.73, 40.123.212.104 | 65.52.250.208, 40.123.224.120, 40.120.64.64 - 40.120.64.95, 65.52.250.208 - 65.52.250.223, 40.120.86.16 - 40.120.86.31, 40.120.86.32 - 40.120.86.63 |
+| UK South | 51.140.74.14, 51.140.73.85, 51.140.78.44, 51.140.137.190, 51.140.153.135, 51.140.28.225, 51.140.142.28, 51.140.158.24 | 51.140.74.150, 51.140.80.51, 51.140.61.124, 51.105.77.96 - 51.105.77.127, 51.140.148.0 - 51.140.148.15, 20.90.129.0 - 20.90.129.31, 20.90.129.32 - 20.90.129.47 |
+| UK West | 51.141.54.185, 51.141.45.238, 51.141.47.136, 51.141.114.77, 51.141.112.112, 51.141.113.36, 51.141.118.119, 51.141.119.63 | 51.141.52.185, 51.141.47.105, 51.141.124.13, 51.140.211.0 - 51.140.211.15, 51.140.212.224 - 51.140.212.255, 20.58.70.192 - 20.58.70.223, 20.58.70.224 - 20.58.70.239 |
+| West Central US | 52.161.27.190, 52.161.18.218, 52.161.9.108, 13.78.151.161, 13.78.137.179, 13.78.148.140, 13.78.129.20, 13.78.141.75 | 52.161.101.204, 52.161.102.22, 13.78.132.82, 13.71.195.32 - 13.71.195.47, 13.71.199.192 - 13.71.199.223, 20.69.4.0 - 20.69.4.31, 20.69.4.32 - 20.69.4.47 |
+| West Europe | 40.68.222.65, 40.68.209.23, 13.95.147.65, 23.97.218.130, 51.144.182.201, 23.97.211.179, 104.45.9.52, 23.97.210.126, 13.69.71.160, 13.69.71.161, 13.69.71.162, 13.69.71.163, 13.69.71.164, 13.69.71.165, 13.69.71.166, 13.69.71.167 | 52.166.78.89, 52.174.88.118, 40.91.208.65, 13.69.64.208 - 13.69.64.223, 13.69.71.192 - 13.69.71.223, 13.93.36.78, 20.86.93.32 - 20.86.93.63, 20.86.93.64 - 20.86.93.79 |
+| West India | 104.211.164.80, 104.211.162.205, 104.211.164.136, 104.211.158.127, 104.211.156.153, 104.211.158.123, 104.211.154.59, 104.211.154.7 | 104.211.189.124, 104.211.189.218, 20.38.128.224 - 20.38.128.255, 104.211.146.224 - 104.211.146.239, 20.192.82.48 - 20.192.82.63, 20.192.82.64 - 20.192.82.95 |
+| West US | 52.160.92.112, 40.118.244.241, 40.118.241.243, 157.56.162.53, 157.56.167.147, 104.42.49.145, 40.83.164.80, 104.42.38.32, 13.86.223.0, 13.86.223.1, 13.86.223.2, 13.86.223.3, 13.86.223.4, 13.86.223.5 | 13.93.148.62, 104.42.122.49, 40.112.195.87, 13.86.223.32 - 13.86.223.63, 40.112.243.160 - 40.112.243.175, 20.59.77.0 - 20.59.77.31, 20.66.6.112 - 20.66.6.127 |
+| West US 2 | 13.66.210.167, 52.183.30.169, 52.183.29.132, 13.66.210.167, 13.66.201.169, 13.77.149.159, 52.175.198.132, 13.66.246.219 | 52.191.164.250, 52.183.78.157, 13.66.140.128 - 13.66.140.143, 13.66.145.96 - 13.66.145.127, 13.66.164.219, 20.83.220.208 - 20.83.220.223, 20.83.220.224 - 20.83.220.255 |
|||| <a name="azure-government-outbound"></a>
This section lists the outbound IP addresses for the Azure Logic Apps service an
| Region | Logic Apps IP | Managed connectors IP | |--||--|
-| US DoD Central | 52.182.48.215, 52.182.92.143 | 52.127.58.160 - 52.127.58.175, 52.182.54.8, 52.182.48.136, 52.127.61.192 - 52.127.61.223 |
-| US Gov Arizona | 52.244.67.143, 52.244.65.66, 52.244.65.190 | 52.127.2.160 - 52.127.2.175, 52.244.69.0, 52.244.64.91, 52.127.5.224 - 52.127.5.255 |
-| US Gov Texas | 52.238.114.217, 52.238.115.245, 52.238.117.119 | 52.127.34.160 - 52.127.34.175, 40.112.40.25, 52.238.161.225, 20.140.137.128 - 20.140.137.159 |
-| US Gov Virginia | 13.72.54.205, 52.227.138.30, 52.227.152.44 | 52.127.42.128 - 52.127.42.143, 52.227.143.61, 52.227.162.91 |
+| US DoD Central | 52.182.48.215, 52.182.92.143 | 52.127.58.160 - 52.127.58.175, 52.182.54.8, 52.182.48.136, 52.127.61.192 - 52.127.61.223, 52.245.153.80 - 52.245.153.95, 52.245.153.96 - 52.245.153.127 |
+| US Gov Arizona | 52.244.67.143, 52.244.65.66, 52.244.65.190 | 52.127.2.160 - 52.127.2.175, 52.244.69.0, 52.244.64.91, 52.127.5.224 - 52.127.5.255, 20.141.9.240 - 20.141.9.255, 20.141.10.0 - 20.141.10.31 |
+| US Gov Texas | 52.238.114.217, 52.238.115.245, 52.238.117.119 | 52.127.34.160 - 52.127.34.175, 40.112.40.25, 52.238.161.225, 20.140.137.128 - 20.140.137.159, 20.140.146.192 - 20.140.146.223, 20.140.146.224 - 20.140.146.239 |
+| US Gov Virginia | 13.72.54.205, 52.227.138.30, 52.227.152.44 | 52.127.42.128 - 52.127.42.143, 52.227.143.61, 52.227.162.91, 20.140.94.192 - 20.140.94.223, 52.235.252.144 - 52.235.252.159, 52.235.252.160 - 52.235.252.191 |
|||| ## Next steps
machine-learning Concept Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-automated-ml.md
Title: What is automated ML? AutoML
-description: Learn how Azure Machine Learning can automatically generate a model by using the parameters and criteria you provide.
+description: Learn how Azure Machine Learning can automatically generate a model by using the parameters and criteria you provide with automated machine learning.
Previously updated : 10/27/2020 Last updated : 07/01/2021
Traditional machine learning model development is resource-intensive, requiring
## AutoML in Azure Machine Learning
-Azure Machine Learning offers two experiences for working with automated ML:
+Azure Machine Learning offers the following two experiences for working with automated ML. See the following sections to understand [feature availability in each experience](#parity).
* For code experienced customers, [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro). Get started with [Tutorial: Use automated machine learning to predict taxi fares](tutorial-auto-train-models.md).
Azure Machine Learning offers two experiences for working with automated ML:
* [Tutorial: Create a classification model with automated ML in Azure Machine Learning](tutorial-first-experiment-automated-ml.md). * [Tutorial: Forecast demand with automated machine learning](tutorial-automated-ml-forecast.md)
+<a name="parity"></a>
+
+### Experiment settings
+
+The following settings allow you to configure your automated ML experiment.
+
+| |The Python SDK|The studio web experience|
+|-|:-:|:-:|
+|**Split data into train/validation sets**|✓|✓|
+|**Supports ML tasks: classification, regression, and forecasting**|✓|✓|
+|**Optimizes based on primary metric**|✓|✓|
+|**Supports Azure ML compute as compute target**|✓|✓|
+|**Configure forecast horizon, target lags & rolling window**|✓|✓|
+|**Set exit criteria**|✓|✓|
+|**Set concurrent iterations**|✓|✓|
+|**Drop columns**|✓|✓|
+|**Block algorithms**|✓|✓|
+|**Cross validation**|✓|✓|
+|**Supports training on Azure Databricks clusters**|✓||
+|**View engineered feature names**|✓||
+|**Featurization summary**|✓||
+|**Featurization for holidays**|✓||
+|**Log file verbosity levels**|✓||
+
+### Model settings
+
+These settings can be applied to the best model as a result of your automated ML experiment.
+
+| |The Python SDK|The studio web experience|
+|-|:-:|:-:|
+|**Best model registration, deployment, explainability**|✓|✓|
+|**Enable voting ensemble & stack ensemble models**|✓|✓|
+|**Show best model based on non-primary metric**|✓||
+|**Enable/disable ONNX model compatibility**|✓||
+|**Test the model**|✓||
+
+### Run control settings
+
+These settings allow you to review and control your experiment runs and its child runs.
+
+| |The Python SDK|The studio web experience|
+|-|:-:|:-:|
+|**Run summary table**|✓|✓|
+|**Cancel runs & child runs**|✓|✓|
+|**Get guardrails**|✓|✓|
+|**Pause & resume runs**|✓||
## When to use AutoML: classify, regression, & forecast
For example, building a model __for each instance or individual__ in the followi
* Predictive maintenance for hundreds of oil wells * Tailoring an experience for individual users.
-<a name="parity"></a>
-
-### Experiment settings
-
-The following settings allow you to configure your automated ML experiment.
-
-| |The Python SDK|The studio web experience|
--|:-:|:-:
-|**Split data into train/validation sets**| Γ£ô|Γ£ô
-|**Supports ML tasks: classification, regression, and forecasting**| Γ£ô| Γ£ô
-|**Optimizes based on primary metric**| Γ£ô| Γ£ô
-|**Supports Azure ML compute as compute target** | Γ£ô|Γ£ô
-|**Configure forecast horizon, target lags & rolling window**|Γ£ô|Γ£ô
-|**Set exit criteria** |Γ£ô|Γ£ô
-|**Set concurrent iterations**| Γ£ô|Γ£ô
-|**Drop columns**| Γ£ô|Γ£ô
-|**Block algorithms**|Γ£ô|Γ£ô
-|**Cross validation** |Γ£ô|Γ£ô
-|**Supports training on Azure Databricks clusters**| Γ£ô|
-|**View engineered feature names**|Γ£ô|
-|**Featurization summary**| Γ£ô|
-|**Featurization for holidays**|Γ£ô|
-|**Log file verbosity levels**| Γ£ô|
-
-### Model settings
-
-These settings can be applied to the best model as a result of your automated ML experiment.
-
-| |The Python SDK|The studio web experience|
-|-|:-:|:-:|
-|**Best model registration, deployment, explainability**| Γ£ô|Γ£ô|
-|**Enable voting ensemble & stack ensemble models**| Γ£ô|Γ£ô|
-|**Show best model based on non-primary metric**|Γ£ô||
-|**Enable/disable ONNX model compatibility**|Γ£ô||
-|**Test the model** | Γ£ô| |
-
-### Run control settings
-
-These settings allow you to review and control your experiment runs and its child runs.
-
-| |The Python SDK|The studio web experience|
-|-|:-:|:-:|
-|**Run summary table**| Γ£ô|Γ£ô|
-|**Cancel runs & child runs**| Γ£ô|Γ£ô|
-|**Get guardrails**| Γ£ô|Γ£ô|
-|**Pause & resume runs**| Γ£ô| |
- <a name="use-with-onnx"></a> ## AutoML & ONNX
There are multiple resources to get you up and running with AutoML.
### Tutorials/ how-tos Tutorials are end-to-end introductory examples of AutoML scenarios.
-+ **For a code first experience**, follow the [Tutorial: Automatically train a regression model with Azure Machine Learning Python SDK](tutorial-auto-train-models.md).
++ **For a code first experience**, follow the [Tutorial: Train a regression model with AutoML and Python](tutorial-auto-train-models.md).
- + **For a low or no-code experience**, see the [Tutorial: Create automated ML classification models with Azure Machine Learning studio](tutorial-first-experiment-automated-ml.md).
+ + **For a low or no-code experience**, see the [Tutorial: Train a classification model with no-code AutoML in Azure Machine Learning studio](tutorial-first-experiment-automated-ml.md).
-How to articles provide additional detail into what functionality AutoML offers. For example,
+How-to articles provide more detail about the functionality that automated ML offers. For example,
+ Configure the settings for automatic training experiments
- + In Azure Machine Learning studio, [use these steps](how-to-use-automated-ml-for-ml-models.md).
- + With the Python SDK, [use these steps](how-to-configure-auto-train.md).
+ + [Without code in the Azure Machine Learning studio](how-to-use-automated-ml-for-ml-models.md).
+ + [With the Python SDK](how-to-configure-auto-train.md).
-+ Learn how to auto train using time series data, [with these steps](how-to-auto-train-forecast.md).
++ Learn how to [train forecasting models with time series data](how-to-auto-train-forecast.md).

### Jupyter notebook samples
machine-learning How To Attach Compute Targets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-attach-compute-targets.md
compute_target.detach()
```

> [!WARNING]
-> Detaching a cluster **does not delete the cluster**. To delete an Azure Kubernetes Service cluster, see [Use the Azure CLI with AKS](/aks/kubernetes-walkthrough.md#delete-the-cluster) or to delete an Azure Arc enabled Kubernetes cluster, see [Azure Arc quickstart](/azure-arc/kubernetes/quickstart-connect-cluster#clean-up-resources).
+> Detaching a cluster **does not delete the cluster**. To delete an Azure Kubernetes Service cluster, see [Use the Azure CLI with AKS](/azure/aks/kubernetes-walkthrough#delete-the-cluster). To delete an Azure Arc enabled Kubernetes cluster, see [Azure Arc quickstart](/azure/azure-arc/kubernetes/quickstart-connect-cluster#7-clean-up-resources).
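For context around the `compute_target.detach()` call shown above, an attach/detach round trip might look like the following sketch; it assumes the v1 Azure ML Python SDK, a workspace config file, and placeholder resource-group and cluster names.

```python
# Sketch only: assumes the Azure ML Python SDK v1 and an existing AKS cluster;
# resource group, cluster, and compute target names are placeholders.
from azureml.core import Workspace
from azureml.core.compute import AksCompute, ComputeTarget

ws = Workspace.from_config()

attach_config = AksCompute.attach_configuration(
    resource_group="my-resource-group",   # placeholder
    cluster_name="my-aks-cluster",        # placeholder
)
aks_target = ComputeTarget.attach(ws, "inference-aks", attach_config)
aks_target.wait_for_completion(show_output=True)

# Detaching removes the compute target from the workspace but, as the warning
# above notes, it does not delete the underlying cluster.
aks_target.detach()
```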
## Notebook examples
See these notebooks for examples of training with various compute targets:
* [Tutorial: Train a model](tutorial-train-models-with-aml.md) uses a managed compute target to train a model.
* Learn how to [efficiently tune hyperparameters](how-to-tune-hyperparameters.md) to build better models.
* Once you have a trained model, learn [how and where to deploy models](how-to-deploy-and-where.md).
-* [Use Azure Machine Learning with Azure Virtual Networks](./how-to-network-security-overview.md)
+* [Use Azure Machine Learning with Azure Virtual Networks](./how-to-network-security-overview.md)
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-auto-train.md
Previously updated : 06/11/2021 Last updated : 07/01/2021
Requirements for training data in machine learning:
- Data must be in tabular form.
- The value to predict, target column, must be in the data.
-**For remote experiments**, training data must be accessible from the remote compute. AutoML only accepts [Azure Machine Learning TabularDatasets](/python/api/azureml-core/azureml.data.tabulardataset) when working on a remote compute.
+**For remote experiments**, training data must be accessible from the remote compute. Automated ML only accepts [Azure Machine Learning TabularDatasets](/python/api/azureml-core/azureml.data.tabulardataset) when working on a remote compute.
Azure Machine Learning datasets expose functionality to:

* Easily transfer data from static files or URL sources into your workspace.
* Make your data available to training scripts when running on cloud compute resources.

See [How to train with datasets](how-to-train-with-datasets.md#mount-files-to-remote-compute-targets) for an example of using the `Dataset` class to mount data to your remote compute target.
-The following code creates a TabularDataset from a web url. See [Create a TabularDatasets](how-to-create-register-datasets.md#create-a-tabulardataset) for code examples on how to create datasets from other sources like local files and datastores.
+The following code creates a TabularDataset from a web url. See [Create a TabularDataset](how-to-create-register-datasets.md#create-a-tabulardataset) for code examples on how to create datasets from other sources like local files and datastores.
```python
from azureml.core.dataset import Dataset
```
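A minimal sketch of the approach described above (creating a `TabularDataset` from a web URL with the v1 SDK); the URL is a placeholder, not the one used in the article.

```python
# Sketch only: the URL is a placeholder; any publicly reachable delimited file works.
from azureml.core.dataset import Dataset

web_path = "https://example.com/data/train.csv"            # placeholder URL
training_data = Dataset.Tabular.from_delimited_files(path=web_path)

# Peek at the first few rows to confirm the schema was inferred correctly.
print(training_data.take(5).to_pandas_dataframe())
```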
Classification | Regression | Time Series Forecasting
||| Average
||| SeasonalAverage
||| [ExponentialSmoothing](https://www.statsmodels.org/v0.10.2/generated/statsmodels.tsa.holtwinters.ExponentialSmoothing.html)
+
### Primary Metric

The `primary metric` parameter determines the metric to be used during model training for optimization. The available metrics you can select are determined by the task type you choose, and the following table shows valid primary metrics for each task type.
Learn about the specific definitions of these metrics in [Understand automated m
|`norm_macro_recall` | `normalized_mean_absolute_error` | |`precision_score_weighted` |
-### Primary metrics for classification scenarios
-
-Post thresholded metrics, like `accuracy`, `average_precision_score_weighted`, `norm_macro_recall`, and `precision_score_weighted` may not optimize as well for datasets which are small, have very large class skew (class imbalance), or when the expected metric value is very close to 0.0 or 1.0. In those cases, `AUC_weighted` can be a better choice for the primary metric. After automated ML completes, you can choose the winning model based on the metric best suited to your business needs.
+#### Metrics for classification scenarios
+Post-thresholded metrics, like `accuracy`, `average_precision_score_weighted`, `norm_macro_recall`, and `precision_score_weighted` may not optimize as well for datasets which are small, have very large class skew (class imbalance), or when the expected metric value is very close to 0.0 or 1.0. In those cases, `AUC_weighted` can be a better choice for the primary metric. After automated ML completes, you can choose the winning model based on the metric best suited to your business needs.
| Metric | Example use case(s) | | | - |
Post thresholded metrics, like `accuracy`, `average_precision_score_weighted`, `
| `norm_macro_recall` | Churn prediction |
| `precision_score_weighted` | |
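To illustrate the point above about choosing the winning model by the metric best suited to your business needs, here is a hedged sketch; it assumes a completed v1 `AutoMLRun` object named `automl_run` and uses `norm_macro_recall` purely as an example of a non-primary metric.

```python
# Sketch only: `automl_run` is assumed to be a completed AutoMLRun (v1 SDK);
# the metric name is just an example of a non-primary metric.
target_metric = "norm_macro_recall"

best_child, best_value = None, float("-inf")
for child in automl_run.get_children():
    value = child.get_metrics().get(target_metric)
    if value is not None and value > best_value:
        best_child, best_value = child, value

if best_child:
    print(f"Best child run by {target_metric}: {best_child.id} ({best_value:.4f})")
```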
-### Primary metrics for regression scenarios
-
+#### Metrics for regression scenarios
+
Metrics like `r2_score` and `spearman_correlation` can better represent the quality of the model when the scale of the value-to-predict covers many orders of magnitude. Take salary estimation, for instance: many people earn between $20k and $100k, but the scale stretches much higher, with some salaries in the $100M range. In this case, `normalized_mean_absolute_error` and `normalized_root_mean_squared_error` would treat a $20k prediction error the same for a worker with a $30k salary as for a worker making $20M. In reality, predicting only $20k off from a $20M salary is very close (a small 0.1% relative difference), whereas $20k off from $30k is not close (a large 67% relative difference). `normalized_mean_absolute_error` and `normalized_root_mean_squared_error` are useful when the values to predict are on a similar scale.
Metrics like `r2_score` and `spearman_correlation` can better represent the qual
| `r2_score` | Airline delay, Salary estimation, Bug resolution time |
| `normalized_mean_absolute_error` | |
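The salary example above can be made concrete with a few lines of arithmetic. This sketch assumes, as a simplification, that the normalized errors divide the absolute error by the range of the target column; the numbers are the illustrative ones from the paragraph.

```python
# Illustrative arithmetic only; assumes normalization by the target's range.
cases = [(30_000, 20_000), (20_000_000, 20_000)]    # (true salary, absolute error)
target_range = 100_000_000 - 20_000                 # assumed salary range

for true_value, abs_error in cases:
    relative = abs_error / true_value
    normalized = abs_error / target_range
    print(f"true=${true_value:>12,}  relative error={relative:6.1%}  "
          f"normalized error={normalized:.5f}")
```

Both cases produce the same tiny normalized error, while the relative errors differ by roughly three orders of magnitude, which is exactly the mismatch the paragraph describes.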
-### Primary metrics for time series forecasting scenarios
-
-See regression notes, above.
+#### Metrics for time series forecasting scenarios
+The recommendations are similar to those noted for regression scenarios.
| Metric | Example use case(s) | | | - |
machine-learning How To Connect Data Ui https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-connect-data-ui.md
To create a dataset in the studio:
1. Select **Next** to open the **Datastore and file selection** form. On this form you select where to keep your dataset after creation, as well as select what data files to use for your dataset.
1. Enable skip validation if your data is in a virtual network. Learn more about [virtual network isolation and privacy](how-to-enable-studio-virtual-network.md).
1. For Tabular datasets, you can specify a 'timeseries' trait to enable time related operations on your dataset. Learn how to [add the timeseries trait to your dataset](how-to-monitor-datasets.md#studio-dataset).
-1. Select **Next** to populate the **Settings and preview** and **Schema** forms; they are intelligently populated based on file type and you can further configure your dataset prior to creation on these forms.
+1. Select **Next** to populate the **Settings and preview** and **Schema** forms; they are intelligently populated based on file type and you can further configure your dataset prior to creation on these forms. You can also indicate on this form if your data contains multi-line data.
1. Select **Next** to review the **Confirm details** form. Check your selections and create an optional data profile for your dataset. Learn more about [data profiling](#profile).
1. Select **Create** to complete your dataset creation.
machine-learning Tutorial First Experiment Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-first-experiment-automated-ml.md
Previously updated : 06/11/2021 Last updated : 07/01/2021 # Customer intent: As a non-coding data scientist, I want to use automated machine learning techniques so that I can build a classification model.
You won't write any code in this tutorial, you'll use the studio interface to pe
> [!div class="checklist"]
> * Create an Azure Machine Learning workspace.
> * Run an automated machine learning experiment.
-> * View experiment details.
-> * Deploy the model.
+> * Explore model details.
+> * Deploy the recommended model.
Also try automated machine learning for these other model types:
Before you configure your experiment, upload your data file to your workspace in
1. Select **Next** on the bottom left, to upload it to the default container that was automatically set up during your workspace creation.
- When the upload is complete, the Settings and preview form is pre-populated based on the file type.
+ When the upload is complete, the **Settings and preview** form is pre-populated based on the file type.
1. Verify that the **Settings and preview** form is populated as follows and select **Next**.
After you load and configure your data, you can set up your experiment. This set
-|| Compute name | A unique name that identifies your compute context. | automl-compute Min / Max nodes| To profile data, you must specify 1 or more nodes.|Min nodes: 1<br>Max nodes: 6
- Idle seconds before scale down | Idle time before the cluster is automatically scaled down to the minimum node count.|120 (default)
+ Idle seconds before scale down | Idle time before the cluster is automatically scaled down to the minimum node count.|1800 (default)
Advanced settings | Settings to configure and authorize a virtual network for your experiment.| None

1. Select **Create** to create your compute target.
After you load and configure your data, you can set up your experiment. This set
1. Select **Next**.
-1. On the **Task type and settings** form, complete the setup for your automated ML experiment by specifying the machine learning task type and configuration settings.
+1. On the **Select task and settings** form, complete the setup for your automated ML experiment by specifying the machine learning task type and configuration settings.
1. Select **Classification** as the machine learning task type.
After you load and configure your data, you can set up your experiment. This set
Select **Save**.
-1. Select **Finish** to run the experiment. The **Run Detail** screen opens with the **Run status** at the top as the experiment preparation begins. This status updates as the experiment progresses. Notifications also appear in the top right corner of the studio, to inform you of the status of your experiment.
+1. Select **Finish** to run the experiment. The **Run Detail** screen opens with the **Run status** at the top as the experiment preparation begins. This status updates as the experiment progresses. Notifications also appear in the top right corner of the studio to inform you of the status of your experiment.
>[!IMPORTANT]
> Preparation of the experiment run takes **10-15 minutes**.
marketplace Azure App Apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-app-apis.md
Previously updated : 06/01/2021 Last updated : 07/01/2021 # Partner Center submission API to onboard Azure apps in Partner Center
https://apidocs.microsoft.com/services/partneringestion/
## Next steps

* [Create an Azure Container technical asset](azure-container-technical-assets.md)
-* [Create an Azure Container offer](azure-container-offer-setup.md)
+* [Create an Azure Container offer](azure-container-offer-setup.md)
marketplace Azure App Metered Billing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-app-metered-billing.md
description: This documentation is a guide for ISVs publishing Azure application
Previously updated : 04/22/2020 Last updated : 07/01/2021
marketplace Azure App Review Feedback https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-app-review-feedback.md
description: Handle feedback for your Azure application offer from the Microsoft
Previously updated : 11/11/2019 Last updated : 07/01/2021
marketplace Marketplace Solution Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-solution-templates.md
Previously updated : 04/22/2020 Last updated : 07/01/2021 # Publishing guide for Azure applications solution template offers
If you haven't already done so, learn how to [Grow your cloud business with Azur
To register for and start working in Partner Center:

- [Sign in to Partner Center](https://partner.microsoft.com/dashboard/account/v3/enrollment/introduction/partnership) to create or complete your offer.
-- See [Create an Azure application offer](./azure-app-offer-setup.md) for more information.
+- See [Create an Azure application offer](./azure-app-offer-setup.md) for more information.
migrate Onboard To Azure Arc With Azure Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/onboard-to-azure-arc-with-azure-migrate.md
Unable to connect to server. Either you have provided incorrect credentials on t
**Recommended actions**

- Ensure that the impacted server has the latest kernel and OS updates installed.
- Ensure that there is no network latency between the appliance and the server. It is recommended to have the appliance and source server on the same domain to avoid latency issues.
-- Connect to the impacted server from the appliance and run the commands [documented here](./troubleshoot-appliance-discovery.md) to check if they return null or empty data.
+- Connect to the impacted server from the appliance and run the commands [documented here](./troubleshoot-appliance.md) to check if they return null or empty data.
- If the issue persists, submit a Microsoft support case providing the appliance machine ID (available in the footer of the appliance configuration manager).

### Error 60108 - SoftwareInventoryCredentialNotAssociated
migrate Troubleshoot Appliance Discovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/troubleshoot-appliance-discovery.md
- Title: Troubleshoot Azure Migrate appliance deployment and discovery
-description: Get help with appliance deployment and server discovery.
--
-ms.
- Previously updated : 01/02/2020---
-# Troubleshoot the Azure Migrate appliance and discovery
-
-This article helps you troubleshoot issues when deploying the [Azure Migrate](migrate-services-overview.md) appliance, and using the appliance to discover on-premises servers.
-
-## What's supported?
-
-[Review](migrate-appliance.md) the appliance support requirements.
-
-## "Invalid OVF manifest entry"
-
-If you receive the error "The provided manifest file is invalid: Invalid OVF manifest entry", do the following:
-
-1. Verify that the Azure Migrate appliance OVA file is downloaded correctly by checking its hash value. [Learn more](./tutorial-discover-vmware.md). If the hash value doesn't match, download the OVA file again and retry the deployment.
-2. If deployment still fails, and you're using the VMware vSphere client to deploy the OVF file, try deploying it through the vSphere web client. If deployment still fails, try using a different web browser.
-3. If you're using the vSphere web client and trying to deploy it on vCenter Server 6.5 or 6.7, try to deploy the OVA directly on the ESXi host:
- - Connect to the ESXi host directly (instead of vCenter Server) with the web client (https://<*host IP Address*>/ui).
- - In **Home** > **Inventory**, select **File** > **Deploy OVF template**. Browse to the OVA and complete the deployment.
-4. If the deployment still fails, contact Azure Migrate support.
-
-## Can't connect to the internet
-
-This can happen if the appliance server is behind a proxy.
--- Make sure you provide the authorization credentials if the proxy needs them.-- If you're using a URL-based firewall proxy to control outbound connectivity, add [these URLs](migrate-appliance.md#url-access) to an allowlist.-- If you're using an intercepting proxy to connect to the internet, import the proxy certificate onto the appliance using [these steps](./migrate-appliance.md).-
-## Can't sign into Azure from the appliance web app
-
-The error "Sorry, but we're having trouble signing you in" appears if you're using the incorrect Azure account to sign into Azure. This error occurs for a couple of reasons:
--- If you sign into the appliance web application for the public cloud, using user account credentials for the Government cloud portal.-- If you sign into the appliance web application for the government cloud using user account credentials for the private cloud portal.-
-Ensure you're using the correct credentials.
-
-## Date/time synchronization error
-
-An error about date and time synchronization (802) indicates that the server clock might be out of synchronization with the current time by more than five minutes. Change the clock time on the collector server to match the current time:
-
-1. Open an admin command prompt on the server.
-2. To check the time zone, run **w32tm /tz**.
-3. To synchronize the time, run **w32tm /resync**.
-
-## "UnableToConnectToServer"
-
-If you get this connection error, you might be unable to connect to vCenter Server *Servername*.com:9443. The error details indicate that there's no endpoint listening at `https://\*servername*.com:9443/sdk` that can accept the message.
--- Check whether you're running the latest version of the appliance. If you're not, upgrade the appliance to the [latest version](./migrate-appliance.md).-- If the issue still occurs in the latest version, the appliance might be unable to resolve the specified vCenter Server name, or the specified port might be wrong. By default, if the port is not specified, the collector will try to connect to port number 443.-
- 1. Ping *Servername*.com from the appliance.
- 2. If step 1 fails, try to connect to the vCenter server using the IP address.
- 3. Identify the correct port number to connect to vCenter Server.
- 4. Verify that vCenter Server is up and running.
-
-## Error 60052/60039: Appliance might not be registered
--- Error 60052, "The appliance might not be registered successfully to the project" occurs if the Azure account used to register the appliance has insufficient permissions.
- - Make sure that the Azure user account used to register the appliance has at least Contributor permissions on the subscription.
- - [Learn more](./migrate-appliance.md#appliancevmware) about required Azure roles and permissions.
-- Error 60039, "The appliance might not be registered successfully to the project" can occur if registration fails because the project used to the register the appliance can't be found.
- - In the Azure portal and check whether the project exists in the resource group.
- - If the project doesn't exist, create a new project in your resource group and register the appliance again. [Learn how to](./create-manage-projects.md#create-a-project-for-the-first-time) create a new project.
-
-## Error 60030/60031: Key Vault management operation failed
-
-If you receive the error 60030 or 60031, "An Azure Key Vault management operation failed", do the following:
--- Make sure the Azure user account used to register the appliance has at least Contributor permissions on the subscription.-- Make sure the account has access to the key vault specified in the error message, and then retry the operation.-- If the issue persists, contact Microsoft support.-- [Learn more](./migrate-appliance.md#appliancevmware) about the required Azure roles and permissions.-
-## Error 60028: Discovery couldn't be initiated
-
-Error 60028: "Discovery couldn't be initiated because of an error. The operation failed for the specified list of hosts or clusters" indicates that discovery couldn't be started on the hosts listed in the error because of a problem in accessing or retrieving server information. The rest of the hosts were successfully added.
--- Add the hosts listed in the error again, using the **Add host** option.-- If there's a validation error, review the remediation guidance to fix the errors, and then try the **Save and start discovery** option again.-
-## Error 60025: Azure AD operation failed
-
-Error 60025: "An Azure AD operation failed. The error occurred while creating or updating the Azure AD application" occurs when the Azure user account used to initiate the discovery is different from the account used to register the appliance. Do one of the following:
--- Ensure that the user account initiating the discovery is same as the one used to register the appliance.-- Provide Azure Active Directory application access permissions to the user account for which the discovery operation is failing.-- Delete the resource group previously created for the project. Create another resource group to start again.-- [Learn more](./migrate-appliance.md#appliancevmware) about Azure Active Directory application permissions.-
-## Error 50004: Can't connect to host or cluster
-
-Error 50004: "Can't connect to a host or cluster because the server name can't be resolved. WinRM error code: 0x803381B9" might occur if the Azure DNS service for the appliance can't resolve the cluster or host name you provided.
--- If you see this error on the cluster, cluster FQDN.-- You might also see this error for hosts in a cluster. This indicates that the appliance can connect to the cluster, but the cluster returns host names that aren't FQDNs. To resolve this error, update the hosts file on the appliance by adding a mapping of the IP address and host names:
- 1. Open Notepad as an admin.
- 2. Open the C:\Windows\System32\Drivers\etc\hosts file.
- 3. Add the IP address and host name in a row. Repeat for each host or cluster where you see this error.
- 4. Save and close the hosts file.
- 5. Check whether the appliance can connect to the hosts, using the appliance management app. After 30 minutes, you should see the latest information for these hosts in the Azure portal.
-
-## Error 60001: Unable to connect to server
--- Ensure there is connectivity from the appliance to the server-- If it is a linux server, ensure password-based authentication is enabled using the following steps:
- 1. Log in to the linux server and open the ssh configuration file using the command 'vi /etc/ssh/sshd_config'
- 2. Set "PasswordAuthentication" option to yes. Save the file.
- 3. Restart ssh service by running "service sshd restart"
-- If it is a windows server, ensure the port 5985 is open to allow for remote WMI calls.-- If you are discovering a GCP linux server and using a root user, use the following commands to change the default setting for root login
- 1. Log in to the linux server and open the ssh configuration file using the command 'vi /etc/ssh/sshd_config'
- 2. Set "PermitRootLogin" option to yes.
- 3. Restart ssh service by running "service sshd restart"
-
-## Error: No suitable authentication method found
-
-Ensure password-based authentication is enabled on the linux server using the following steps:
- 1. Log in to the linux server and open the ssh configuration file using the command 'vi /etc/ssh/sshd_config'
- 2. Set "PasswordAuthentication" option to yes. Save the file.
- 3. Restart ssh service by running "service sshd restart"
-
-## Discovered servers not in portal
-
-If discovery state is "Discovery in progress", but don't yet see the servers in the portal, wait a few minutes:
--- It takes around 15 minutes for a server on VMware.-- It takes around two minutes for each added host for servers on Hyper-V discovery.-
-If you wait and the state doesn't change, select **Refresh** on the **Servers** tab. This should show the count of the discovered servers in Azure Migrate: Discovery and assessment and Azure Migrate: Server Migration.
-
-If this doesn't work and you're discovering VMware servers:
--- Verify that the vCenter account you specified has permissions set correctly, with access to at least one server.-- Azure Migrate can't discover servers on VMware if the vCenter account has access granted at vCenter VM folder level. [Learn more](set-discovery-scope.md) about scoping discovery.-
-## Server data not in portal
-
-If discovered servers don't appear in the portal or if the server data is outdated, wait a few minutes. It takes up to 30 minutes for changes in discovered server configuration data to appear in the portal. It may take a few hours for changes in software inventory data to appear. If there's no data after this time, try refreshing, as follows
-
-1. In **Windows, Linux and SQL Servers** > **Azure Migrate: Discovery and assessment**, select **Overview**.
-2. Under **Manage**, select **Agent Health**.
-3. Select **Refresh agent**.
-4. Wait for the refresh operation to complete. You should now see up-to-date information.
-
-## Deleted servers appear in portal
-
-If you delete servers and they still appear in the portal, wait 30 minutes. If they still appear, refresh as described above.
-
-## Discovered software inventory and SQL Server instances and databases not in portal
-
-After you have initiated discovery on the appliance, it may take up to 24 hours to start showing the inventory data in the portal.
-
-If you have not provided Windows authentication or SQL Server authentication credentials on the appliance configuration manager, then add the credentials so that the appliance can use them to connect to respective SQL Server instances.
-
-Once connected, appliance gathers configuration and performance data of SQL Server instances and databases. The SQL Server configuration data is updated once every 24 hours and the performance data is captured every 30 seconds. Hence any change to the properties of the SQL Server instance and databases such as database status, compatibility level etc. can take up to 24 hours to update on the portal.
-
-## SQL Server instance is showing up in "Not connected" state on portal
-
-To view the issues encountered during discovery of SQL Server instances and databases please click on "Not connected" status in connection status column on 'Discovered servers' page in your project.
-
-Creating assessment on top of servers containing SQL instances that were not discovered completely or are in not connected state, may lead to readiness being marked as "unknown".
-
-## I do not see performance data for some network adapters on my physical servers
-
-This can happen if the physical server has Hyper-V virtualization enabled. Due to a product gap, the network throughput is captured on the virtual network adapters discovered.
-
-## Error: The file uploaded is not in the expected format
-
-Some tools have regional settings that create the CSV file with semi-colon as a delimiter. Please change the settings to ensure the delimiter is a comma.
-
-## I imported a CSV but I see "Discovery is in progress"
-
-This status appears if your CSV upload failed due to a validation failure. Try to import the CSV again. You can download the error report of the previous upload and follow the remediation guidance in the file to fix the errors. The error report can be downloaded from the 'Import Details' section on 'Discover servers' page.
-
-## Do not see software inventory details even after updating guest credentials
-
-The software inventory discovery runs once every 24 hours. If you would like to see the details immediately, refresh as follows. This may take a few minutes depending on the no. of servers discovered.
-
-1. In **Windows, Linux and SQL Servers** > **Azure Migrate: Discovery and assessment**, select **Overview**.
-2. Under **Manage**, select **Agent Health**.
-3. Select **Refresh agent**.
-4. Wait for the refresh operation to complete. You should now see up-to-date information.
-
-## Unable to export software inventory
-
-Ensure the user downloading the inventory from the portal has Contributor privileges on the subscription.
-
-## No suitable authentication method found to complete authentication (publickey)
-
-Key based authentication will not work, use password authentication.
-
-## Common app discovery errors
-
-Azure Migrate supports discovery of software inventory, using Azure Migrate: Discovery and assessment. App discovery is currently supported for VMware only. [Learn more](how-to-discover-applications.md) about the requirements and steps for setting up app discovery.
-
-Typical app discovery errors are summarized in the table.
-
-| **Error** | **Cause** | **Action** |
-|--|--|--|
-| 9000: VMware tool status cannot be detected. | VMware tools might not be installed or is corrupted. | Ensure VMware tools is installed and running on the server. |
-| 9001: VMware tools is not installed. | VMware tools might not be installed or is corrupted. | Ensure VMware tools is installed and running on the server. |
-| 9002: VMware tools is not running. | VMware tools might not be installed or is corrupted. | Ensure VMware tools is installed and running on the server. |
-| 9003: Operating system type not supported for guest server discovery. | Operating system running on the server is neither Windows nor Linux. | Supported operating system types are Windows and Linux only. If the server is indeed Windows or Linux, check the operating system type specified in vCenter Server. |
-| 9004: Server is not running. | Server is powered off. | Ensure the server is powered on. |
-| 9005: Operating system type not supported for guest server discovery. | Operating system type not supported for guest server discovery. | Supported operating system types are Windows and Linux only. |
-| 9006: The URL to download the metadata file from guest is empty. | This could happen if the discovery agent is not working as expected. | The issue should automatically resolve in 24 hours. If the issue persists, contact Microsoft Support. |
-| 9007: Process running the discovery task in the guest server is not found. | This could happen if the discovery agent is not working properly. | The issue should automatically resolve in 24 hours. If the issue persists, contact Microsoft Support. |
-| 9008: Guest server process status cannot be retrieved. | The issue can occur due to an internal error. | The issue should automatically resolve in 24 hours. If the issue persists, contact Microsoft Support. |
-| 9009: Windows UAC has prevented discovery task execution on the server. | Windows User Account Control (UAC) settings on the server are restrictive and prevent discovery of installed software inventory. | In 'User Account Control' settings on the server, configure the UAC setting to be at one of the lower two levels. |
-| 9010: Server is powered off. | Server is powered off. | Ensure the server is powered on. |
-| 9011: Discovered metadata file not found in guest server file system. | The issue can occur due to an internal error. | The issue should automatically resolve in 24 hours. If the issue persists, contact Microsoft Support. |
-| 9012: Discovered metadata file is empty. | The issue can occur due to an internal error. | The issue should automatically resolve in 24 hours. If the issue persists, contact Microsoft Support. |
-| 9013: A new temporary profile is created for every login. | A new temporary profile is created for every login to the server on VMware. | Contact Microsoft Support for a resolution. |
-| 9014: Unable to retrieve metadata from guest server file system. | No connectivity to ESXi host | Ensure the appliance can connect to port 443 on the ESXi host running the server |
-| 9015: Guest Operations role is not enabled on the vCenter user account | Guest Operations role is not enabled on the vCenter user account. | Ensure Guest Operations role is enabled on the vCenter user account. |
-| 9016: Unable to discover as guest operations agent is out of date. | VMware tools is not properly installed or is not up to date. | Ensure the VMware tools is properly installed and up to date. |
-| 9017: File with discovered metadata is not found on the server. | The issue can occur due to an internal error. | Contact Microsoft Support for a resolution. |
-| 9018: PowerShell is not installed in the Guest servers. | PowerShell is not available in the guest server. | Install PowerShell in the guest server. |
-| 9019: Unable to discover due to guest server operation failures. | VMware Guest operation failed on the server. | Ensure that the server credentials are valid and user name provided in the guest server credentials is in UPN format. |
-| 9020: File creation permission is denied. | The role associated to the user or the group policy is restricting the user from creating the file in folder | Check if the guest user provided has create permission for the file in folder. See **Notifications** in Azure Migrate: Discovery and assessment for the name of the folder. |
-| 9021: Unable to create file in System Temp path. | VMware tool reports System Temp path instead of Users Temp Path. | Upgrade your VMware tool version above 10287 (NGC/VI Client format). |
-| 9022: Access to WMI object is denied. | The role associated to the user or the group policy is restricting the user from accessing WMI object. | Please contact Microsoft Support. |
-| 9023: Unable to run PowerShell as SystemRoot environment variable value is empty. | The value of SystemRoot environment variable is empty for the guest server. | Contact Microsoft Support for a resolution. |
-| 9024: Unable to discover as TEMP environment variable value is empty. | The value of TEMP environment variable is empty for the guest server. | Please contact Microsoft Support. |
-| 9025: PowerShell is corrupted in the guest servers. | PowerShell is corrupted in the guest server. | Reinstall PowerShell in the guest server and verify PowerShell can be run on the guest server. |
-| 9026: Unable to run guest operations on the server. | Server state does not allow guest operations to be run on the server. | Contact Microsoft Support for a resolution. |
-| 9027: Guest operations agent is not running in the server. | Failed to contact the guest operations agent running inside the virtual server. | Contact Microsoft Support for a resolution. |
-| 9028: File cannot be created due to insufficient disk storage in server. | Not enough space on the disk. | Ensure enough space is available in the disk storage of the server. |
-| 9029: No access to PowerShell on the guest server credential provided. | Access to PowerShell is not available for the user. | Ensure the user added on appliance can access PowerShell on the guest server. |
-| 9030: Unable to gather discovered metadata as ESXi host is disconnected. | The ESXi host is in a disconnected state. | Ensure the ESXi host running the server is connected. |
-| 9031: Unable to gather discovered metadata as the ESXi host is not responding. | Remote host is in Invalid state. | Ensure the ESXi host running the server is running and connected. |
-| 9032: Unable to discover due to an internal error. | The issue can occur due to an internal error. | Contact Microsoft Support for a resolution. |
-| 9033: Unable to discover as the server username contains invalid characters. | Invalid characters were detected in the username. | Provide the server credential again ensuring there are no invalid characters. |
-| 9034: Username provided is not in UPN format. | Username is not in UPN format. | Ensure that the username is in User Principal Name (UPN) format. |
-| 9035: Unable to discover as PowerShell language mode is not set to 'Full Language'. | Language mode for PowerShell in guest server is not set to full language. | Ensure that PowerShell language mode is set to 'Full Language'. |
-| 9037: Data collection paused temporarily as server response time is too high. | The discovered server is taking too long to respond | No action required. A retry will be attempted in 24 hours for software inventory discovery and 3 hours for dependency analysis (agentless). |
-| 10000: Operating system type is not supported. | Operating system running on the server is neither Windows nor Linux. | Supported operating system types are Windows and Linux only. |
-| 10001: Script for server discovery is not found on the appliance. | Discovery is not working as expected. | Contact Microsoft Support for a resolution. |
-| 10002: Discovery task has not completed in time. | Discovery agent is not working as expected. | The issue should automatically resolve in 24 hours. If the issue persists, contact Microsoft Support. |
-| 10003: Process executing the discovery task exited with an error. | Process executing the discovery task exited with an error. | The issue should automatically resolve in 24 hours. If the issue still persists, please contact Microsoft Support. |
-| 10004: Credential not provided for the guest operating system type. | Credentials to access servers of this OS type were not provided in the Azure Migrate appliance. | Add credentials for servers on the appliance |
-| 10005: Credentials provided are not valid. | Credentials provided for appliance to access the server are incorrect. | Update the credentials provided in the appliance and ensure that the server is accessible using the credentials. |
-| 10006: Guest OS type not supported by credential store. | Operating system running on the server is neither Windows nor Linux. | Supported operating system types are Windows and Linux only. |
-| 10007: Unable to process the metadata discovered. | Error occurred while trying to deserialize the JSON. | Contact Microsoft Support for a resolution. |
-| 10008: Unable to create a file on the server. | The issue may occur due to an internal error. | Contact Microsoft Support for a resolution. |
-| 10009: Unable to write discovered metadata to a file on the server. | The issue can occur due to an internal error. | Contact Microsoft Support for a resolution. |
-
-## Common SQL Server instances and database discovery errors
-
-Azure Migrate supports discovery of SQL Server instances and databases running on on-premises machines, using Azure Migrate: Discovery and assessment. SQL discovery is currently supported for VMware only. Refer to the [Discovery](tutorial-discover-vmware.md) tutorial to get started.
-
-Typical SQL discovery errors are summarized in the table.
-
-| **Error** | **Cause** | **Action** | **Guide**
-|--|--|--|--|
-|30000: Credentials associated with this SQL Server didn't work.|Either manually associated credentials are invalid or auto associated credentials can no longer access the SQL Server.|Add credentials for SQL Server on the appliance and wait until the next SQL discovery cycle or force refresh.| - |
-|30001: Unable to connect to SQL Server from appliance.|1. Appliance doesn't have network line of sight to SQL Server.<br/>2. Firewall blocking connection between SQL Server and appliance.|1. Make SQL Server reachable from appliance.<br/>2. Allow incoming connections from appliance to SQL Server.| - |
-|30003: Certificate is not trusted.|A trusted certificate is not installed on the computer running SQL Server.|Please set up a trusted certificate on the server. [Learn more](/troubleshoot/sql/connect/error-message-when-you-connect)| [View](/troubleshoot/sql/connect/error-message-when-you-connect) |
-|30004: Insufficient Permissions.|This error could occur due to the lack of permissions required to scan SQL Server instances. |Grant sysadmin role to the credentials/ account provided on the appliance for discovering SQL Server instances and databases. [Learn more](/sql/t-sql/statements/grant-server-permissions-transact-sql)| [View](/sql/t-sql/statements/grant-server-permissions-transact-sql) |
-|30005: SQL Server login failed to connect because of a problem with its default master database.|Either the database itself is invalid or the login lacks CONNECT permission on the database.|Use ALTER LOGIN to set the default database to master database.<br/>Grant sysadmin role to the credentials/ account provided on the appliance for discovering SQL Server instances and databases. [Learn more](/sql/relational-databases/errors-events/mssqlserver-4064-database-engine-error)| [View](/sql/relational-databases/errors-events/mssqlserver-4064-database-engine-error) |
-|30006: SQL Server login cannot be used with Windows Authentication.|1. The login may be a SQL Server login but the server only accepts Windows Authentication.<br/>2. You are trying to connect using SQL Server Authentication but the login used does not exist on SQL Server.<br/>3. The login may use Windows Authentication but the login is an unrecognized Windows principal. An unrecognized Windows principal means that the login cannot be verified by Windows. This could be because the Windows login is from an untrusted domain.|If you are trying to connect using SQL Server Authentication, verify that SQL Server is configured in Mixed Authentication Mode and SQL Server login exists.<br/>If you are trying to connect using Windows Authentication, verify that you are properly logged into the correct domain. [Learn more](/sql/relational-databases/errors-events/mssqlserver-18452-database-engine-error)| [View](/sql/relational-databases/errors-events/mssqlserver-18452-database-engine-error) |
-|30007: Password expired.|The password of the account has expired.|The SQL Server login password may have expired, re-set the password and/ or extend the password expiration date. [Learn more](/sql/relational-databases/native-client/features/changing-passwords-programmatically)| [View](/sql/relational-databases/native-client/features/changing-passwords-programmatically) |
-|30008: Password must be changed.|The password of the account must be changed.|Change the password of the credential provided for SQL Server discovery. [Learn more](/previous-versions/sql/sql-server-2008-r2/cc645934(v=sql.105))| [View](/previous-versions/sql/sql-server-2008-r2/cc645934(v=sql.105)) |
-|30009: An internal error occurred.|Internal error occurred while discovering SQL Server instances and databases. |Please contact Microsoft support if the issue persists.| - |
-|30010: No databases found.|Unable to find any databases from the selected server instance.|Grant sysadmin role to the credentials/ account provided on the appliance for discovering SQL databases.| - |
-|30011: An internal error occurred while assessing a SQL instance or database.|Internal error occurred while performing assessment.|Please contact Microsoft support if the issue persists.| - |
-|30012: SQL connection failed.|1. The firewall on the server has refused the connection.<br/>2. The SQL Server Browser service (sqlbrowser) is not started.<br/>3. SQL Server did not respond to the client request because the server is probably not started.<br/>4. The SQL Server client cannot connect to the server. This error could occur because the server is not configured to accept remote connections.<br/>5. The SQL Server client cannot connect to the server. The error could occur because either the client cannot resolve the name of the server or the name of the server is incorrect.<br/>6. The TCP, or named pipe protocols are not enabled.<br/>7. Specified SQL Server instance name is not valid.|Please use [this](https://go.microsoft.com/fwlink/?linkid=2153317) interactive user guide to troubleshoot the connectivity issue. Please wait for 24 hours after following the guide for the data to update in the service. If the issue still persists please contact Microsoft support.| [View](https://go.microsoft.com/fwlink/?linkid=2153317) |
-|30013: An error occurred while establishing a connection to the SQL server instance.|1. SQL Server's name cannot be resolved from appliance.<br/>2. SQL Server does not allow remote connections.|If you can ping SQL server from appliance, please wait 24 hours to check if this issue auto resolves. If it doesn't, please contact Microsoft support. [Learn more](/sql/relational-databases/errors-events/mssqlserver-53-database-engine-error)| [View](/sql/relational-databases/errors-events/mssqlserver-53-database-engine-error) |
-|30014: Username or password is invalid.| This error could occur because of an authentication failure that involves a bad password or username.|Please provide a credential with a valid Username and Password. [Learn more](/sql/relational-databases/errors-events/mssqlserver-18456-database-engine-error)| [View](/sql/relational-databases/errors-events/mssqlserver-18456-database-engine-error) |
-|30015: An internal error occurred while discovering the SQL instance.|An internal error occurred while discovering the SQL instance.|Please contact Microsoft support if the issue persists.| - |
-|30016: Connection to instance '%instance;' failed due to a timeout.| This could occur if firewall on the server refuses the connection.|Verify whether firewall on the SQL Server is configured to accept connections. If the error persists, please contact Microsoft support. [Learn more](/sql/relational-databases/errors-events/mssqlserver-neg2-database-engine-error)| [View](/sql/relational-databases/errors-events/mssqlserver-neg2-database-engine-error) |
-|30017: Internal error occurred.|Unhandled exception.|Please contact Microsoft support if the issue persists.| - |
-|30018: Internal error occurred.|An internal error occurred while collecting data such as Temp DB size, File size etc of the SQL instance.|Please wait for 24 hours and contact Microsoft support if the issue persists.| - |
-|30019: An internal error occurred.|An internal error occurred while collecting performance metrics such as memory utilization, etc. of a database or an instance.|Please wait for 24 hours and contact Microsoft support if the issue persists.| - |
-
-## Next steps
-
-Set up an appliance for [VMware](how-to-set-up-appliance-vmware.md), [Hyper-V](how-to-set-up-appliance-hyper-v.md), or [physical servers](how-to-set-up-appliance-physical.md).
migrate Troubleshoot Appliance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/troubleshoot-appliance.md
+
+ Title: Troubleshoot Azure Migrate appliance deployment and discovery
+description: Get help with appliance deployment and server discovery.
++
+ms.
+ Last updated : 07/01/2020+++
+# Troubleshoot the Azure Migrate appliance and discovery
+
+This article helps you troubleshoot issues when deploying the [Azure Migrate](migrate-services-overview.md) appliance, and using the appliance to discover on-premises servers.
+
+## What's supported?
+
+[Review](migrate-appliance.md) the appliance support requirements.
+
+## "Invalid OVF manifest entry"
+
+If you receive the error "The provided manifest file is invalid: Invalid OVF manifest entry", do the following:
+
+1. Verify that the Azure Migrate appliance OVA file is downloaded correctly by checking its hash value. [Learn more](./tutorial-discover-vmware.md). If the hash value doesn't match, download the OVA file again and retry the deployment.
+2. If deployment still fails, and you're using the VMware vSphere client to deploy the OVF file, try deploying it through the vSphere web client. If deployment still fails, try using a different web browser.
+3. If you're using the vSphere web client and trying to deploy it on vCenter Server 6.5 or 6.7, try to deploy the OVA directly on the ESXi host:
+ - Connect to the ESXi host directly (instead of vCenter Server) with the web client (https://<*host IP Address*>/ui).
+ - In **Home** > **Inventory**, select **File** > **Deploy OVF template**. Browse to the OVA and complete the deployment.
+4. If the deployment still fails, contact Azure Migrate support.
+
+## Can't connect to the internet
+
+This can happen if the appliance server is behind a proxy.
+
+- Make sure you provide the authorization credentials if the proxy needs them.
+- If you're using a URL-based firewall proxy to control outbound connectivity, add [these URLs](migrate-appliance.md#url-access) to an allowlist.
+- If you're using an intercepting proxy to connect to the internet, import the proxy certificate onto the appliance using [these steps](./migrate-appliance.md).
+
+## Can't sign into Azure from the appliance web app
+
+The error "Sorry, but we're having trouble signing you in" appears if you're using the incorrect Azure account to sign into Azure. This error occurs for a couple of reasons:
+
+- If you sign into the appliance web application for the public cloud, using user account credentials for the Government cloud portal.
+- If you sign into the appliance web application for the government cloud using user account credentials for the private cloud portal.
+
+Ensure you're using the correct credentials.
+
+## Date/time synchronization error
+
+An error about date and time synchronization (802) indicates that the server clock might be out of synchronization with the current time by more than five minutes. Change the clock time on the collector server to match the current time:
+
+1. Open an admin command prompt on the server.
+2. To check the time zone, run **w32tm /tz**.
+3. To synchronize the time, run **w32tm /resync**.
+
+## "UnableToConnectToServer"
+
+If you get this connection error, you might be unable to connect to vCenter Server *Servername*.com:9443. The error details indicate that there's no endpoint listening at `https://\*servername*.com:9443/sdk` that can accept the message.
+
+- Check whether you're running the latest version of the appliance. If you're not, upgrade the appliance to the [latest version](./migrate-appliance.md).
+- If the issue still occurs in the latest version, the appliance might be unable to resolve the specified vCenter Server name, or the specified port might be wrong. By default, if the port is not specified, the collector will try to connect to port number 443.
+
+ 1. Ping *Servername*.com from the appliance.
+ 2. If step 1 fails, try to connect to the vCenter server using the IP address.
+ 3. Identify the correct port number to connect to vCenter Server.
+ 4. Verify that vCenter Server is up and running.
+
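For the connectivity checks above, a generic TCP probe run from the appliance can quickly confirm whether the vCenter endpoint is reachable. This is not part of the Azure Migrate tooling; the server name is a placeholder, and the ports are the default 443 plus the 9443 port mentioned in the error.

```python
# Generic reachability probe; not Azure Migrate tooling. Server name is a placeholder.
import socket

vcenter = "vcenter.contoso.local"    # placeholder vCenter Server name
for port in (443, 9443):
    try:
        with socket.create_connection((vcenter, port), timeout=5):
            print(f"{vcenter}:{port} reachable")
    except OSError as err:
        print(f"{vcenter}:{port} not reachable: {err}")
```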
+## Error 60052/60039: Appliance might not be registered
+
+- Error 60052, "The appliance might not be registered successfully to the project" occurs if the Azure account used to register the appliance has insufficient permissions.
+ - Make sure that the Azure user account used to register the appliance has at least Contributor permissions on the subscription.
+ - [Learn more](./migrate-appliance.md#appliancevmware) about required Azure roles and permissions.
+- Error 60039, "The appliance might not be registered successfully to the project" can occur if registration fails because the project used to register the appliance can't be found.
+ - In the Azure portal, check whether the project exists in the resource group.
+ - If the project doesn't exist, create a new project in your resource group and register the appliance again. [Learn how to](./create-manage-projects.md#create-a-project-for-the-first-time) create a new project.
+
+## Error 60030/60031: Key Vault management operation failed
+
+If you receive the error 60030 or 60031, "An Azure Key Vault management operation failed", do the following:
+
+- Make sure the Azure user account used to register the appliance has at least Contributor permissions on the subscription.
+- Make sure the account has access to the key vault specified in the error message, and then retry the operation.
+- If the issue persists, contact Microsoft support.
+- [Learn more](./migrate-appliance.md#appliancevmware) about the required Azure roles and permissions.
+
+## Error 60028: Discovery couldn't be initiated
+
+Error 60028: "Discovery couldn't be initiated because of an error. The operation failed for the specified list of hosts or clusters" indicates that discovery couldn't be started on the hosts listed in the error because of a problem in accessing or retrieving server information. The rest of the hosts were successfully added.
+
+- Add the hosts listed in the error again, using the **Add host** option.
+- If there's a validation error, review the remediation guidance to fix the errors, and then try the **Save and start discovery** option again.
+
+## Error 60025: Azure AD operation failed
+
+Error 60025: "An Azure AD operation failed. The error occurred while creating or updating the Azure AD application" occurs when the Azure user account used to initiate the discovery is different from the account used to register the appliance. Do one of the following:
+
+- Ensure that the user account initiating the discovery is the same as the one used to register the appliance.
+- Provide Azure Active Directory application access permissions to the user account for which the discovery operation is failing.
+- Delete the resource group previously created for the project. Create another resource group to start again.
+- [Learn more](./migrate-appliance.md#appliancevmware) about Azure Active Directory application permissions.
+
+## Error 50004: Can't connect to host or cluster
+
+Error 50004: "Can't connect to a host or cluster because the server name can't be resolved. WinRM error code: 0x803381B9" might occur if the Azure DNS service for the appliance can't resolve the cluster or host name you provided.
+
+- If you see this error for the cluster, try providing the cluster FQDN.
+- You might also see this error for hosts in a cluster. This indicates that the appliance can connect to the cluster, but the cluster returns host names that aren't FQDNs. To resolve this error, update the hosts file on the appliance by adding a mapping of the IP address and host names:
+ 1. Open Notepad as an admin.
+ 2. Open the C:\Windows\System32\Drivers\etc\hosts file.
+ 3. Add the IP address and host name in a row. Repeat for each host or cluster where you see this error.
+ 4. Save and close the hosts file.
+ 5. Check whether the appliance can connect to the hosts, using the appliance management app. After 30 minutes, you should see the latest information for these hosts in the Azure portal.
+
+## Error 60001: Unable to connect to server
+
+- Ensure there is connectivity from the appliance to the server.
+- If it is a Linux server, ensure password-based authentication is enabled using the following steps (a quick connectivity check is sketched after this list):
+ 1. Sign in to the Linux server and open the SSH configuration file using the command 'vi /etc/ssh/sshd_config'.
+ 2. Set the "PasswordAuthentication" option to yes. Save the file.
+ 3. Restart the SSH service by running "service sshd restart".
+- If it is a Windows server, ensure that port 5985 is open to allow remote WMI calls.
+- If you are discovering a GCP Linux server and using a root user, use the following commands to change the default setting for root login:
+ 1. Sign in to the Linux server and open the SSH configuration file using the command 'vi /etc/ssh/sshd_config'.
+ 2. Set the "PermitRootLogin" option to yes.
+ 3. Restart the SSH service by running "service sshd restart".
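As referenced in the list above, a quick way to confirm that the server accepts password-based SSH authentication is a short script run from the appliance or another machine with network access. This is a generic check, not part of the appliance; the host and credentials are placeholders, and it requires the third-party `paramiko` package.

```python
# Generic SSH check; not Azure Migrate tooling. Host and credentials are
# placeholders. Requires: pip install paramiko
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
    client.connect(
        "linux-server.contoso.local",   # placeholder host name
        username="discovery-user",      # placeholder credential
        password="<password>",
        look_for_keys=False,            # force password authentication
        allow_agent=False,
    )
    print("Password authentication succeeded")
finally:
    client.close()
```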
+
+## Error: No suitable authentication method found
+
+Ensure password-based authentication is enabled on the Linux server using the following steps:
+
+1. Sign in to the Linux server and open the SSH configuration file using the command 'vi /etc/ssh/sshd_config'.
+2. Set the "PasswordAuthentication" option to yes. Save the file.
+3. Restart the SSH service by running "service sshd restart".
++
+## Next steps
+
+Set up an appliance for [VMware](how-to-set-up-appliance-vmware.md), [Hyper-V](how-to-set-up-appliance-hyper-v.md), or [physical servers](how-to-set-up-appliance-physical.md).
migrate Troubleshoot Assessment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/troubleshoot-assessment.md
Title: Troubleshoot assessment and dependency visualization in Azure Migrate
-description: Get help with assessment and dependency visualization in Azure Migrate.
+ Title: Troubleshoot assessments in Azure Migrate
+description: Get help with assessment in Azure Migrate.
ms.
Last updated 01/02/2020
-# Troubleshoot assessment/dependency visualization
+# Troubleshoot assessment
This article helps you troubleshoot issues with assessment and dependency visualization with [Azure Migrate: Discovery and assessment](migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool).
Readiness category may be incorrectly marked as "Not Ready" in the case of a physical server.
This can happen if the physical server has Hyper-V virtualization enabled. On these servers, Azure Migrate currently discovers both the physical and virtual adapters. Hence, the number of NICs discovered is higher than the actual count.
-## Dependency visualization in Azure Government
-
-Agent-based dependency analysis is not supported in Azure Government. Please use agentless dependency analysis.
--
-## Dependencies don't show after agent install
-
-After you've installed the dependency visualization agents on on-premises VMs, Azure Migrate typically takes 15-30 minutes to display the dependencies in the portal. If you've waited for more than 30 minutes, make sure that the Microsoft Monitoring Agent (MMA) can connect to the Log Analytics workspace.
-
-For Windows VMs:
-1. In the Control Panel, start MMA.
-2. In the **Microsoft Monitoring Agent properties** > **Azure Log Analytics (OMS)**, make sure that the **Status** for the workspace is green.
-3. If the status isn't green, try removing the workspace and adding it again to MMA.
-
- ![MMA status](./media/troubleshoot-assessment/mma-properties.png)
-
-For Linux VMs, make sure that the installation commands for MMA and the dependency agent succeeded. Refer to more troubleshooting guidance [here](../azure-monitor/vm/service-map.md#post-installation-issues).
-
-## Supported operating systems
-
-- **MMS agent**: Review the supported [Windows](../azure-monitor/agents/agents-overview.md#supported-operating-systems), and [Linux](../azure-monitor/agents/agents-overview.md#supported-operating-systems) operating systems.
-- **Dependency agent**: the supported [Windows and Linux](../azure-monitor/vm/vminsights-enable-overview.md#supported-operating-systems) operating systems.
-
-## Visualize dependencies for > 1 hour
-
-With agentless dependency analysis, you can visualize dependencies or export them in a map for a duration of up to 30 days.
-
-With agent-based dependency analysis, Although Azure Migrate allows you to go back to a particular date in the last month, the maximum duration for which you can visualize the dependencies is one hour. For example, you can use the time duration functionality in the dependency map to view dependencies for yesterday, but you can view them for a one-hour period only. However, you can use Azure Monitor logs to [query the dependency data](./how-to-create-group-machine-dependencies.md) over a longer duration.
-
-## Visualized dependencies for > 10 servers
-
-In Azure Migrate, with agent-based dependency analysis, you can [visualize dependencies for groups](./how-to-create-a-group.md#refine-a-group-with-dependency-mapping) with up to 10 VMs. For larger groups, we recommend that you split the VMs into smaller groups to visualize dependencies.
--
-## Servers show "Install agent"
-
-After migrating servers with dependency visualization enabled to Azure, servers might show "Install agent" action instead of "View dependencies" due to the following behavior:
-- After migration to Azure, on-premises servers are turned off and equivalent VMs are spun up in Azure. These servers acquire a different MAC address.
-- Servers might also have a different IP address, based on whether you've retained the on-premises IP address or not.
-- If both MAC and IP addresses are different from on-premises, Azure Migrate doesn't associate the on-premises servers with any Service Map dependency data. In this case, it will show the option to install the agent rather than to view dependencies.
-- After a test migration to Azure, on-premises servers remain turned on as expected. Equivalent servers spun up in Azure acquire different MAC address and might acquire different IP addresses. Unless you block outgoing Azure Monitor log traffic from these servers, Azure Migrate won't associate the on-premises servers with any Service Map dependency data, and thus will show the option to install agents, rather than to view dependencies.
-
-## Dependencies export CSV shows "Unknown process"
-In agentless dependency analysis, the process names are captured on a best-effort basis. In certain scenarios, although the source and destination server names and the destination port are captured, it is not feasible to determine the process names at both ends of the dependency. In such cases, the process is marked as "Unknown process".
-
-## My Log Analytics workspace is not listed when trying to configure the workspace in Azure Migrate
-Azure Migrate currently supports creation of OMS workspace in East US, Southeast Asia and West Europe regions. If the workspace is created outside of Azure Migrate in any other region, it currently cannot be associated with a project.
- ## Capture network traffic
migrate Troubleshoot Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/troubleshoot-dependencies.md
+
+ Title: Troubleshoot issues with agentless and agent-based dependency analysis
+description: Get help with dependency visualization in Azure Migrate.
++
+ms.
+ Last updated : 07/01/2020++
+# Troubleshoot dependency visualization
+
+This article helps you troubleshoot issues with agent-based and agentless dependency analysis _(agentless dependency analysis is only available for VMware servers)_. [Learn more](concepts-dependency-visualization.md) about the types of dependency visualization supported in Azure Migrate.
++
+## Visualize dependencies for > 1 hour with agentless dependency analysis
+
+With agentless dependency analysis, you can visualize dependencies or export them in a map for a duration of up to 30 days.
+
+## Visualized dependencies for > 10 servers with agentless dependency analysis
+
+Azure Migrate offers a Power BI template that you can use to visualize network connections of many servers at once, and filter by process and server. [Learn more](how-to-create-group-machine-dependencies-agentless.md#visualize-network-connections-in-power-bi) on how to visualize the dependencies for many servers together.
+
+## Dependencies export CSV shows "Unknown process" with agentless dependency analysis
+In agentless dependency analysis, the process names are captured on a best-effort basis. In certain scenarios, although the source and destination server names and the destination port are captured, it is not feasible to determine the process names at both ends of the dependency. In such cases, the process is marked as "_Unknown process_".
+
+## Common agentless dependency analysis errors
+
+Azure Migrate supports agentless dependency analysis using the Azure Migrate: Discovery and assessment tool. Agentless dependency analysis is currently supported for VMware servers only. [Learn more](how-to-create-group-machine-dependencies-agentless.md) about the requirements for agentless dependency analysis.
+
+The list of agentless dependency analysis errors is summarized in the table below.
+
+> [!Note]
+> The same errors can also be encountered with software inventory, as it follows the same methodology as agentless dependency analysis to collect the required data.
+
+| **Error** | **Cause** | **Action** |
+|--|--|--|
+| **9000:** VMware tools status on the server cannot be detected | VMware tools might not be installed on the server or the installed version is corrupted. | Ensure that VMware tools later than version 10.2.1 is installed and running on the server. |
+| **9001:** VMware tools not installed on the server. | VMware tools might not be installed on the server or the installed version is corrupted. | Ensure that VMware tools later than version 10.2.1 is installed and running on the server. |
+| **9002:** VMware tools not running on the server. | VMware tools might not be installed on the server or the installed version is corrupted. | Ensure that VMware tools later than version 10.2.0 is installed and running on the server. |
+| **9003:** Operating system type running on the server is not supported. | Operating system running on the server is neither Windows nor Linux. | Only Windows and Linux OS types are supported. If the server is indeed running Windows or Linux OS, check the operating system type specified in vCenter Server. |
+| **9004:** Server is not in a running state. | Server is in a powered off state. | Ensure that the server is in a running state. |
+| **9005:** Operating system type running on the server is not supported. | Operating system running on the server is neither Windows nor Linux. | Only Windows and Linux OS types are supported. \<FetchedParameter> operating system is not supported currently. |
+| **9006:** The URL needed to download the discovery metadata file from the server is empty. | This could be a transient issue due to the discovery agent on the appliance not working as expected. | The issue should automatically resolve in the next cycle within 24 hours. If the issue persists, submit a Microsoft support case. |
+| **9007:** The process that runs the script to collect the metadata is not found in the server. | This could be a transient issue due to the discovery agent on the appliance not working as expected. | The issue should automatically resolve in the next cycle within 24 hours. If the issue persists, submit a Microsoft support case. |
+| **9008:** The status of the process running on the server to collect the metadata cannot be retrieved. | This could be a transient issue due to an internal error. | The issue should automatically resolve in the next cycle within 24 hours. If the issue persists, submit a Microsoft support case. |
+| **9009:** Windows User Account Control (UAC) is preventing the execution of discovery operations on the server. | Windows User Account Control (UAC) settings are restricting the discovery of installed applications from the server. | On the impacted server, lower the level of the 'User Account Control' settings in Control Panel. |
+| **9010:** Server is powered off. | Server is in powered off state. | Ensure that the server is in a powered on state. |
+| **9011:** The file containing the discovered metadata cannot be found on the server. | This could be a transient issue due to an internal error. | The issue should automatically resolve in the next cycle within 24 hours. If the issue persists, submit a Microsoft support case. |
+| **9012:** The file containing the discovered metadata on the server is empty. | This could be a transient issue due to an internal error. | The issue should automatically resolve in the next cycle within 24 hours. If the issue persists, submit a Microsoft support case. |
+| **9013:** A new temporary user profile is getting created on logging in the server each time. | A new temporary user profile is getting created on logging in the server each time. | Please submit a Microsoft support case to help troubleshoot this issue. |
+| **9014:** Unable to retrieve the file containing the discovered metadata due to an error encountered on the ESXi host. Error code: %ErrorCode; Details: %ErrorMessage | Encountered an error on the ESXi host \<HostName>. Error code: %ErrorCode; Details: %ErrorMessage | Ensure that port 443 is open on the ESXi host on which the server is running.|
+| **9015:** The vCenter Server user account provided for server discovery does not have Guest operations privileges enabled. | The required privileges of Guest Operations has not been enabled on the vCenter Server user account. | Ensure that the vCenter Server user account has privileges enabled for Virtual Machines > Guest Operations, in order to interact with the server and pull the required data. <br/><br/> [Learn more](tutorial-discover-vmware.md#prepare-vmware) on how to set up the vCenter Server account with required privileges. |
+| **9016:** Unable to discover the metadata as the guest operations agent on the server is outdated. | Either the VMware tools is not installed on the server or the installed version is not up-to-date. | Ensure that the VMware tools is installed and running up-to-date on the server. The VMware Tools version must be version 10.2.1 or later. |
+| **9017:** The file containing the discovered metadata cannot be found on the server. | This could be a transient issue due to an internal error. | Please submit a Microsoft support case to help troubleshoot this issue. |
+| **9018:** PowerShell is not installed on the server. | PowerShell cannot be found on the server. | Ensure that PowerShell version 2.0 or later is installed on the server.|
+| **9019:** Unable to discover the metadata due to guest operation failures on the server. | VMware guest operations failed on the server. The issue was encountered when trying the following credentials on the server: \<FriendlyNameOfCredentials>. | Ensure that the server credentials provided on the appliance are valid and the username provided in the credentials is in UPN format. (find the friendly name of the credentials tried by Azure Migrate in the possible causes) |
+| **9020:** Unable to create the file required to contain the discovered metadata on the server. | The role associated to the credentials provided on the appliance or a group policy on-premises is restricting the creation of file in the required folder. The issue was encountered when trying the following credentials on the server: <FriendlyNameOfCredentials>. | 1. Check if the credentials provided on the appliance has create file permission on the folder \<folder path/folder name> in the server. <br/>2. If the credentials provided on the appliance do not have the required permissions, either provide another set of credentials or edit an existing one. (find the friendly name of the credentials tried by Azure Migrate in the possible causes) |
+| **9021:** Unable to create the file required to contain the discovered metadata at right path on the server. | VMware tools is reporting an incorrect file path to create the file. | Ensure that VMware tools later than version 10.2.0 is installed and running on the server. |
+| **9022:** The access is denied to run the Get-WmiObject cmdlet on the server. | The role associated to the credentials provided on the appliance or a group policy on-premises is restricting access to WMI object. The issue was encountered when trying the following credentials on the server: \<FriendlyNameOfCredentials>. | 1. Check if the credentials provided on the appliance has create file Administrator privileges and has WMI enabled. <br/> 2. If the credentials provided on the appliance do not have the required permissions, either provide another set of credentials or edit an existing one. (find the friendly name of the credentials tried by Azure Migrate in the possible causes).|
+| **9023:** Unable to run PowerShell as the %SystemRoot% environment variable value is empty. | The value of %SystemRoot% environment variable is empty for the server. | 1. Check if the environment variable is returning an empty value by running echo %systemroot% command on the impacted server. <br/> 2. If issue persists, submit a Microsoft support case. |
+| **9024:** Unable to perform discovery as the %TEMP% environment variable value is empty. | The value of %TEMP% environment variable is empty for the server. | 1. Check if the environment variable is returning an empty value by running echo %temp% command on the impacted server. <br/> 2. If issue persists, submit a Microsoft support case. |
+| **9025:** Unable to perform discovery as PowerShell is corrupted on the server. | PowerShell is corrupted on the server. | Reinstall PowerShell and verify that it is running on the impacted server. |
+| **9026:** Unable to run guest operations on the server. | The current state of the server is not allowing the guest operations to be run. | 1. Ensure that the impacted server is up and running.<br/> 2. If issue persists, submit a Microsoft support case. |
+| **9027:** Unable to discover the metadata as the guest operations agent is not running on the server. | Unable to contact the guest operations agent on the server. | Ensure that VMware tools later than version 10.2.0 is installed and running on the server. |
+| **9028:** Unable to create the file required to contain the discovered metadata due to insufficient storage on the server. | There is lack of sufficient storage space on the server disk. | Ensure that enough space is available on disk storage of the impacted server. |
+| **9029:** The credentials provided on the appliance do not have access permissions to run PowerShell. | The credentials provided on the appliance do not have access permissions to run PowerShell. The issue was encountered when trying the following credentials on the server: \<FriendlyNameOfCredentials>. | 1. Ensure that the credentials provided on the appliance can access PowerShell on the server.<br/> 2. If the credentials provided on the appliance do not have the required access, either provide another set of credentials or edit an existing one. (find the friendly name of the credentials tried by Azure Migrate in the possible causes) |
+| **9030:** Unable to gather the discovered metadata as the ESXi host where the server is hosted is in a disconnected state. | The ESXi host on which server is residing is in a disconnected state. | Ensure that the ESXi host running the server is in a connected state. |
+| **9031:** Unable to gather the discovered metadata as the ESXi host where the server is hosted is not responding. | The ESXi host on which server is residing is in an invalid state. | Ensure that the ESXi host running the server is in a running and connected state. |
+| **9032:** Unable to discover due to an internal error. | The issue encountered is due to an internal error. | Follow the steps given below the table to remediate the issue. If the issue persists, open a Microsoft support case. |
+| **9033:** Unable to discover as the username of the credentials provided on the appliance for the server have invalid characters. | The credentials provided on the appliance contain invalid characters in the username. The issue was encountered when trying the following credentials on the server: \<FriendlyNameOfCredentials>. | Ensure that the credentials provided on the appliance do not have any invalid characters in the username. You can go back to the appliance configuration manager to edit the credentials. (find the friendly name of the credentials tried by Azure Migrate in the possible causes). |
+| **9034:** Unable to discover as the username of the credentials provided on the appliance for the server is not in UPN format. | The credentials provided on the appliance do not have the username in the UPN format. The issue was encountered when trying the following credentials on the server: \<FriendlyNameOfCredentials>. | Ensure that the credentials provided on the appliance have their username in the User Principal Name (UPN) format. You can go back to the appliance configuration manager to edit the credentials. (find the friendly name of the credentials tried by Azure Migrate in the possible causes). |
+| **9035:** Unable to discover as PowerShell language mode in not set correctly. | PowerShell language mode is not set to 'Full language'. | Ensure that PowerShell language mode is set to 'Full Language'. |
+| **9036:** Unable to discover as the username of the credentials provided on the appliance for the server is not in UPN format. | The credentials provided on the appliance do not have the username in the UPN format. The issue was encountered when trying the following credentials on the server: \<FriendlyNameOfCredentials>. | Ensure that the credentials provided on the appliance have their username in the User Principal Name (UPN) format. You can go back to the appliance configuration manager to edit the credentials. (find the friendly name of the credentials tried by Azure Migrate in the possible causes). |
+| **9037:** The metadata collection is temporarily paused due to high response time from the server. | The server is taking too long to respond. | The issue should automatically resolve in the next cycle within 24 hours. If the issue persists, submit a Microsoft support case. |
+| **10000:** Operating system type running on the server is not supported. | Operating system running on the server is neither Windows nor Linux. | Only Windows and Linux OS types are supported. \<GuestOSName> operating system is not supported currently. |
+| **10001:** The script required to gather discovery metadata is not found on the server. | The script required to perform discovery may have been deleted or removed from the expected location. | Please submit a Microsoft support case to help troubleshoot this issue. |
+| **10002:** The discovery operations timed out on the server. | This could be a transient issue due to the discovery agent on the appliance not working as expected. | The issue should automatically resolve in the next cycle within 24 hours.|
+| **10003:** The process executing the discovery operations exited with an error. | The process executing the discovery operations exited abruptly due to an error.| The issue should automatically resolve in the next cycle within 24 hours. If the issue persists, submit a Microsoft support case. |
+| **10004:** Credentials not provided on the appliance for the server OS type. | The credentials for the server OS type were not added on the appliance. | 1. Ensure that you add the credentials for the OS type of the impacted server on the appliance.<br/> 2. You can now add multiple server credentials on the appliance. |
+| **10005:** Credentials provided on the appliance for the server are invalid. | The credentials provided on the appliance are not valid. The issue was encountered when trying the following credentials on the server: \<FriendlyNameOfCredentials>. | 1. Ensure that the credentials provided on the appliance are valid and the server is accessible using the credentials.<br/> 2. You can now add multiple server credentials on the appliance.<br/> 3. Go back to the appliance configuration manager to either provide another set of credentials or edit an existing one. (find the friendly name of the credentials tried by Azure Migrate in the possible causes).|
+| **10006:** Operating system type running on the server is not supported. | Operating system running on the server is neither Windows nor Linux. | Only Windows and Linux OS types are supported. \<GuestOSName> operating system is not supported currently. |
+| **10007:** Unable to process the discovered metadata from the server. | An error occurred while parsing the contents of the file containing the discovered metadata. | Please submit a Microsoft support case to help troubleshoot this issue. |
+| **10008:** Unable to create the file required to contain the discovered metadata on the server. | The role associated to the credentials provided on the appliance or a group policy on-premises is restricting the creation of file in the required folder. The issue was encountered when trying the following credentials on the server: <FriendlyNameOfCredentials>. | 1. Check if the credentials provided on the appliance has create file permission on the folder \<folder path/folder name> in the server.<br/> 2. If the credentials provided on the appliance do not have the required permissions, either provide another set of credentials or edit an existing one. (find the friendly name of the credentials tried by Azure Migrate in the possible causes) |
+| **10009:** Unable to write the discovered metadata in the file on the server. | The role associated to the credentials provided on the appliance or a group policy on-premises is restricting writing in the file on the server. The issue was encountered when trying the following credentials on the server: \<FriendlyNameOfCredentials>. | 1. Check if the credentials provided on the appliance has write file permission on the folder <folder path/folder name> in the server.<br/> 2. If the credentials provided on the appliance do not have the required permissions, either provide another set of credentials or edit an existing one. (find the friendly name of the credentials tried by Azure Migrate in the possible causes) |
+| **10010:** Unable to discover as the command- %CommandName; required to collect some metadata is missing on the server. | The package containing the command %CommandName; is not installed on the server. | Ensure that the package containing the command %CommandName; is installed on the server. |
+| **10011:** The credentials provided on the appliance were used to log in and log off for an interactive session. | The interactive log in and log off forces the registry keys to be unloaded in the profile of the account being used. This condition makes the keys unavailable for future use. | Use the resolution methods documented [here](https://go.microsoft.com/fwlink/?linkid=2132821). |
+| **10012:** Credentials have not been provided on the appliance for the server. | Either no credentials have been provided for the server, or you have provided domain credentials with an incorrect domain name on the appliance. | 1. Ensure that the credentials are provided on the appliance for the server and the server is accessible using the credentials. <br/> 2. You can now add multiple credentials on the appliance for servers. Go back to the appliance configuration manager to provide credentials for the server.|
+
+## Error 970: DependencyMapInsufficientPrivilegesException
+
+### Cause
+This error usually occurs for Linux servers when the credentials provided on the appliance don't have the required privileges.
+
+### Remediation
+- Ensure that you have provided either a root user account, or
+- an account that has these permissions on the /bin/netstat and /bin/ls files:
+ - CAP_DAC_READ_SEARCH
+ - CAP_SYS_PTRACE
+- To check if the user account provided on the appliance has the required privileges, perform these steps:
+1. Log in to the server where you encountered this error with the same user account mentioned in the error message.
+2. Run the following commands in a shell. You will get errors if you don't have the required privileges for agentless dependency analysis:
+
+ ````
+ ps -o pid,cmd | grep -v ]$
+ netstat -atnp | awk '{print $4,$5,$7}'
+ ````
+3. Set the required permissions on /bin/netstat and /bin/ls files by running the following commands:
+
+ ````
+ sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls
+ sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat
+ ````
+4. You can validate if the above commands assigned the required permissions to the user account or not:
+
+ ````
+    getcap /bin/ls
+    getcap /bin/netstat
+ ````
+5. Rerun the commands provided in step 2 to get a successful output.
++
+## Error 9014: HTTPGetRequestToRetrieveFileFailed
+
+### Cause
+The issue happens when the VMware discovery agent on the appliance tries to download the output file containing dependency data from the server file system through the ESXi host on which the server is hosted.
+
+### Remediation
+- You can test TCP connectivity to the ESXi host _(name provided in the error message)_ on port 443 (required to be open on ESXi hosts to pull dependency data) from the appliance by opening PowerShell on the appliance server and running the following command:
+    ````
+    Test-NetConnection -ComputerName <IP address of the ESXi host> -Port 443
+    ````
+- If the command returns successful connectivity, you can go to the Azure Migrate project > Discovery and assessment > Overview > Manage > Appliances, select the appliance name, and select **Refresh services**.
+
+## Error 9018: PowerShellNotFound
+
+### Cause
+This error usually occurs for servers running Windows Server 2008 or earlier.
+
+### Remediation
+You need to install the required PowerShell version (2.0 or later) at this location on the server: ($SYSTEMROOT)\System32\WindowsPowershell\v1.0\powershell.exe. [Learn more](https://docs.microsoft.com/powershell/scripting/windows-powershell/install/installing-windows-powershell) on how to install PowerShell in Windows Server.
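+
+To check what is currently available on the impacted server, you can run the following in a PowerShell prompt on that server (a minimal sketch):
+
+```powershell
+# Shows the installed Windows PowerShell version; it must be 2.0 or later.
+$PSVersionTable.PSVersion
+
+# Confirms that powershell.exe is present at the expected location.
+Test-Path "$env:SystemRoot\System32\WindowsPowerShell\v1.0\powershell.exe"
+```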
+
+After installing the required PowerShell version, you can verify if the error was resolved by following steps below under "Mitigation verification using VMware PowerCLI".
+
+## Error 9022: GetWMIObjectAccessDenied
+
+### Remediation
+Make sure that the user account provided on the appliance has access to the WMI namespace and subnamespaces. You can set the access by following these steps:
+1. Go to the server which is reporting this error.
+2. Search and select 'Run' from the Start menu. In the 'Run' dialog box, type wmimgmt.msc in the 'Open:' text field and press Enter.
+3. The wmimgmt console will open, where you can find "WMI Control (Local)" in the left panel. Right-click it and select 'Properties' from the menu.
+4. In the 'WMI Control (Local) Properties' dialog box, select the 'Security' tab.
+5. On the Security tab, select the 'Security' button, which opens the 'Security for ROOT' dialog box.
+6. Select the 'Advanced' button to open the 'Advanced Security Settings for Root' dialog box.
+7. Select the 'Add' button, which opens the 'Permission Entry for Root' dialog box.
+8. Click 'Select a principal' to open the 'Select Users, Computers, Service Accounts or Groups' dialog box.
+9. Select the user name(s) or group(s) you want to grant access to WMI, and click 'OK'.
+10. Ensure you grant execute permissions and select "This namespace and subnamespaces" in the 'Applies to:' drop-down.
+11. Select the 'Apply' button to save the settings and close all dialog boxes.
+
+After getting the required access, you can verify if the error was resolved by following steps below under "Mitigation verification using VMware PowerCLI".
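+
+Before running the PowerCLI verification, a quick local check can confirm that the account can query WMI. This is a sketch only; run it in a PowerShell session on the impacted server under the account provided on the appliance:
+
+```powershell
+# Should return process details without an access-denied error
+# if the account has the required WMI permissions.
+Get-WmiObject -Class Win32_Process | Select-Object -First 5 ProcessId, Name
+```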
+
+## Error 9032: InvalidRequest
+
+### Cause
+There can be multiple reasons for this issue. One reason is that the username provided in the server credentials on the appliance configuration manager contains invalid XML characters, which causes an error when parsing the SOAP request.
+
+### Remediation
+- Make sure the username of the server credentials does not contain invalid XML characters and is in the username@domain.com format, commonly known as UPN format (a quick format check is sketched after this list).
+- After editing the credentials on the appliance, you can verify if the error was resolved by following the steps below under "Mitigation verification using VMware PowerCLI".
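+
+As a rough format check (a sketch only; the username below is a placeholder, and the pattern checks the general user@domain shape rather than every UPN rule), you can test the username in PowerShell:
+
+```powershell
+# Replace with the username you entered on the appliance.
+$userName = 'discovery.user@contoso.com'
+
+# Returns True if the value looks like user@domain and contains
+# no XML-reserved characters (< > & ' ").
+$userName -match '^[^\s@<>&''"]+@[^\s@<>&''"]+\.[A-Za-z0-9-]+$'
+```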
++
+## Error 10002: ScriptExecutionTimedOutOnVm
+
+### Cause
+- This error occurs when the server is slow or unresponsive and the script executed to pull the dependency data starts timing out.
+- Once the discovery agent encounters this error on the server, the appliance does not attempt agentless dependency analysis on the server thereafter, to avoid overloading the unresponsive server.
+- Hence, you will continue to see the error until you check the issue with the server and restart the discovery service.
+
+### Remediation
+1. Log in to the server encountering this error.
+2. Run the following commands in PowerShell:
+ ````
+ Get-WMIObject win32_operatingsystem;
+ Get-WindowsFeature | Where-Object {$_.InstallState -eq 'Installed' -or ($_.InstallState -eq $null -and $_.Installed -eq 'True')};
+ Get-WmiObject Win32_Process;
+ netstat -ano -p tcp | select -Skip 4;
+ ````
+3. If the commands output the result in a few seconds, you can go to the Azure Migrate project > Discovery and assessment > Overview > Manage > Appliances, select the appliance name, and select **Refresh services** to restart the discovery service.
+4. If the commands are timing out without giving any output, then:
+- You need to figure out which processes are consuming high CPU or memory on the server (a quick check is sketched after this list).
+- You can try providing more cores/memory to that server and running the commands again.
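+
+To identify heavy consumers, a check like the following can help. This is a sketch only; run it in PowerShell on the impacted server:
+
+```powershell
+# Top 5 processes by accumulated CPU time (seconds).
+Get-Process | Sort-Object CPU -Descending | Select-Object -First 5 Name, Id, CPU
+
+# Top 5 processes by working set (memory), shown in MB.
+Get-Process | Sort-Object WorkingSet64 -Descending |
+    Select-Object -First 5 Name, Id, @{Name='WorkingSetMB'; Expression={[math]::Round($_.WorkingSet64 / 1MB)}}
+```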
+
+## Error 10005: GuestCredentialNotValid
+
+### Remediation
+- Ensure the validity of the credentials _(friendly name provided in the error)_ by selecting "Revalidate credentials" on the appliance configuration manager.
+- Ensure that you are able to log in to the impacted server using the same credentials provided on the appliance.
+- You can try using another user account (for the same domain, in case the server is domain-joined) for that server instead of the Administrator account.
+- The issue can happen when Global Catalog <-> Domain Controller communication is broken. You can check this by creating a new user account in the domain controller and providing the same in the appliance. This might also require restarting the Domain Controller.
+- After taking the remediation steps, you can verify if the error was resolved by following the steps below under "Mitigation verification using VMware PowerCLI".
+
+## Error 10012: CredentialNotProvided
+
+### Cause
+This error occurs when you have provided a domain credential with a wrong domain name on the appliance configuration manager. For example, if you have provided domain credentials with the username user@abc.com but provided the domain name as def.com, those credentials will not be attempted if the server is connected to def.com, and you will get this error message.
+
+### Remediation
+- Go to the appliance configuration manager to add a server credential or edit an existing one, as explained in the cause.
+- After taking the remediation steps, you can verify if the error was resolved by following the steps below under "Mitigation verification using VMware PowerCLI".
+
+## Mitigation verification using VMware PowerCLI
+
+After using the mitigation steps on the errors listed above, you can verify if the mitigation worked by running a few PowerCLI commands from the appliance server. If the commands succeed, the issue is resolved; otherwise, check and follow the remediation steps again.
+
+1. Run the following commands to set up PowerCLI on the appliance server:
+ ````
+ Install-Module -Name VMware.PowerCLI -AllowClobber
+ Set-PowerCLIConfiguration -InvalidCertificateAction Ignore
+ ````
+2. Connect to vCenter Server from appliance by providing the vCenter Server IP address in the command and credentials in the prompt:
+ ````
+ Connect-VIServer -Server <IPAddress of vCenter Server>
+ ````
+3. Connect to the target server from appliance by providing the server name and server credentials (as provided on appliance):
+ ````
+ $vm = get-VM <VMName>
+ $credential = Get-Credential
+ ````
+4. For agentless dependency analysis, run the following commands to see if you get a successful output:
+
+ - For Windows servers:
+
+ ````
+ Invoke-VMScript -VM $vm -ScriptText "powershell.exe 'Get-WmiObject Win32_Process'" -GuestCredential $credential
+
+ Invoke-VMScript -VM $vm -ScriptText "powershell.exe 'netstat -ano -p tcp'" -GuestCredential $credential
+ ````
+ - For Linux servers:
+ ````
+ Invoke-VMScript -VM $vm -ScriptText "ps -o pid,cmd | grep -v ]$" -GuestCredential $credential
+
+    Invoke-VMScript -VM $vm -ScriptText "netstat -atnp | awk '{print `$4,`$5,`$7}'" -GuestCredential $credential
+ ````
+5. After verifying that the mitigation worked, you can go to the Azure Migrate project > Discovery and assessment > Overview > Manage > Appliances, select the appliance name, and select **Refresh services** to start a fresh discovery cycle.
++
+## My Log Analytics workspace is not listed when trying to configure the workspace in Azure Migrate for agent-based dependency analysis
+Azure Migrate currently supports creation of OMS workspace in East US, Southeast Asia and West Europe regions. If the workspace is created outside of Azure Migrate in any other region, it currently cannot be associated with a project.
+
+## Agent-based dependency visualization in Azure Government
+
+Agent-based dependency analysis is not supported in Azure Government. Please use agentless dependency analysis _(only available for VMware servers)_.
+
+## Agent-based dependencies don't show after agent install
+
+After you've installed the dependency visualization agents on on-premises VMs, Azure Migrate typically takes 15-30 minutes to display the dependencies in the portal. If you've waited for more than 30 minutes, make sure that the Microsoft Monitoring Agent (MMA) can connect to the Log Analytics workspace.
+
+For Windows VMs:
+1. In the Control Panel, start MMA.
+2. In the **Microsoft Monitoring Agent properties** > **Azure Log Analytics (OMS)**, make sure that the **Status** for the workspace is green.
+3. If the status isn't green, try removing the workspace and adding it again to MMA.
+
+ ![MMA status](./media/troubleshoot-assessment/mma-properties.png)
+
+For Linux VMs, make sure that the installation commands for MMA and the dependency agent succeeded. Refer to more troubleshooting guidance [here](../azure-monitor/vm/service-map.md#post-installation-issues).
+
+## Supported operating systems for agent-based dependency analysis
+
+- **MMS agent**: Review the supported [Windows](../azure-monitor/agents/agents-overview.md#supported-operating-systems), and [Linux](../azure-monitor/agents/agents-overview.md#supported-operating-systems) operating systems.
+- **Dependency agent**: the supported [Windows and Linux](../azure-monitor/vm/vminsights-enable-overview.md#supported-operating-systems) operating systems.
+
+## Visualize dependencies for > 1 hour with agent-based dependency analysis
+
+With agent-based dependency analysis, although Azure Migrate allows you to go back to a particular date in the last month, the maximum duration for which you can visualize the dependencies is one hour. For example, you can use the time duration functionality in the dependency map to view dependencies for yesterday, but you can view them for a one-hour period only. However, you can use Azure Monitor logs to [query the dependency data](./how-to-create-group-machine-dependencies.md) over a longer duration.
+
+## Visualized dependencies for > 10 servers with agent-based dependency analysis
+
+In Azure Migrate, with agent-based dependency analysis, you can [visualize dependencies for groups](./how-to-create-a-group.md#refine-a-group-with-dependency-mapping) with up to 10 VMs. For larger groups, we recommend that you split the VMs into smaller groups to visualize dependencies.
+
+## Servers show "Install agent" for agent-based dependency analysis
+
+After migrating servers with dependency visualization enabled to Azure, servers might show "Install agent" action instead of "View dependencies" due to the following behavior:
+
+- After migration to Azure, on-premises servers are turned off and equivalent VMs are spun up in Azure. These servers acquire a different MAC address.
+- Servers might also have a different IP address, based on whether you've retained the on-premises IP address or not.
+- If both MAC and IP addresses are different from on-premises, Azure Migrate doesn't associate the on-premises servers with any Service Map dependency data. In this case, it will show the option to install the agent rather than to view dependencies.
+- After a test migration to Azure, on-premises servers remain turned on as expected. Equivalent servers spun up in Azure acquire different MAC address and might acquire different IP addresses. Unless you block outgoing Azure Monitor log traffic from these servers, Azure Migrate won't associate the on-premises servers with any Service Map dependency data, and thus will show the option to install agents, rather than to view dependencies.
+
+## Capture network traffic
+
+Collect network traffic logs as follows:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Press F12 to start Developer Tools. If needed, clear the **Clear entries on navigation** setting.
+3. Select the **Network** tab, and start capturing network traffic:
+ - In Chrome, select **Preserve log**. The recording should start automatically. A red circle indicates that traffic is being captured. If the red circle doesn't appear, select the black circle to start.
+ - In Microsoft Edge and Internet Explorer, recording should start automatically. If it doesn't, select the green play button.
+4. Try to reproduce the error.
+5. After you've encountered the error while recording, stop recording, and save a copy of the recorded activity:
+ - In Chrome, right-click and select **Save as HAR with content**. This action compresses and exports the logs as a HTTP Archive (har) file.
+ - In Microsoft Edge or Internet Explorer, select the **Export captured traffic** option. This action compresses and exports the log.
+6. Select the **Console** tab to check for any warnings or errors. To save the console log:
+ - In Chrome, right-click anywhere in the console log. Select **Save as**, to export, and zip the log.
+ - In Microsoft Edge or Internet Explorer, right-click the errors and select **Copy all**.
+7. Close Developer Tools.
++
+## Next steps
+
+[Create](how-to-create-assessment.md) or [customize](how-to-modify-assessment.md) an assessment.
migrate Troubleshoot Discovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/troubleshoot-discovery.md
+
+ Title: Troubleshoot the ongoing server discovery, software inventory and SQL discovery
+description: Get help with server discovery, software inventory and SQL discovery
++
+ms.
+ Last updated : 07/01/2020++
+# Troubleshoot the ongoing server discovery, software inventory and SQL discovery
+
+This article helps you troubleshoot issues with ongoing server discovery, software inventory and discovery of SQL Server instances and databases.
+
+## Discovered servers not showing in portal
+
+If the discovery state is "Discovery in progress", but you don't yet see the servers in the portal, wait a few minutes:
+
+- It takes around 15 minutes for discovery of servers running on a vCenter Server.
+- It takes around two minutes for each Hyper-V host added on the appliance to discover the servers running on the host.
+- It takes around a minute for discovery of each server added on the appliance for physical servers.
+
+If you wait and the state doesn't change, select **Refresh** on the **Servers** tab. This should show the count of the discovered servers in Azure Migrate: Discovery and assessment and Azure Migrate: Server Migration.
+
+If this doesn't work and you're discovering VMware servers:
+
+- Verify that the vCenter account you specified has permissions set correctly, with access to at least one server.
+- Azure Migrate can't discover servers on VMware if the vCenter account has access granted at vCenter VM folder level. [Learn more](set-discovery-scope.md) about scoping discovery.
+
+## Server data not updating in portal
+
+If discovered servers don't appear in the portal or if the server data is outdated, wait a few minutes. It takes up to 30 minutes for changes in discovered server configuration data to appear in the portal. It may take a few hours for changes in software inventory data to appear. If there's no data after this time, try refreshing, as follows:
+
+1. In **Windows, Linux and SQL Servers** > **Azure Migrate: Discovery and assessment**, select **Overview**.
+2. Under **Manage**, select **Appliances**.
+3. Select **Refresh services**.
+4. Wait for the refresh operation to complete. You should now see up-to-date information.
+
+## Deleted servers appear in portal
+
+If you delete servers and they still appear in the portal, wait 30 minutes. If they still appear, refresh as described above.
+
+## I imported a CSV but I see "Discovery is in progress"
+
+This status appears if your CSV upload failed due to a validation failure. Try to import the CSV again. You can download the error report of the previous upload and follow the remediation guidance in the file to fix the errors. The error report can be downloaded from the 'Import Details' section on the 'Discover servers' page.
+
+## Do not see software inventory details even after updating guest credentials
+
+The software inventory discovery runs once every 24 hours. If you would like to see the details immediately, refresh as follows. This may take a few minutes depending on the number of servers discovered.
+
+1. In **Windows, Linux and SQL Servers** > **Azure Migrate: D